Nonlinear independent component analysis for principled disentanglement in unsupervised deep learning
A central problem in unsupervised deep learning is how to find useful representations of
high-dimensional data, sometimes called "disentanglement." Most approaches are heuristic …
Decoding the brain: From neural representations to mechanistic models
A central principle in neuroscience is that neurons within the brain act in concert to produce
perception, cognition, and adaptive behavior. Neurons are organized into specialized brain …
Learnable latent embeddings for joint behavioural and neural analysis
Mapping behavioural actions to neural activity is a fundamental goal of neuroscience. As our
ability to record large neural and behavioural data increases, there is growing interest in …
Self-supervised learning with data augmentations provably isolates content from style
Self-supervised representation learning has shown remarkable success in a number of
domains. A common practice is to perform data augmentation via hand-crafted …
Interventional causal representation learning
Causal representation learning seeks to extract high-level latent factors from low-level
sensory data. Most existing methods rely on observational data and structural assumptions …
Contrastive learning inverts the data generating process
Contrastive learning has recently seen tremendous success in self-supervised learning. So
far, however, it is largely unclear why the learned representations generalize so effectively to …
Challenging common assumptions in the unsupervised learning of disentangled representations
The key idea behind the unsupervised learning of disentangled representations is that real-
world data is generated by a few explanatory factors of variation which can be recovered by …
Variational autoencoders and nonlinear ICA: A unifying framework
The framework of variational autoencoders allows us to efficiently learn deep latent-variable
models, such that the model's marginal distribution over observed variables fits the data …
Identifiability guarantees for causal disentanglement from soft interventions
Causal disentanglement aims to uncover a representation of data using latent variables that
are interrelated through a causal model. Such a representation is identifiable if the latent …
The emergence of reproducibility and consistency in diffusion models
In this work, we investigate an intriguing and prevalent phenomenon of diffusion models
which we term "consistent model reproducibility": given the same starting noise input and …