Nonlinear independent component analysis for principled disentanglement in unsupervised deep learning

A Hyvärinen, I Khemakhem, H Morioka - Patterns, 2023 - cell.com
A central problem in unsupervised deep learning is how to find useful representations of
high-dimensional data, sometimes called "disentanglement." Most approaches are heuristic …
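For context, the nonlinear ICA setting behind this paper is usually written as a latent-variable model; a minimal sketch in standard notation (the symbols are illustrative, not drawn from the snippet):

    x = f(s),   p(s) = \prod_{i=1}^{n} p_i(s_i),

where f is an unknown invertible nonlinear mixing of independent components s_i. The identifiability question is whether f and s can be recovered from observations of x alone; this line of work shows that extra structure, such as an auxiliary variable u with conditionally independent sources p(s | u), is needed for the answer to be yes.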

Decoding the brain: From neural representations to mechanistic models

MW Mathis, AP Rotondo, EF Chang, AS Tolias… - Cell, 2024 - cell.com
A central principle in neuroscience is that neurons within the brain act in concert to produce
perception, cognition, and adaptive behavior. Neurons are organized into specialized brain …

Learnable latent embeddings for joint behavioural and neural analysis

S Schneider, JH Lee, MW Mathis - Nature, 2023 - nature.com
Mapping behavioural actions to neural activity is a fundamental goal of neuroscience. As our
ability to record large neural and behavioural data increases, there is growing interest in …

Self-supervised learning with data augmentations provably isolates content from style

J Von Kügelgen, Y Sharma, L Gresele… - Advances in neural …, 2021 - proceedings.neurips.cc
Self-supervised representation learning has shown remarkable success in a number of
domains. A common practice is to perform data augmentation via hand-crafted …
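A minimal sketch of the content/style split this paper formalizes (notation assumed here, not taken from the snippet): the latent factors are partitioned as z = (c, s) into content c and style s, and an augmentation re-samples only the style,

    x = g(c, s),   \tilde{x} = g(c, \tilde{s}),   \tilde{s} \sim p(\tilde{s} | s),

so a representation trained to align the pair (x, \tilde{x}) can, under the paper's assumptions, provably isolate the shared content c.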

Interventional causal representation learning

K Ahuja, D Mahajan, Y Wang… - … conference on machine …, 2023 - proceedings.mlr.press
Causal representation learning seeks to extract high-level latent factors from low-level
sensory data. Most existing methods rely on observational data and structural assumptions …
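To make the observational-versus-interventional contrast concrete, a rough sketch (symbols assumed): with latents z generating observations x = g(z), observational methods only see samples from p(x), whereas interventional data additionally provides samples from distributions of the form

    p(x | do(z_i)),

i.e. data generated after intervening on a latent factor, which is the extra leverage used here for identifiability.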

Contrastive learning inverts the data generating process

RS Zimmermann, Y Sharma… - International …, 2021 - proceedings.mlr.press
Contrastive learning has recently seen tremendous success in self-supervised learning. So
far, however, it is largely unclear why the learned representations generalize so effectively to …
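For reference, the contrastive objective analyzed in this line of work is the InfoNCE loss; a standard form (the notation below is generic, not copied from the paper):

    L = -E[ log ( exp(sim(z, z^+)/\tau) / \sum_j exp(sim(z, z_j)/\tau) ) ],

where z = h(x) is the encoder output, z^+ comes from a positive pair, \tau is a temperature, and the negatives z_j come from other samples. The paper's result is that, when the assumed latent distribution matches the data-generating process, minimizing this loss recovers the true latents up to simple transformations.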

Challenging common assumptions in the unsupervised learning of disentangled representations

F Locatello, S Bauer, M Lucic… - international …, 2019 - proceedings.mlr.press
The key idea behind the unsupervised learning of disentangled representations is that real-
world data is generated by a few explanatory factors of variation which can be recovered by …
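The generative assumption referenced here is typically written as (notation assumed):

    z \sim p(z) = \prod_i p(z_i),   x = g(z),

and the paper's central theoretical result is that, without inductive biases or some form of supervision, the factors z_i are not identifiable from the distribution of x alone: infinitely many entangled reparameterizations induce exactly the same observed distribution.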

Variational autoencoders and nonlinear ICA: A unifying framework

I Khemakhem, D Kingma, R Monti… - International …, 2020 - proceedings.mlr.press
The framework of variational autoencoders allows us to efficiently learn deep latent-variable
models, such that the model's marginal distribution over observed variables fits the data …
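As a reminder of the objects involved, a VAE is trained by maximizing the evidence lower bound (ELBO) on the marginal likelihood; a standard form (notation assumed):

    log p_\theta(x) >= E_{q_\phi(z|x)}[ log p_\theta(x|z) ] - KL( q_\phi(z|x) || p_\theta(z) ),

and the unifying move in this framework is to condition the prior on an auxiliary observed variable u, giving p_\theta(z | u), which ties VAE training to identifiable nonlinear ICA.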

Identifiability guarantees for causal disentanglement from soft interventions

J Zhang, K Greenewald, C Squires… - Advances in …, 2024 - proceedings.neurips.cc
Causal disentanglement aims to uncover a representation of data using latent variables that
are interrelated through a causal model. Such a representation is identifiable if the latent …
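A rough sketch of the setting (symbols assumed): the latents follow a structural causal model and generate the observations,

    z_i = f_i( pa(z_i), \varepsilon_i ),   x = g(z),

where a soft intervention replaces a mechanism f_i with a modified \tilde{f}_i while keeping the parents pa(z_i) in place; the paper characterizes when such interventional regimes make the latent causal variables identifiable.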

The emergence of reproducibility and consistency in diffusion models

H Zhang, J Zhou, Y Lu, M Guo, P Wang… - Forty-first International …, 2024 - openreview.net
In this work, we investigate an intriguing and prevalent phenomenon of diffusion models
which we term "consistent model reproducibility": given the same starting noise input and …
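The phenomenon can be stated compactly (notation assumed, not taken from the snippet): writing \Phi_\theta for a deterministic sampler that maps initial noise to a generated sample, the observation is that independently trained models map the same noise to nearly the same output,

    \Phi_{\theta_1}(x_T) \approx \Phi_{\theta_2}(x_T)   for the same x_T \sim N(0, I),

where \theta_1 and \theta_2 are separate training runs on the same data.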