Nonparametric identifiability of causal representations from unknown interventions

J von Kügelgen, M Besserve… - Advances in …, 2024 - proceedings.neurips.cc
We study causal representation learning, the task of inferring latent causal variables and
their causal relations from high-dimensional functions (“mixtures”) of the variables. Prior …

Causal representation learning through higher-level information extraction

F Silva, HP Oliveira, T Pereira - ACM Computing Surveys, 2024 - dl.acm.org
The large gap between the generalization level of state-of-the-art machine learning and
human learning systems calls for the development of artificial intelligence (AI) models that …

Nonparametric partial disentanglement via mechanism sparsity: Sparse actions, interventions and sparse temporal dependencies

S Lachapelle, PR López, Y Sharma, K Everett… - arXiv preprint arXiv …, 2024 - arxiv.org
This work introduces a novel principle for disentanglement we call mechanism sparsity
regularization, which applies when the latent factors of interest depend sparsely on …

Towards interpretable Cryo-EM: disentangling latent spaces of molecular conformations

DA Klindt, A Hyvärinen, A Levy, N Miolane… - Frontiers in Molecular …, 2024 - frontiersin.org
Molecules are essential building blocks of life and their different conformations (i.e., shapes)
crucially determine the functional role that they play in living organisms. Cryogenic Electron …

Score-based causal representation learning: Linear and general transformations

B Varıcı, E Acartürk, K Shanmugam, A Kumar… - arXiv preprint arXiv …, 2024 - arxiv.org
This paper addresses intervention-based causal representation learning (CRL) under a
general nonparametric latent causal model and an unknown transformation that maps the …

A sparsity principle for partially observable causal representation learning

D Xu, D Yao, S Lachapelle, P Taslakian… - arXiv preprint arXiv …, 2024 - arxiv.org
Causal representation learning aims at identifying high-level causal variables from
perceptual data. Most methods assume that all latent causal variables are captured in the …

Identifiability guarantees for causal disentanglement from purely observational data

R Welch, J Zhang, C Uhler - arXiv preprint arXiv:2410.23620, 2024 - arxiv.org
Causal disentanglement aims to learn about latent causal factors behind data, holding the
promise to augment existing representation learning methods in terms of interpretability and …

Interaction Asymmetry: A General Principle for Learning Composable Abstractions

J Brady, J von Kügelgen, S Lachapelle… - arXiv preprint arXiv …, 2024 - arxiv.org
Learning disentangled representations of concepts and re-composing them in unseen ways
is crucial for generalizing to out-of-domain situations. However, the underlying properties of …

Unsupervised discovery of the shared and private geometry in multi-view data

S Koukuntla, JB Julian, JC Kaminsky… - arXiv preprint arXiv …, 2024 - arxiv.org
Modern applications often leverage multiple views of a subject of study. Within
neuroscience, there is growing interest in large-scale simultaneous recordings across …

Revealing Multimodal Contrastive Representation Learning through Latent Partial Causal Models

Y Liu, Z Zhang, D Gong, B Huang, M Gong… - arXiv preprint arXiv …, 2024 - arxiv.org
Multimodal contrastive representation learning methods have proven successful across a
range of domains, partly due to their ability to generate meaningful shared representations …