A unifying review of deep and shallow anomaly detection
Deep learning approaches to anomaly detection (AD) have recently improved the state of
the art in detection performance on complex data sets, such as large collections of images or …
Toward causal representation learning
The two fields of machine learning and graphical causality arose and have been developed
separately. However, there is now cross-pollination and increasing interest in both fields to …
Causality inspired representation learning for domain generalization
Domain generalization (DG) is essentially an out-of-distribution problem, aiming to
generalize the knowledge learned from multiple source domains to an unseen target …
Self-supervised learning with data augmentations provably isolates content from style
Self-supervised representation learning has shown remarkable success in a number of
domains. A common practice is to perform data augmentation via hand-crafted …
Weakly supervised causal representation learning
Learning high-level causal representations together with a causal model from unstructured
low-level data such as pixels is impossible from observational data alone. We prove under …
Nonparametric identifiability of causal representations from unknown interventions
We study causal representation learning, the task of inferring latent causal variables and
their causal relations from high-dimensional functions (“mixtures”) of the variables. Prior …
Weakly-supervised disentanglement without compromises
Intelligent agents should be able to learn useful representations by observing changes in
their environment. We model such observations as pairs of non-i.i.d. images sharing at least …
Towards nonlinear disentanglement in natural data with temporal sparse coding
We construct an unsupervised learning model that achieves nonlinear disentanglement of
underlying factors of variation in naturalistic videos. Previous work suggests that …
Learning causal semantic representation for out-of-distribution prediction
Conventional supervised learning methods, especially deep ones, are found to be sensitive
to out-of-distribution (OOD) examples, largely because the learned representation mixes the …
Not all neuro-symbolic concepts are created equal: Analysis and mitigation of reasoning shortcuts
Neuro-Symbolic (NeSy) predictive models hold the promise of improved
compliance with given constraints, systematic generalization, and interpretability, as they …