A unifying review of deep and shallow anomaly detection

L Ruff, JR Kauffmann, RA Vandermeulen… - Proceedings of the …, 2021 - ieeexplore.ieee.org
Deep learning approaches to anomaly detection (AD) have recently improved the state of
the art in detection performance on complex data sets, such as large collections of images or …
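
To make the "shallow" end of the spectrum this review covers concrete, here is a minimal, self-contained sketch (not from the paper) of reconstruction-error anomaly scoring with PCA; deep variants replace the linear projection with an autoencoder. All names, dimensions, and data below are illustrative.

# Hypothetical sketch: shallow anomaly detection via PCA reconstruction error.
import numpy as np

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 10))                     # nominal data only
X_test = np.vstack([rng.normal(size=(5, 10)),
                    rng.normal(loc=5.0, size=(5, 10))])  # last 5 are anomalous

mu = X_train.mean(axis=0)
_, _, Vt = np.linalg.svd(X_train - mu, full_matrices=False)
W = Vt[:3].T                                             # top-3 principal directions

def score(x):
    """Anomaly score = distance between x and its low-rank reconstruction."""
    z = (x - mu) @ W
    x_hat = mu + z @ W.T
    return np.linalg.norm(x - x_hat, axis=-1)

print(score(X_test))  # anomalies should receive noticeably larger scores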

Toward causal representation learning

B Schölkopf, F Locatello, S Bauer, NR Ke… - Proceedings of the …, 2021 - ieeexplore.ieee.org
The two fields of machine learning and graphical causality arose and have developed
separately. However, there is now cross-pollination and increasing interest in both fields to …

Causality inspired representation learning for domain generalization

F Lv, J Liang, S Li, B Zang, CH Liu… - Proceedings of the …, 2022 - openaccess.thecvf.com
Domain generalization (DG) is essentially an out-of-distribution problem, aiming to
generalize the knowledge learned from multiple source domains to an unseen target …

Self-supervised learning with data augmentations provably isolates content from style

J von Kügelgen, Y Sharma, L Gresele… - Advances in neural …, 2021 - proceedings.neurips.cc
Self-supervised representation learning has shown remarkable success in a number of
domains. A common practice is to perform data augmentation via hand-crafted …
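
A minimal sketch of the augmentation-based contrastive setup this snippet alludes to, assuming a SimCLR-style InfoNCE objective; the additive-noise "augmentation" and the batch are placeholders, and the encoder is omitted, so this only illustrates the loss.

# Hypothetical sketch: two augmented views share content; InfoNCE pulls
# their embeddings together against in-batch negatives.
import numpy as np

rng = np.random.default_rng(0)

def augment(x):
    # stand-in for hand-crafted augmentations (crop, color jitter, ...):
    # perturbs "style" while leaving "content" intact
    return x + 0.1 * rng.normal(size=x.shape)

def info_nce(z1, z2, tau=0.1):
    """Contrastive loss over a batch; matching views sit on the diagonal."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / tau                   # (batch, batch) similarities
    labels = np.arange(len(z1))                # positives on the diagonal
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[labels, labels].mean()

x = rng.normal(size=(8, 32))                   # a batch of raw inputs
print(info_nce(augment(x), augment(x)))        # encoder omitted for brevity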

Weakly supervised causal representation learning

J Brehmer, P De Haan, P Lippe… - Advances in Neural …, 2022 - proceedings.neurips.cc
Learning high-level causal representations together with a causal model from unstructured
low-level data such as pixels is impossible from observational data alone. We prove under …
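
A hypothetical illustration of the kind of weak supervision such results rely on: paired observations before and after an intervention on one latent causal variable, with only mixtures of the latents observed. The mixing matrix, graph, and coefficients below are invented for the sketch.

# Hypothetical sketch: generate (x, x_tilde) pairs from a 3-variable SCM
# with edge z0 -> z1, intervening on one randomly chosen node.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(16, 3))                   # unknown "pixel" mixing

def sample_pair():
    z = rng.normal(size=3)                     # latent causal variables
    z[1] = 0.8 * z[0] + rng.normal()           # causal edge z0 -> z1
    z_tilde = z.copy()
    i = rng.integers(3)                        # intervene on a random node
    z_tilde[i] = rng.normal()                  # set it to a fresh value
    if i == 0:                                 # descendants are re-generated
        z_tilde[1] = 0.8 * z_tilde[0] + rng.normal()
    return A @ z, A @ z_tilde                  # only mixtures are observed

x, x_tilde = sample_pair()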

Nonparametric identifiability of causal representations from unknown interventions

J von Kügelgen, M Besserve… - Advances in …, 2024 - proceedings.neurips.cc
We study causal representation learning, the task of inferring latent causal variables and
their causal relations from high-dimensional functions (“mixtures”) of the variables. Prior …
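
A compact way to state the setting in the snippet (notation ours, sketching the standard causal representation learning formalization, not copied from the paper): latent causal variables follow a structural causal model and only a mixture of them is observed,

\[
x = f(z), \qquad z_i := g_i(\mathrm{pa}(z_i), u_i), \quad i = 1, \dots, n,
\]

and identifiability asks when $f$ and the $z_i$ are recoverable, up to permutation and elementwise reparametrization of the latents, from the distributions of $x$ under unknown single-node interventions.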

Weakly-supervised disentanglement without compromises

F Locatello, B Poole, G Rätsch… - International …, 2020 - proceedings.mlr.press
Intelligent agents should be able to learn useful representations by observing changes in
their environment. We model such observations as pairs of non-i.i.d. images sharing at least …
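
A minimal sketch of this observation model, assuming each pair is generated by resampling all but a shared subset of latent factors; the frozen stand-in "renderer" and dimensions are ours, not the paper's.

# Hypothetical sketch: pairs of "images" agreeing on a random factor subset.
import numpy as np

rng = np.random.default_rng(0)
D = rng.normal(size=(64, 5))                   # frozen stand-in renderer

def sample_pair(n_factors=5, n_shared=3):
    z1 = rng.normal(size=n_factors)
    z2 = rng.normal(size=n_factors)
    shared = rng.choice(n_factors, size=n_shared, replace=False)
    z2[shared] = z1[shared]                    # the pair agrees on this subset
    return np.tanh(D @ z1), np.tanh(D @ z2)    # two non-i.i.d. observations

x1, x2 = sample_pair()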

Towards nonlinear disentanglement in natural data with temporal sparse coding

D Klindt, L Schott, Y Sharma, I Ustyuzhaninov… - arXiv preprint arXiv …, 2020 - arxiv.org
We construct an unsupervised learning model that achieves nonlinear disentanglement of
underlying factors of variation in naturalistic videos. Previous work suggests that …
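
A hypothetical sketch of the temporal-sparsity idea behind such models: an L1 penalty on latent differences across frames (equivalently, a Laplace transition prior), so that only a few factors change between consecutive frames. The latent trajectories below are synthetic.

# Hypothetical sketch: L1 temporal-sparsity penalty on latent trajectories.
import numpy as np

rng = np.random.default_rng(0)

def temporal_sparsity(z):
    """Sum of |z_t+1 - z_t| over time; small when few factors change."""
    return np.abs(np.diff(z, axis=0)).sum()

z_smooth = np.cumsum(rng.normal(scale=0.01, size=(100, 8)), axis=0)  # slow drift
z_jumpy = rng.normal(size=(100, 8))                                  # iid frames
print(temporal_sparsity(z_smooth), temporal_sparsity(z_jumpy))       # small vs large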

Learning causal semantic representation for out-of-distribution prediction

C Liu, X Sun, J Wang, H Tang, T Li… - Advances in …, 2021 - proceedings.neurips.cc
Conventional supervised learning methods, especially deep ones, are found to be sensitive
to out-of-distribution (OOD) examples, largely because the learned representation mixes the …

Not all neuro-symbolic concepts are created equal: Analysis and mitigation of reasoning shortcuts

E Marconato, S Teso, A Vergari… - Advances in Neural …, 2023 - proceedings.neurips.cc
Neuro-Symbolic (NeSy) predictive models hold the promise of improved
compliance with given constraints, systematic generalization, and interpretability, as they …
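
A toy instance of a reasoning shortcut (our construction, not the paper's benchmark): for the task label = a XOR b, the negated concepts satisfy the symbolic constraint on every example, so label supervision alone cannot pin down the intended concept semantics.

# Hypothetical sketch: two concept assignments, one with the wrong meaning,
# both reproduce the labels exactly under the XOR knowledge.
import numpy as np

bits = np.array([(a, b) for a in (0, 1) for b in (0, 1)])
labels = bits[:, 0] ^ bits[:, 1]               # ground-truth task labels

intended = bits                                # concepts = (a, b)
shortcut = 1 - bits                            # concepts = (not a, not b)

for name, c in [("intended", intended), ("shortcut", shortcut)]:
    print(name, np.array_equal(c[:, 0] ^ c[:, 1], labels))  # both print True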