D'ya like DAGs? A survey on structure learning and causal discovery

MJ Vowels, NC Camgoz, R Bowden - ACM Computing Surveys, 2022 - dl.acm.org
Causal reasoning is a crucial part of science and human intelligence. In order to discover
causal relationships from data, we need structure discovery methods. We provide a review …

Recent advances in autoencoder-based representation learning

M Tschannen, O Bachem, M Lucic - arXiv preprint arXiv:1812.05069, 2018 - arxiv.org
Learning useful representations with little or no supervision is a key challenge in artificial
intelligence. We provide an in-depth review of recent advances in representation learning …

Toward transparent AI: A survey on interpreting the inner structures of deep neural networks

T Räuker, A Ho, S Casper… - 2023 IEEE Conference …, 2023 - ieeexplore.ieee.org
The last decade of machine learning has seen drastic increases in scale and capabilities.
Deep neural networks (DNNs) are increasingly being deployed in the real world. However …

Self-supervised learning with data augmentations provably isolates content from style

J Von Kügelgen, Y Sharma, L Gresele… - Advances in Neural …, 2021 - proceedings.neurips.cc
Self-supervised representation learning has shown remarkable success in a number of
domains. A common practice is to perform data augmentation via hand-crafted …

Disentangled representation learning

X Wang, H Chen, Z Wu, W Zhu - IEEE Transactions on …, 2024 - ieeexplore.ieee.org
Disentangled Representation Learning (DRL) aims to learn a model capable of identifying
and disentangling the underlying factors hidden in the observable data in representation …

Factorizing knowledge in neural networks

X Yang, J Ye, X Wang - European Conference on Computer Vision, 2022 - Springer
In this paper, we explore a novel and ambitious knowledge-transfer task, termed Knowledge
Factorization (KF). The core idea of KF lies in the modularization and assemblability of …

Challenging common assumptions in the unsupervised learning of disentangled representations

F Locatello, S Bauer, M Lucic… - International …, 2019 - proceedings.mlr.press
The key idea behind the unsupervised learning of disentangled representations is that real-
world data is generated by a few explanatory factors of variation which can be recovered by …

One explanation does not fit all: A toolkit and taxonomy of AI explainability techniques

V Arya, RKE Bellamy, PY Chen, A Dhurandhar… - arXiv preprint arXiv …, 2019 - arxiv.org
As artificial intelligence and machine learning algorithms make further inroads into society,
calls are increasing from multiple stakeholders for these algorithms to explain their outputs …

Artificial intelligence and black-box medical decisions: Accuracy versus explainability

AJ London - Hastings Center Report, 2019 - Wiley Online Library
Although decision‐making algorithms are not new to medicine, the availability of vast stores
of medical data, gains in computing power, and breakthroughs in machine learning are …

Disentangling by factorising

H Kim, A Mnih - International Conference on Machine …, 2018 - proceedings.mlr.press
We define and address the problem of unsupervised learning of disentangled
representations on data generated from independent factors of variation. We propose …