Trustworthy clinical AI solutions: a unified review of uncertainty quantification in deep learning models for medical image analysis

B Lambert, F Forbes, S Doyle, H Dehaene… - Artificial Intelligence in …, 2024 - Elsevier
The full acceptance of Deep Learning (DL) models in the clinical field is rather low with
respect to the quantity of high-performing solutions reported in the literature. End users are …

Generalizing to unseen domains: A survey on domain generalization

J Wang, C Lan, C Liu, Y Ouyang, T Qin… - IEEE Transactions on …, 2022 - ieeexplore.ieee.org
Machine learning systems generally assume that the training and testing distributions are
the same. To this end, a key requirement is to develop models that can generalize to unseen …

Causal inference in natural language processing: Estimation, prediction, interpretation and beyond

A Feder, KA Keith, E Manzoor, R Pryzant… - Transactions of the …, 2022 - direct.mit.edu
A fundamental goal of scientific research is to learn about causal relationships. However,
despite its critical role in the life and social sciences, causality has not had the same …

Fishr: Invariant gradient variances for out-of-distribution generalization

A Ramé, C Dancette, M Cord - International Conference on …, 2022 - proceedings.mlr.press
Learning robust models that generalize well under changes in the data distribution is critical
for real-world applications. To this end, there has been a growing surge of interest to learn …

Nonparametric identifiability of causal representations from unknown interventions

J von Kügelgen, M Besserve… - Advances in …, 2023 - proceedings.neurips.cc
We study causal representation learning, the task of inferring latent causal variables and
their causal relations from high-dimensional functions (“mixtures”) of the variables. Prior …

Warm: On the benefits of weight averaged reward models

A Ramé, N Vieillard, L Hussenot, R Dadashi… - arXiv preprint arXiv…, 2024 - arxiv.org
Aligning large language models (LLMs) with human preferences through reinforcement
learning (RLHF) can lead to reward hacking, where LLMs exploit failures in the reward …

A survey on evaluation of out-of-distribution generalization

H Yu, J Liu, X Zhang, J Wu, P Cui - arXiv preprint arXiv:2403.01874, 2024 - arxiv.org
Machine learning models, while progressively advanced, rely heavily on the IID assumption,
which is often unfulfilled in practice due to inevitable distribution shifts. This renders them …

On the paradox of learning to reason from data

H Zhang, LH Li, T Meng, KW Chang… - arXiv preprint arXiv…, 2022 - arxiv.org
Logical reasoning is needed in a wide range of NLP tasks. Can a BERT model be trained
end-to-end to solve logical reasoning problems presented in natural language? We attempt …

Uncertainty quantification with pre-trained language models: A large-scale empirical analysis

Y Xiao, PP Liang, U Bhatt, W Neiswanger… - arXiv preprint arXiv…, 2022 - arxiv.org
Pre-trained language models (PLMs) have gained increasing popularity due to their
compelling prediction performance in diverse natural language processing (NLP) tasks …

Assaying out-of-distribution generalization in transfer learning

F Wenzel, A Dittadi, P Gehler… - Advances in …, 2022 - proceedings.neurips.cc
Since out-of-distribution generalization is a generally ill-posed problem, various proxy
targets (eg, calibration, adversarial robustness, algorithmic corruptions, invariance across …