Self-supervised learning with data augmentations provably isolates content from style

J Von Kügelgen, Y Sharma, L Gresele… - Advances in neural …, 2021 - proceedings.neurips.cc
Self-supervised representation learning has shown remarkable success in a number of
domains. A common practice is to perform data augmentation via hand-crafted …

Generalize then adapt: Source-free domain adaptive semantic segmentation

JN Kundu, A Kulkarni, A Singh… - Proceedings of the …, 2021 - openaccess.thecvf.com
Unsupervised domain adaptation (DA) has gained substantial interest in semantic
segmentation. However, almost all prior work assumes concurrent access to both labeled …

The effects of regularization and data augmentation are class dependent

R Balestriero, L Bottou… - Advances in Neural …, 2022 - proceedings.neurips.cc
Regularization is a fundamental technique to prevent over-fitting and to improve
generalization performance by constraining a model's complexity. Current Deep Networks …

Causality-inspired single-source domain generalization for medical image segmentation

C Ouyang, C Chen, S Li, Z Li, C Qin… - … on Medical Imaging, 2022 - ieeexplore.ieee.org
Deep learning models usually suffer from the domain shift issue, where models trained on
one source domain do not generalize well to other unseen domains. In this work, we …

Gradient matching for domain generalization

Y Shi, J Seely, PHS Torr, N Siddharth… - arXiv preprint arXiv …, 2021 - arxiv.org

A review of the role of causality in developing trustworthy AI systems

N Ganguly, D Fazlija, M Badar, M Fisichella… - arXiv preprint arXiv …, 2023 - arxiv.org
State-of-the-art AI models largely lack an understanding of the cause-effect relationship that
governs human understanding of the real world. Consequently, these models do not …

Out-of-domain robustness via targeted augmentations

I Gao, S Sagawa, PW Koh… - International …, 2023 - proceedings.mlr.press
Models trained on one set of domains often suffer performance drops on unseen
domains, e.g., when wildlife monitoring models are deployed in new camera locations. In this …

A causal lens for controllable text generation

Z Hu, LE Li - Advances in Neural Information Processing …, 2021 - proceedings.neurips.cc
Controllable text generation concerns two fundamental tasks with wide applications, namely
generating text of given attributes (i.e., attribute-conditional generation) and minimally editing …

Harnessing out-of-distribution examples via augmenting content and style

Z Huang, X Xia, L Shen, B Han, M Gong… - arXiv preprint arXiv …, 2022 - arxiv.org
Machine learning models are vulnerable to Out-Of-Distribution (OOD) examples, and this
problem has drawn much attention. However, current methods lack a full understanding of …

Understanding hessian alignment for domain generalization

S Hemati, G Zhang, A Estiri… - Proceedings of the IEEE …, 2023 - openaccess.thecvf.com
Out-of-distribution (OOD) generalization is a critical ability for deep learning models
in many real-world scenarios, including healthcare and autonomous vehicles. Recently …