Self-supervised learning with data augmentations provably isolates content from style
Self-supervised representation learning has shown remarkable success in a number of
domains. A common practice is to perform data augmentation via hand-crafted …
Generalize then adapt: Source-free domain adaptive semantic segmentation
Unsupervised domain adaptation (DA) has gained substantial interest in semantic
segmentation. However, almost all prior works assume concurrent access to both labeled …
The effects of regularization and data augmentation are class dependent
Regularization is a fundamental technique to prevent over-fitting and to improve
generalization performance by constraining a model's complexity. Current Deep Networks …
Causality-inspired single-source domain generalization for medical image segmentation
Deep learning models usually suffer from the domain shift issue, where models trained on
one source domain do not generalize well to other unseen domains. In this work, we …
Gradient matching for domain generalization
Y Shi, J Seely, PHS Torr, N Siddharth… - arXiv preprint
… trustworthy AI systems
State-of-the-art AI models largely lack an understanding of the cause-effect relationship that
governs human understanding of the real world. Consequently, these models do not …
Out-of-domain robustness via targeted augmentations
Models trained on one set of domains often suffer performance drops on unseen
domains, e.g., when wildlife monitoring models are deployed in new camera locations. In this …
A causal lens for controllable text generation
Controllable text generation concerns two fundamental tasks of wide applications, namely
generating text of given attributes (i.e., attribute-conditional generation), and minimally editing …
Harnessing out-of-distribution examples via augmenting content and style
Machine learning models are vulnerable to Out-Of-Distribution (OOD) examples, and this
problem has drawn much attention. However, current methods lack a full understanding of …
Understanding hessian alignment for domain generalization
Out-of-distribution (OOD) generalization is a critical ability for deep learning models
in many real-world scenarios including healthcare and autonomous vehicles. Recently …