Progressive random convolutions for single domain generalization

S Choi, D Das, S Choi, S Yang… - Proceedings of the …, 2023 - openaccess.thecvf.com
Single domain generalization aims to train a generalizable model with only one source
domain to perform well on arbitrary unseen target domains. Image augmentation based on …

No free lunch in self supervised representation learning

I Bendidi, A Bardes, E Cohen, A Lamiable… - arXiv preprint arXiv …, 2023 - arxiv.org
Self-supervised representation learning in computer vision relies heavily on hand-crafted
image transformations to learn meaningful and invariant features. However, few extensive …

Neural transformation network to generate diverse views for contrastive learning

T Kim, D Das, S Choi, M Jeong… - Proceedings of the …, 2023 - openaccess.thecvf.com
Recent unsupervised representation learning methods rely heavily on various
transformations to generate distinctive views of given samples. Transformations for these …

Optimizing transformations for contrastive learning in a differentiable framework

C Ruppli, P Gori, R Ardon, I Bloch - … Learning with Limited and Noisy Data, 2022 - Springer
Current contrastive learning methods use random transformations sampled from a large list
of transformations, with fixed hyper-parameters, to learn invariance from an unannotated …

Exploring self-supervised learning biases for microscopy image representation

I Bendidi, A Bardes, E Cohen, A Lamiable… - Biological …, 2024 - cambridge.org
Self-supervised representation learning (SSRL) in computer vision relies heavily on simple
image transformations such as random rotation, crops, or illumination to learn meaningful …

Sampling Informative Positive Pairs in Contrastive Learning

M Weber, P Bachman - 2023 International Conference on …, 2023 - ieeexplore.ieee.org
Contrastive Learning is a paradigm for learning representation functions that recover useful
similarity structure in a dataset based on samples of positive (similar) and negative …