A comprehensive survey on contrastive learning

H Hu, X Wang, Y Zhang, Q Chen, Q Guan - Neurocomputing, 2024 - Elsevier
Contrastive learning is a self-supervised representation learning approach that trains a model to
differentiate between similar and dissimilar samples. It has been shown to be effective and …
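
As a concrete illustration of that idea, here is a minimal InfoNCE-style contrastive loss in PyTorch, the common formulation in which matching views of a sample are positives and the rest of the batch serves as negatives. Function and variable names are illustrative, not taken from the survey.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z1, z2, temperature=0.1):
    """Contrastive (InfoNCE) loss over a batch of paired views.

    z1, z2: (N, D) embeddings of two augmented views of the same N samples.
    Matching rows are the similar ("positive") pairs; every other row in
    the batch acts as a dissimilar ("negative") sample.
    """
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature      # (N, N) cosine similarities
    targets = torch.arange(z1.size(0))      # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

# Toy usage with random stand-ins for an encoder's outputs.
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
print(info_nce_loss(z1, z2))
```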

Rethinking federated learning with domain shift: A prototype view

W Huang, M Ye, Z Shi, H Li, B Du - 2023 IEEE/CVF Conference …, 2023 - ieeexplore.ieee.org
Federated learning shows bright promise as a privacy-preserving collaborative learning
technique. However, prevalent solutions mainly focus on all private data sampled from the …
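
To make the title's "prototype view" concrete, the sketch below shows the generic prototype idea: each client computes per-class mean embeddings locally and the server aggregates them. This is a minimal illustration only; the paper's actual scheme for handling domain shift across clients is more involved, and all names here are hypothetical.

```python
import torch

def local_prototypes(features, labels, num_classes):
    """Per-class mean embeddings ("prototypes") from one client's data."""
    protos = torch.zeros(num_classes, features.size(1))
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            protos[c] = features[mask].mean(dim=0)
    return protos

def aggregate_prototypes(client_protos):
    """Server side: naive average across clients (the paper replaces this
    naive mean with a domain-shift-aware aggregation)."""
    return torch.stack(client_protos).mean(dim=0)

# Two toy clients, 3 classes, 16-dim features.
c1 = local_prototypes(torch.randn(20, 16), torch.randint(0, 3, (20,)), 3)
c2 = local_prototypes(torch.randn(20, 16), torch.randint(0, 3, (20,)), 3)
global_protos = aggregate_prototypes([c1, c2])
```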

Hyperbolic contrastive learning for visual representations beyond objects

S Ge, S Mishra, S Kornblith, CL Li… - Proceedings of the …, 2023 - openaccess.thecvf.com
Although self-/un-supervised methods have led to rapid progress in visual representation
learning, these methods generally treat objects and scenes using the same lens. In this …
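
For readers unfamiliar with the hyperbolic part of the title: contrastive methods can score pairs by geodesic distance in a hyperbolic space (e.g. the Poincaré ball) instead of cosine similarity, a geometry well suited to hierarchical object-scene structure. A minimal sketch of the standard Poincaré distance follows; it illustrates the ingredient, not the paper's full method.

```python
import torch
import torch.nn.functional as F

def poincare_distance(x, y, eps=1e-5):
    """Geodesic distance in the Poincaré ball (points must have norm < 1):
    d(x, y) = arcosh(1 + 2||x - y||^2 / ((1 - ||x||^2)(1 - ||y||^2)))."""
    sq = (x - y).pow(2).sum(-1)
    denom = ((1 - x.pow(2).sum(-1)) * (1 - y.pow(2).sum(-1))).clamp_min(eps)
    return torch.acosh(1 + 2 * sq / denom)

# A hyperbolic contrastive loss can then use -distance / tau as the logit
# where a Euclidean one would use cosine similarity.
z1 = 0.5 * F.normalize(torch.randn(4, 8), dim=1)   # scaled inside the ball
z2 = 0.5 * F.normalize(torch.randn(4, 8), dim=1)
print(poincare_distance(z1, z2))
```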

Cocoa: Cross modality contrastive learning for sensor data

S Deldari, H Xue, A Saeed, DV Smith… - Proceedings of the ACM …, 2022 - dl.acm.org
Self-Supervised Learning (SSL) is a new paradigm for learning discriminative
representations without labeled data, and has reached comparable or even state-of-the-art …
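
The following sketch illustrates the generic cross-modality contrastive setup the title refers to: embeddings of the same time window from two sensor modalities are treated as positives under a symmetric InfoNCE loss. This is a two-modality simplification for illustration, not COCOA's exact multi-modality objective.

```python
import torch
import torch.nn.functional as F

def cross_modal_contrastive(za, zb, temperature=0.1):
    """Symmetric contrastive loss across two modalities.

    za: (N, D) embeddings of modality A (e.g. accelerometer windows)
    zb: (N, D) embeddings of modality B (e.g. gyroscope windows)
    Row i of each tensor comes from the same time window, so (za[i], zb[i])
    are positives and all cross-row pairs serve as negatives.
    """
    za, zb = F.normalize(za, dim=1), F.normalize(zb, dim=1)
    logits = za @ zb.t() / temperature
    targets = torch.arange(za.size(0))
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```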

Does Negative Sampling Matter? A Review with Insights into its Theory and Applications

Z Yang, M Ding, T Huang, Y Cen, J Song… - … on Pattern Analysis …, 2024 - ieeexplore.ieee.org
Negative sampling has swiftly risen to prominence as a focal point of research, with wide-
ranging applications spanning machine learning, computer vision, natural language …
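
As a baseline instance of the technique this review surveys, here is plain uniform negative sampling: for each observed positive, draw k random ids as negatives. Practical systems often sample from popularity- or hardness-aware distributions instead; the names below are illustrative.

```python
import torch

def sample_negatives(positive_ids, num_items, k):
    """Draw k uniformly random item ids per positive example.

    Note: a draw can collide with the positive itself; practical
    implementations usually resample or mask such collisions.
    """
    return torch.randint(0, num_items, (positive_ids.size(0), k))

pos = torch.tensor([3, 7, 42])
print(sample_negatives(pos, num_items=1000, k=5))
```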

Contrastive adapters for foundation model group robustness

M Zhang, C Ré - Advances in Neural Information …, 2022 - proceedings.neurips.cc
While large pretrained foundation models (FMs) have shown remarkable zero-shot
classification robustness to dataset-level distribution shifts, their robustness to subpopulation …
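
One way to picture the title's "contrastive adapters": keep the foundation model frozen, train a small head on its embeddings, and use a supervised contrastive loss so same-class samples cluster. The sketch below is a plausible reading under those assumptions, not the paper's exact objective; every name in it is hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Adapter(nn.Module):
    """Small trainable head over frozen foundation-model embeddings."""
    def __init__(self, dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, dim))

    def forward(self, x):
        return F.normalize(self.net(x), dim=1)

def supervised_contrastive(z, labels, temperature=0.1):
    """Pull same-class embeddings together, push others apart (a generic
    supervised contrastive loss; illustrative, not the paper's)."""
    n = z.size(0)
    sim = (z @ z.t() / temperature).masked_fill(torch.eye(n, dtype=torch.bool), -1e9)
    pos = (labels[:, None] == labels[None, :]).float()
    pos.fill_diagonal_(0)
    log_prob = sim - sim.logsumexp(dim=1, keepdim=True)
    return -(pos * log_prob).sum(1).div(pos.sum(1).clamp_min(1)).mean()

z = Adapter(dim=512)(torch.randn(16, 512))   # frozen FM features -> adapter
print(supervised_contrastive(z, torch.randint(0, 4, (16,))))
```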

Self-supervised learning with an information maximization criterion

S Ozsoy, S Hamdan, S Arik, D Yuret… - Advances in Neural …, 2022 - proceedings.neurips.cc
Self-supervised learning allows AI systems to learn effective representations from large
amounts of data using tasks that do not require costly labeling. Mode collapse, i.e., the model …
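
The mode collapse the snippet mentions is the failure where all inputs map to one point; information-maximization criteria guard against it. Below is a hedged sketch in that spirit: a log-determinant term on the batch covariance, maximized to keep embeddings spread out. It is written from the general idea, not the paper's exact criterion.

```python
import torch

def logdet_regularizer(z, eps=0.1):
    """Anti-collapse regularizer: maximize log det(Cov(z) + eps*I).

    If the encoder collapses all inputs to one point, the batch covariance
    degenerates and this term plummets, so maximizing it (alongside an
    invariance loss between views) keeps the representation informative.
    """
    z = z - z.mean(dim=0, keepdim=True)
    cov = z.t() @ z / (z.size(0) - 1)
    return torch.logdet(cov + eps * torch.eye(z.size(1)))

z = torch.randn(256, 64, requires_grad=True)
print(logdet_regularizer(z))    # to be maximized during training
```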

Why do we need large batchsizes in contrastive learning? A gradient-bias perspective

C Chen, J Zhang, Y Xu, L Chen… - Advances in …, 2022 - proceedings.neurips.cc
Contrastive learning (CL) has been the de facto technique for self-supervised representation
learning (SSL), with impressive empirical success such as multi-modal representation …
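
The batch-size question in the title is easy to see numerically: with in-batch negatives, the InfoNCE denominator sums over only the samples that happen to be in the batch, so the loss (and hence its gradient) computed on a small batch is a biased estimate of the full-data objective. A small demo, with illustrative names:

```python
import torch
import torch.nn.functional as F

def in_batch_info_nce(z1, z2, temperature=0.1):
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature
    return F.cross_entropy(logits, torch.arange(z1.size(0)))

# The same embedding pool evaluated at different batch sizes gives different
# loss values: the objective itself depends on how many negatives are present.
torch.manual_seed(0)
z1, z2 = torch.randn(1024, 64), torch.randn(1024, 64)
for n in (8, 64, 512, 1024):
    print(n, round(in_batch_info_nce(z1[:n], z2[:n]).item(), 3))
```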

Contrastive learning for unsupervised domain adaptation of time series

Y Ozyurt, S Feuerriegel, C Zhang - arXiv preprint arXiv:2206.06243, 2022 - arxiv.org
Unsupervised domain adaptation (UDA) aims to learn a machine learning model from a
labeled source domain such that it performs well on a similar yet different, unlabeled target domain …
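
To fix ideas about the UDA setup described here, the sketch below shows the skeleton of one training step: a supervised loss on labeled source sequences plus an unsupervised alignment term on unlabeled target sequences. The encoder, head, and simple mean-matching alignment are all placeholder choices for illustration; the paper's contrastive framework is considerably richer.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.GRU(input_size=9, hidden_size=64, batch_first=True)  # placeholder
classifier = nn.Linear(64, 5)                                     # placeholder

def uda_step(x_src, y_src, x_tgt, align_weight=0.1):
    """One UDA step: source classification + cross-domain alignment."""
    _, h_src = encoder(x_src)           # x_src: (N, T, 9) labeled source
    _, h_tgt = encoder(x_tgt)           # x_tgt: (M, T, 9) unlabeled target
    h_src, h_tgt = h_src[-1], h_tgt[-1]
    cls_loss = F.cross_entropy(classifier(h_src), y_src)
    # Placeholder alignment: match the mean embedding of the two domains.
    align = (h_src.mean(0) - h_tgt.mean(0)).pow(2).sum()
    return cls_loss + align_weight * align

loss = uda_step(torch.randn(32, 50, 9), torch.randint(0, 5, (32,)),
                torch.randn(32, 50, 9))
```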

A simple, efficient and scalable contrastive masked autoencoder for learning visual representations

S Mishra, J Robinson, H Chang, D Jacobs… - arXiv preprint arXiv …, 2022 - arxiv.org
We introduce CAN, a simple, efficient and scalable method for self-supervised learning of
visual representations. Our framework is a minimal and conceptually clean synthesis of (C) …
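
Reading only the title ("contrastive masked autoencoder"), the combined objective plausibly pairs a contrastive term on two views with an MAE-style reconstruction term on masked patches. The sketch below is that naive combination under stated assumptions, with illustrative names and weights; it is not CAN's actual formulation.

```python
import torch
import torch.nn.functional as F

def contrastive_masked_ae_loss(z1, z2, recon, target, mask, temperature=0.1):
    """Naive contrastive + masked-reconstruction objective.

    z1, z2:  (N, D) embeddings of two views (contrastive term).
    recon:   (N, L, P) decoder outputs for L patches of dim P.
    target:  (N, L, P) ground-truth patch pixels.
    mask:    (N, L, 1) 1 where a patch was masked, 0 otherwise.
    """
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature
    contrast = F.cross_entropy(logits, torch.arange(z1.size(0)))
    # MAE-style: reconstruction error counted only on masked patches.
    recon_loss = ((recon - target).pow(2) * mask).sum() / mask.sum().clamp_min(1)
    return contrast + recon_loss
```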