A comprehensive survey on contrastive learning
H Hu, X Wang, Y Zhang, Q Chen, Q Guan - Neurocomputing, 2024 - Elsevier
Contrastive learning is a self-supervised representation learning approach that trains a model to
differentiate between similar and dissimilar samples. It has been shown to be effective and …
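For orientation, the differentiation this snippet describes is usually implemented with an InfoNCE-style objective. The following is a minimal illustrative PyTorch sketch, not code from the survey; the temperature value and paired-view batch construction are assumptions.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z1, z2, temperature=0.1):
    """InfoNCE loss over a batch of paired views (illustrative sketch).

    z1, z2: (N, D) embeddings of two augmented views of the same N samples.
    (z1[i], z2[i]) is a positive pair; every other row of z2 serves as a
    negative for z1[i], which is the similar-vs-dissimilar training signal.
    """
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature      # (N, N) cosine-similarity matrix
    labels = torch.arange(z1.size(0))       # positives sit on the diagonal
    return F.cross_entropy(logits, labels)

# Toy usage: random tensors stand in for an encoder's outputs.
loss = info_nce_loss(torch.randn(8, 128), torch.randn(8, 128))
```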
Rethinking federated learning with domain shift: A prototype view
Federated learning shows a bright promise as a privacy-preserving collaborative learning
technique. However, prevalent solutions mainly focus on all private data sampled from the …
Hyperbolic contrastive learning for visual representations beyond objects
Although self-/un-supervised methods have led to rapid progress in visual representation
learning, these methods generally treat objects and scenes using the same lens. In this …
Cocoa: Cross modality contrastive learning for sensor data
Self-Supervised Learning (SSL) is a new paradigm for learning discriminative
representations without labeled data, and has reached comparable or even state-of-the-art …
Does Negative Sampling Matter? A Review with Insights into its Theory and Applications
Negative sampling has swiftly risen to prominence as a focal point of research, with wide-
ranging applications spanning machine learning, computer vision, natural language …
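As a concrete instance of the negative sampling this review covers, here is a hedged sketch of the classic binary logistic objective with k uniformly drawn negatives (word2vec-style); the function name and the uniform draw are illustrative assumptions, and the surveyed methods differ mainly in how this draw is made.

```python
import torch
import torch.nn.functional as F

def negative_sampling_loss(anchor, positive, pool, k=5):
    """Logistic loss with k uniformly sampled negatives (illustrative).

    anchor, positive: (D,) embeddings forming the observed pair;
    pool: (M, D) candidate embeddings from which negatives are drawn.
    Uniform sampling is the simplest strategy; much of the literature
    replaces it with harder or popularity-aware distributions.
    """
    idx = torch.randint(0, pool.size(0), (k,))
    negatives = pool[idx]                                # (k, D)
    pos_score = torch.dot(anchor, positive)              # pull the pair together
    neg_scores = negatives @ anchor                      # (k,) push negatives away
    return -F.logsigmoid(pos_score) - F.logsigmoid(-neg_scores).sum()
```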
Contrastive adapters for foundation model group robustness
While large pretrained foundation models (FMs) have shown remarkable zero-shot
classification robustness to dataset-level distribution shifts, their robustness to subpopulation …
Self-supervised learning with an information maximization criterion
Self-supervised learning allows AI systems to learn effective representations from large
amounts of data using tasks that do not require costly labeling. Mode collapse, i.e., the model …
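Mode collapse is commonly counteracted with variance/covariance regularization of the embedding batch; the sketch below is a VICReg-style illustration of that idea, not the paper's exact information maximization criterion.

```python
import torch

def anti_collapse_penalty(z, eps=1e-4):
    """Variance/covariance penalty against mode collapse (illustrative).

    z: (N, D) batch of embeddings. The penalty keeps each dimension's
    standard deviation near 1 and decorrelates dimensions, so the encoder
    cannot map all inputs to a single point. VICReg-style sketch; the
    paper's own criterion is an information maximization objective.
    """
    z = z - z.mean(dim=0)
    std = torch.sqrt(z.var(dim=0) + eps)
    variance_term = torch.relu(1.0 - std).mean()         # hold per-dim variance up
    cov = (z.t() @ z) / (z.size(0) - 1)                  # (D, D) covariance
    off_diag = cov - torch.diag(torch.diag(cov))
    covariance_term = (off_diag ** 2).sum() / z.size(1)  # decorrelate dimensions
    return variance_term + covariance_term
```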
Why do we need large batchsizes in contrastive learning? A gradient-bias perspective
Contrastive learning (CL) has been the de facto technique for self-supervised representation
learning (SSL), with impressive empirical success such as multi-modal representation …
Contrastive learning for unsupervised domain adaptation of time series
Unsupervised domain adaptation (UDA) aims at learning a machine learning model using a
labeled source domain that performs well on a similar yet different, unlabeled target domain …
A simple, efficient and scalable contrastive masked autoencoder for learning visual representations
We introduce CAN, a simple, efficient and scalable method for self-supervised learning of
visual representations. Our framework is a minimal and conceptually clean synthesis of (C) …
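The snippet truncates before spelling out the synthesis; in general, contrastive masked-autoencoder methods add a masked-patch reconstruction term to a contrastive loss such as the InfoNCE sketch above. The reconstruction term below is an illustrative MAE-style sketch, not CAN's published recipe; the names, shapes, and weighting are assumptions.

```python
import torch

def masked_reconstruction_loss(pred, target, mask):
    """MAE-style reconstruction term (illustrative sketch, not CAN's code).

    pred, target: (N, P, D) predicted and ground-truth patch values;
    mask: (N, P) float tensor with 1.0 where a patch was masked out.
    Only masked positions are scored, as in masked-autoencoder training.
    """
    per_patch = ((pred - target) ** 2).mean(dim=-1)      # (N, P) MSE per patch
    return (per_patch * mask).sum() / mask.sum().clamp(min=1.0)

# A full objective would combine this with a contrastive term, e.g.
#   total = contrastive_loss + recon_weight * masked_reconstruction_loss(...)
# where recon_weight is a hypothetical balancing hyperparameter.
```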