DINOv2: Learning robust visual features without supervision
The recent breakthroughs in natural language processing for model pretraining on large
quantities of data have opened the way for similar foundation models in computer vision …
Battery safety: Machine learning-based prognostics
Lithium-ion batteries play a pivotal role in a wide range of applications, from electronic
devices to large-scale electrified transportation systems and grid-scale energy storage …
Emerging properties in self-supervised vision transformers
In this paper, we question if self-supervised learning provides new properties to Vision
Transformer (ViT) that stand out compared to convolutional networks (convnets). Beyond the …
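For orientation, a minimal sketch of the self-distillation loss this paper introduces, assuming PyTorch; the temperatures and the centering term (an exponential moving average of teacher outputs) follow the paper's description, but the function and its defaults are illustrative, not the authors' implementation.

```python
import torch.nn.functional as F

def dino_loss(student_out, teacher_out, center, t_s=0.1, t_t=0.04):
    # student_out, teacher_out: (N, K) prototype logits for two augmented views.
    # center: running EMA of teacher outputs, subtracted to discourage collapse.
    targets = F.softmax((teacher_out - center) / t_t, dim=1).detach()  # sharpen + stop-grad
    log_probs = F.log_softmax(student_out / t_s, dim=1)
    return -(targets * log_probs).sum(dim=1).mean()
```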
Exploring simple siamese representation learning
Siamese networks have become a common structure in various recent models for
unsupervised visual representation learning. These models maximize the similarity between …
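The core of the method fits in a few lines; here is a minimal sketch assuming PyTorch, where f (encoder) and h (prediction MLP) are placeholders for the networks described in the paper.

```python
import torch.nn.functional as F

def simsiam_loss(f, h, x1, x2):
    z1, z2 = f(x1), f(x2)   # backbone + projection outputs for the two views
    p1, p2 = h(z1), h(z2)   # prediction MLP outputs
    # Negative cosine similarity; detach() is the stop-gradient that lets the
    # method avoid collapse without using negative pairs.
    def d(p, z):
        return -F.cosine_similarity(p, z.detach(), dim=-1).mean()
    return 0.5 * d(p1, z2) + 0.5 * d(p2, z1)
```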
Momentum contrast for unsupervised visual representation learning
We present Momentum Contrast (MoCo) for unsupervised visual representation
learning. From a perspective on contrastive learning as dictionary look-up, we build a …
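A minimal sketch of the dictionary look-up view, assuming PyTorch; the momentum value, temperature, and tensor shapes are illustrative rather than the reference implementation.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def momentum_update(encoder_q, encoder_k, m=0.999):
    # Key encoder trails the query encoder as an exponential moving average.
    for pq, pk in zip(encoder_q.parameters(), encoder_k.parameters()):
        pk.data.mul_(m).add_(pq.data, alpha=1 - m)

def moco_loss(q, k, queue, temperature=0.07):
    # q, k: L2-normalized embeddings of two views, shape (N, D).
    # queue: (K, D) of past keys acting as negatives (the "dictionary").
    l_pos = (q * k).sum(dim=1, keepdim=True)           # (N, 1) positive logits
    l_neg = q @ queue.t()                              # (N, K) negative logits
    logits = torch.cat([l_pos, l_neg], dim=1) / temperature
    labels = torch.zeros(q.size(0), dtype=torch.long)  # positives sit at index 0
    return F.cross_entropy(logits, labels)
```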
Unsupervised learning of visual features by contrasting cluster assignments
Unsupervised image representations have significantly reduced the gap with supervised
pretraining, notably with the recent achievements of contrastive learning methods. These …
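A minimal sketch of the swapped-prediction objective, assuming PyTorch; the codes q1, q2 would come from the equal-partition (Sinkhorn-Knopp) assignment step described in the paper, which is not reproduced here.

```python
import torch.nn.functional as F

def swapped_prediction(p1, p2, q1, q2):
    # p1, p2: (N, K) temperature-scaled logits of two views over K prototypes.
    # q1, q2: (N, K) soft cluster codes for the same views, used as targets.
    # Each view predicts the cluster assignment computed from the other view.
    loss1 = -(q2 * F.log_softmax(p1, dim=1)).sum(dim=1).mean()
    loss2 = -(q1 * F.log_softmax(p2, dim=1)).sum(dim=1).mean()
    return 0.5 * (loss1 + loss2)
```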
VICReg: Variance-invariance-covariance regularization for self-supervised learning
Recent self-supervised methods for image representation learning are based on maximizing
the agreement between embedding vectors from different views of the same image. A trivial …
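The three named terms translate directly into code; below is a minimal sketch assuming PyTorch, with illustrative coefficients rather than tuned settings.

```python
import torch
import torch.nn.functional as F

def vicreg_loss(z1, z2, sim_coef=25.0, std_coef=25.0, cov_coef=1.0, eps=1e-4):
    n, d = z1.shape
    # Invariance: embeddings of two views of the same image should agree.
    sim = F.mse_loss(z1, z2)
    # Variance: hinge keeps each dimension's std above 1, preventing the
    # trivial collapsed (constant) solution mentioned in the abstract.
    std1 = torch.sqrt(z1.var(dim=0) + eps)
    std2 = torch.sqrt(z2.var(dim=0) + eps)
    std = F.relu(1.0 - std1).mean() + F.relu(1.0 - std2).mean()
    # Covariance: penalize off-diagonal covariance to decorrelate dimensions.
    def cov_term(z):
        zc = z - z.mean(dim=0)
        cov = (zc.t() @ zc) / (n - 1)
        off_diag = cov - torch.diag(torch.diag(cov))
        return off_diag.pow(2).sum() / d
    cov = cov_term(z1) + cov_term(z2)
    return sim_coef * sim + std_coef * std + cov_coef * cov
```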
Understanding the behaviour of contrastive loss
Unsupervised contrastive learning has achieved outstanding success, while the mechanism
of contrastive loss has been less studied. In this paper, we concentrate on the understanding …
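The loss under study is the temperature-scaled contrastive (InfoNCE/NT-Xent) objective; a minimal in-batch sketch assuming PyTorch follows, where the temperature is the hardness-weighting knob such analyses focus on.

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    # z1, z2: L2-normalized embeddings of two views, shape (N, D).
    z = torch.cat([z1, z2], dim=0)        # (2N, D)
    sim = z @ z.t() / temperature         # pairwise similarity logits
    sim.fill_diagonal_(float('-inf'))     # exclude self-pairs
    n = z1.size(0)
    # Row i's positive is row i + n, and vice versa.
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)])
    return F.cross_entropy(sim, targets)
```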
Self-supervised learning of pretext-invariant representations
The goal of self-supervised learning from images is to construct image representations that
are semantically meaningful via pretext tasks that do not require semantic annotations. Many …
Hard negative mixing for contrastive learning
Contrastive learning has become a key component of self-supervised learning approaches
for computer vision. By learning to embed two augmented versions of the same image close …
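A minimal sketch of the mixing idea, assuming PyTorch: synthesize extra negatives as convex combinations of the hardest existing ones; the hard-pool size s and the number of synthesized negatives are illustrative parameters, not the paper's settings.

```python
import torch
import torch.nn.functional as F

def mix_hard_negatives(q, queue, s=64, n_mix=16):
    # q: (D,) normalized query; queue: (K, D) normalized negative embeddings.
    sims = queue @ q                        # similarity of each negative to q
    hard = queue[sims.topk(s).indices]      # the s hardest negatives
    i = torch.randint(0, s, (n_mix,))
    j = torch.randint(0, s, (n_mix,))
    alpha = torch.rand(n_mix, 1)
    mixed = alpha * hard[i] + (1 - alpha) * hard[j]
    return F.normalize(mixed, dim=1)        # project back to the unit sphere
```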