DINOv2: Learning robust visual features without supervision

M Oquab, T Darcet, T Moutakanni, H Vo… - arXiv preprint arXiv …, 2023 - arxiv.org
The recent breakthroughs in natural language processing for model pretraining on large
quantities of data have opened the way for similar foundation models in computer vision …

Battery safety: Machine learning-based prognostics

J Zhao, X Feng, Q Pang, M Fowler, Y Lian… - Progress in Energy and …, 2024 - Elsevier
Lithium-ion batteries play a pivotal role in a wide range of applications, from electronic
devices to large-scale electrified transportation systems and grid-scale energy storage …

Emerging properties in self-supervised vision transformers

M Caron, H Touvron, I Misra, H Jégou… - Proceedings of the …, 2021 - openaccess.thecvf.com
In this paper, we question if self-supervised learning provides new properties to Vision
Transformer (ViT) that stand out compared to convolutional networks (convnets). Beyond the …
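
The mechanism behind these emerging properties is DINO's self-distillation: a student network is trained to match the centered, sharpened output distribution of a momentum-averaged teacher across augmented views. A minimal PyTorch sketch of that loss, with temperatures and momentum chosen for illustration rather than taken from the paper's configs:

import torch
import torch.nn.functional as F

def dino_loss(student_out, teacher_out, center, tau_s=0.1, tau_t=0.04):
    # Cross-entropy between teacher and student output distributions.
    # Centering (subtracting a running mean of teacher outputs) plus a
    # sharper teacher temperature is what prevents collapse.
    t = F.softmax((teacher_out - center) / tau_t, dim=-1).detach()
    log_s = F.log_softmax(student_out / tau_s, dim=-1)
    return -(t * log_s).sum(dim=-1).mean()

@torch.no_grad()
def ema_update(student, teacher, m=0.996):
    # Teacher weights trail the student as an exponential moving average.
    for ps, pt in zip(student.parameters(), teacher.parameters()):
        pt.mul_(m).add_(ps, alpha=1 - m)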

Exploring simple Siamese representation learning

X Chen, K He - Proceedings of the IEEE/CVF conference on …, 2021 - openaccess.thecvf.com
Siamese networks have become a common structure in various recent models for
unsupervised visual representation learning. These models maximize the similarity between …
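
The collapse-avoiding ingredient in SimSiam is a stop-gradient on the target branch, combined with a small prediction head; no negative pairs or momentum encoder are needed. A minimal sketch, assuming encoder (backbone plus projection MLP) and predictor are supplied by the caller:

import torch.nn.functional as F

def simsiam_loss(encoder, predictor, x1, x2):
    # x1, x2: two augmentations of the same image batch.
    z1, z2 = encoder(x1), encoder(x2)        # projections
    p1, p2 = predictor(z1), predictor(z2)    # predictions
    # detach() is the stop-gradient the paper identifies as essential
    d = lambda p, z: -F.cosine_similarity(p, z.detach(), dim=-1).mean()
    return 0.5 * d(p1, z2) + 0.5 * d(p2, z1)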

Momentum contrast for unsupervised visual representation learning

K He, H Fan, Y Wu, S Xie… - Proceedings of the IEEE …, 2020 - openaccess.thecvf.com
We present Momentum Contrast (MoCo) for unsupervised visual representation
learning. From a perspective on contrastive learning as dictionary look-up, we build a …
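
In the dictionary look-up view, each query must match its positive key against a large queue of past keys, and the key encoder is a slowly moving average of the query encoder so the dictionary stays consistent. A compressed sketch (the encoders and queue enqueue/dequeue bookkeeping are assumed; queue entries are assumed L2-normalized):

import torch
import torch.nn.functional as F

def moco_loss(q, k, queue, tau=0.07):
    # q: (B, dim) queries; k: (B, dim) positive keys from the momentum
    # encoder (already detached); queue: (dim, K) negative keys.
    q, k = F.normalize(q, dim=1), F.normalize(k, dim=1)
    l_pos = (q * k).sum(dim=1, keepdim=True)   # (B, 1) positive logits
    l_neg = q @ queue                          # (B, K) negative logits
    logits = torch.cat([l_pos, l_neg], dim=1) / tau
    labels = torch.zeros(q.size(0), dtype=torch.long, device=q.device)
    return F.cross_entropy(logits, labels)     # positive sits at index 0

@torch.no_grad()
def momentum_update(f_q, f_k, m=0.999):
    # Key encoder trails the query encoder as an exponential moving average.
    for pq, pk in zip(f_q.parameters(), f_k.parameters()):
        pk.mul_(m).add_(pq, alpha=1 - m)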

Unsupervised learning of visual features by contrasting cluster assignments

M Caron, I Misra, J Mairal, P Goyal… - Advances in neural …, 2020 - proceedings.neurips.cc
Unsupervised image representations have significantly reduced the gap with supervised
pretraining, notably with the recent achievements of contrastive learning methods. These …
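
SwAV contrasts cluster assignments rather than features: each view predicts the code the other view receives when softly assigned, under an equipartition constraint, to a set of learned prototypes. A simplified sketch of the swapped-prediction loss; prototype handling, multi-crop, and the hyperparameter values here are illustrative:

import torch
import torch.nn.functional as F

@torch.no_grad()
def sinkhorn(scores, n_iter=3, eps=0.05):
    # Sinkhorn-Knopp: soft assignments that spread the batch evenly
    # across prototypes, preventing all samples from picking one cluster.
    Q = torch.exp(scores / eps).T          # (K, B)
    Q /= Q.sum()
    K, B = Q.shape
    for _ in range(n_iter):
        Q /= Q.sum(dim=1, keepdim=True); Q /= K   # normalize prototype rows
        Q /= Q.sum(dim=0, keepdim=True); Q /= B   # normalize sample columns
    return (Q * B).T                       # (B, K), rows sum to 1

def swav_loss(z1, z2, prototypes, tau=0.1):
    # z1, z2: (B, dim) embeddings of two views; prototypes: (K, dim).
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    p = F.normalize(prototypes, dim=1)
    s1, s2 = z1 @ p.T, z2 @ p.T            # scores against prototypes
    q1, q2 = sinkhorn(s1), sinkhorn(s2)    # gradient-free target codes
    return -0.5 * ((q1 * F.log_softmax(s2 / tau, dim=1)).sum(1)
                 + (q2 * F.log_softmax(s1 / tau, dim=1)).sum(1)).mean()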

VICReg: Variance-invariance-covariance regularization for self-supervised learning

A Bardes, J Ponce, Y LeCun - arXiv preprint arXiv:2105.04906, 2021 - arxiv.org
Recent self-supervised methods for image representation learning are based on maximizing
the agreement between embedding vectors from different views of the same image. A trivial …
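
The trivial solution in question is collapse, where all embeddings become identical; VICReg rules it out with two explicit statistical regularizers rather than negatives or architectural asymmetry. A sketch of the three terms, with loss weights set to the paper's reported defaults:

import torch
import torch.nn.functional as F

def vicreg_loss(z1, z2, lam=25.0, mu=25.0, nu=1.0, gamma=1.0, eps=1e-4):
    # z1, z2: (B, dim) embeddings of two views of the same images.
    inv = F.mse_loss(z1, z2)                    # invariance: views agree

    def variance(z):                            # keep every dimension alive
        std = torch.sqrt(z.var(dim=0) + eps)
        return F.relu(gamma - std).mean()       # hinge: push std up to gamma

    def covariance(z):                          # decorrelate dimensions
        zc = z - z.mean(dim=0)
        c = (zc.T @ zc) / (z.size(0) - 1)
        off_diag = c - torch.diag(torch.diag(c))
        return off_diag.pow(2).sum() / z.size(1)

    return (lam * inv + mu * (variance(z1) + variance(z2))
            + nu * (covariance(z1) + covariance(z2)))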

Understanding the behaviour of contrastive loss

F Wang, H Liu - Proceedings of the IEEE/CVF conference …, 2021 - openaccess.thecvf.com
Unsupervised contrastive learning has achieved outstanding success, while the mechanism
of contrastive loss has been less studied. In this paper, we concentrate on the understanding …
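
The quantity under the microscope is the temperature in the softmax-based contrastive (InfoNCE) loss: it controls how sharply the loss concentrates on the hardest negatives, trading uniformity of the embedding distribution against tolerance to semantically similar samples. A minimal sketch of that loss for a batch of paired views (a SimCLR-style formulation, used here for illustration):

import torch
import torch.nn.functional as F

def info_nce(z1, z2, tau=0.1):
    # z1, z2: (B, dim) embeddings of two views of the same images.
    # Every other sample in the batch serves as a negative; smaller tau
    # weights the hardest negatives more heavily in the gradient.
    B = z1.size(0)
    z = F.normalize(torch.cat([z1, z2]), dim=1)           # (2B, dim)
    sim = z @ z.T / tau
    mask = torch.eye(2 * B, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float('-inf'))            # drop self-pairs
    # the positive for row i is its other view: i + B or i - B
    targets = torch.cat([torch.arange(B) + B, torch.arange(B)]).to(z.device)
    return F.cross_entropy(sim, targets)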

Self-supervised learning of pretext-invariant representations

I Misra, L van der Maaten - … of the IEEE/CVF conference on …, 2020 - openaccess.thecvf.com
The goal of self-supervised learning from images is to construct image representations that
are semantically meaningful via pretext tasks that do not require semantic annotations. Many …
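
Many such pretext tasks make the representation covariant with the transformation; PIRL instead demands invariance, pulling an image's embedding toward that of its transformed (e.g., jigsaw-shuffled) version against negatives drawn from a memory bank. A heavily simplified sketch; the real method's jigsaw head, memory-bank updates, and loss weighting are omitted, and all names here are illustrative:

import torch
import torch.nn.functional as F

def pirl_style_loss(v_img, v_jig, bank_negatives, tau=0.07):
    # v_img: (B, dim) embeddings of original images;
    # v_jig: (B, dim) embeddings of their transformed versions;
    # bank_negatives: (N, dim) embeddings sampled from a memory bank.
    v_img = F.normalize(v_img, dim=1)
    v_jig = F.normalize(v_jig, dim=1)
    neg = F.normalize(bank_negatives, dim=1)
    l_pos = (v_img * v_jig).sum(dim=1, keepdim=True)      # (B, 1)
    l_neg = v_jig @ neg.T                                 # (B, N)
    logits = torch.cat([l_pos, l_neg], dim=1) / tau
    labels = torch.zeros(v_img.size(0), dtype=torch.long, device=v_img.device)
    return F.cross_entropy(logits, labels)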

Hard negative mixing for contrastive learning

Y Kalantidis, MB Sariyildiz, N Pion… - Advances in neural …, 2020 - proceedings.neurips.cc
Contrastive learning has become a key component of self-supervised learning approaches
for computer vision. By learning to embed two augmented versions of the same image close …
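
The mixing happens directly in embedding space: the hardest negatives for a given query are combined convexly, on the fly and at negligible cost, to synthesize extra negatives that sit even closer to the decision boundary. A minimal sketch of that step (MoCHi-style; the counts and names are illustrative):

import torch
import torch.nn.functional as F

def mix_hard_negatives(q, negatives, n_hard=64, n_synth=32):
    # q: (dim,) L2-normalized query; negatives: (K, dim) L2-normalized
    # negative embeddings (e.g., a MoCo queue). Returns (n_synth, dim)
    # synthetic negatives to append to the real ones.
    sims = negatives @ q                            # similarity to the query
    hard = negatives[sims.topk(n_hard).indices]     # hardest negatives
    i = torch.randint(0, n_hard, (n_synth,))
    j = torch.randint(0, n_hard, (n_synth,))
    alpha = torch.rand(n_synth, 1)                  # random convex weights
    synth = alpha * hard[i] + (1 - alpha) * hard[j]
    return F.normalize(synth, dim=1)                # back onto the unit sphere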