Battery safety: Machine learning-based prognostics
Lithium-ion batteries play a pivotal role in a wide range of applications, from electronic
devices to large-scale electrified transportation systems and grid-scale energy storage …
A review of deep learning for video captioning
Video captioning (VC) is a fast-moving, cross-disciplinary area of research that comprises
contributions from domains such as computer vision, natural language processing …
DINOv2: Learning robust visual features without supervision
The recent breakthroughs in natural language processing for model pretraining on large
quantities of data have opened the way for similar foundation models in computer vision …
Self-supervised contrastive pre-training for time series via time-frequency consistency
Pre-training on time series poses a unique challenge due to the potential mismatch between
pre-training and target domains, such as shifts in temporal dynamics, fast-evolving trends …
Context autoencoder for self-supervised representation learning
We present a novel masked image modeling (MIM) approach, context autoencoder (CAE),
for self-supervised representation pretraining. We pretrain an encoder by making predictions …
Emerging properties in self-supervised vision transformers
In this paper, we question if self-supervised learning provides new properties to Vision
Transformer (ViT) that stand out compared to convolutional networks (convnets). Beyond the …
With a little help from my friends: Nearest-neighbor contrastive learning of visual representations
Self-supervised learning algorithms based on instance discrimination train encoders to be
invariant to pre-defined transformations of the same instance. While most methods treat …
Exploring simple siamese representation learning
Siamese networks have become a common structure in various recent models for
unsupervised visual representation learning. These models maximize the similarity between …
Understanding the behaviour of contrastive loss
Unsupervised contrastive learning has achieved outstanding success, while the mechanism
of contrastive loss has been less studied. In this paper, we concentrate on the understanding …
VICReg: Variance-invariance-covariance regularization for self-supervised learning
Recent self-supervised methods for image representation learning are based on maximizing
the agreement between embedding vectors from different views of the same image. A trivial …