A comprehensive survey on pretrained foundation models: A history from BERT to ChatGPT

C Zhou, Q Li, C Li, J Yu, Y Liu, G Wang… - International Journal of …, 2024 - Springer
Pretrained Foundation Models (PFMs) are regarded as the foundation for various
downstream tasks across different data modalities. A PFM (e.g., BERT, ChatGPT, GPT-4) is …

A Survey on Self-supervised Learning: Algorithms, Applications, and Future Trends

J Gui, T Chen, J Zhang, Q Cao, Z Sun… - IEEE Transactions on …, 2024 - ieeexplore.ieee.org
Deep supervised learning algorithms typically require a large volume of labeled data to
achieve satisfactory performance. However, the process of collecting and labeling such data …

DINOv2: Learning robust visual features without supervision

M Oquab, T Darcet, T Moutakanni, H Vo… - arXiv preprint arXiv …, 2023 - arxiv.org
The recent breakthroughs in natural language processing for model pretraining on large
quantities of data have opened the way for similar foundation models in computer vision …

Emerging properties in self-supervised vision transformers

M Caron, H Touvron, I Misra, H Jégou… - Proceedings of the …, 2021 - openaccess.thecvf.com
In this paper, we question if self-supervised learning provides new properties to Vision
Transformer (ViT) that stand out compared to convolutional networks (convnets). Beyond the …

Barlow twins: Self-supervised learning via redundancy reduction

J Zbontar, L Jing, I Misra, Y LeCun… - … on machine learning, 2021 - proceedings.mlr.press
Self-supervised learning (SSL) is rapidly closing the gap with supervised methods on large
computer vision benchmarks. A successful approach to SSL is to learn embeddings which …
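
The snippet above only names the redundancy-reduction idea, so here is a rough NumPy sketch of the objective as it is commonly described: the cross-correlation matrix between the embeddings of two augmented views of the same batch is driven toward the identity. The batch standardization step and the weight `lam` are assumptions drawn from common descriptions of the method, not details taken from the snippet itself.

```python
import numpy as np

def barlow_twins_loss(z_a, z_b, lam=5e-3):
    """Redundancy-reduction loss on two (N, D) batches of embeddings,
    one per augmented view. A NumPy sketch of the idea, not a reference
    implementation."""
    n, _ = z_a.shape
    # Standardize each embedding dimension along the batch.
    z_a = (z_a - z_a.mean(0)) / (z_a.std(0) + 1e-8)
    z_b = (z_b - z_b.mean(0)) / (z_b.std(0) + 1e-8)
    # Cross-correlation matrix between the two views, shape (D, D).
    c = z_a.T @ z_b / n
    on_diag = np.sum((1.0 - np.diag(c)) ** 2)             # diagonal -> 1 (invariance)
    off_diag = np.sum(c ** 2) - np.sum(np.diag(c) ** 2)   # off-diagonal -> 0 (decorrelation)
    return on_diag + lam * off_diag

# Usage sketch with random stand-ins for the two views' embeddings.
rng = np.random.default_rng(0)
z_a = rng.normal(size=(256, 128))
z_b = z_a + 0.1 * rng.normal(size=(256, 128))
print(barlow_twins_loss(z_a, z_b))
```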

A survey on contrastive self-supervised learning

A Jaiswal, AR Babu, MZ Zadeh, D Banerjee… - Technologies, 2020 - mdpi.com
Self-supervised learning has gained popularity because of its ability to avoid the cost of
annotating large-scale datasets. It is capable of adopting self-defined pseudolabels as …
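
As a point of reference for the contrastive family surveyed above, a widely used objective is an InfoNCE-style loss, where embeddings of two views of the same sample are pulled together and all other samples in the batch act as negatives. The sketch below is a simplified, one-directional NumPy version; the function name and the temperature value are illustrative assumptions rather than anything taken from the surveyed papers.

```python
import numpy as np

def info_nce_loss(z_i, z_j, temperature=0.1):
    """InfoNCE-style contrastive loss between two views' embeddings (N, D).
    Row k of z_i and row k of z_j form a positive pair; every other row of
    z_j serves as a negative for row k."""
    z_i = z_i / np.linalg.norm(z_i, axis=1, keepdims=True)
    z_j = z_j / np.linalg.norm(z_j, axis=1, keepdims=True)
    logits = z_i @ z_j.T / temperature              # (N, N) scaled cosine similarities
    logits -= logits.max(axis=1, keepdims=True)     # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))              # cross-entropy with matching rows as targets
```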

A unifying review of deep and shallow anomaly detection

L Ruff, JR Kauffmann, RA Vandermeulen… - Proceedings of the …, 2021 - ieeexplore.ieee.org
Deep learning approaches to anomaly detection (AD) have recently improved the state of
the art in detection performance on complex data sets, such as large collections of images or …

Unsupervised learning of visual features by contrasting cluster assignments

M Caron, I Misra, J Mairal, P Goyal… - Advances in neural …, 2020 - proceedings.neurips.cc
Unsupervised image representations have significantly reduced the gap with supervised
pretraining, notably with the recent achievements of contrastive learning methods. These …

Understanding contrastive representation learning through alignment and uniformity on the hypersphere

T Wang, P Isola - International conference on machine …, 2020 - proceedings.mlr.press
Contrastive representation learning has been outstandingly successful in practice. In this
work, we identify two key properties related to the contrastive loss: (1) alignment (closeness) …
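
The two properties named in this snippet are usually written as expectations over positive pairs (alignment) and over all pairs (uniformity) of embeddings on the unit hypersphere. The NumPy sketch below uses the commonly cited forms; the default exponents alpha = 2 and t = 2 are assumptions based on standard presentations of this work.

```python
import numpy as np

def align_loss(x, y, alpha=2):
    """Alignment: matched (positive) pairs should lie close together."""
    return np.mean(np.linalg.norm(x - y, axis=1) ** alpha)

def uniform_loss(x, t=2):
    """Uniformity: embeddings should spread over the hypersphere, measured
    via a Gaussian potential over all pairwise squared distances."""
    n = x.shape[0]
    sq_dists = np.sum((x[:, None, :] - x[None, :, :]) ** 2, axis=-1)
    mask = ~np.eye(n, dtype=bool)                   # keep off-diagonal pairs only
    return np.log(np.mean(np.exp(-t * sq_dists[mask])))

# Usage sketch on L2-normalized embeddings of two views.
rng = np.random.default_rng(0)
x = rng.normal(size=(128, 64))
x /= np.linalg.norm(x, axis=1, keepdims=True)
y = x + 0.05 * rng.normal(size=x.shape)
y /= np.linalg.norm(y, axis=1, keepdims=True)
print(align_loss(x, y), uniform_loss(x))
```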

Vertical federated learning: Concepts, advances, and challenges

Y Liu, Y Kang, T Zou, Y Pu, Y He, X Ye… - … on Knowledge and …, 2024 - ieeexplore.ieee.org
Vertical Federated Learning (VFL) is a federated learning setting where multiple parties with
different features about the same set of users jointly train machine learning models without …
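
To make the feature-partitioned setting in this snippet concrete, the toy sketch below fits a logistic regression where two parties hold different feature columns for the same aligned users and each updates only its own weights. This is purely illustrative of the vertical split; real VFL protocols add entity alignment and cryptographic protection of the exchanged scores and gradients, all of which this sketch omits.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d_a, d_b = 500, 3, 2
X_a = rng.normal(size=(n, d_a))          # party A's features
X_b = rng.normal(size=(n, d_b))          # party B's features for the same users
w_true = rng.normal(size=d_a + d_b)
y = (np.concatenate([X_a, X_b], axis=1) @ w_true + 0.1 * rng.normal(size=n) > 0).astype(float)

w_a = np.zeros(d_a)                      # held and updated only by party A
w_b = np.zeros(d_b)                      # held and updated only by party B
lr = 0.1

for _ in range(200):
    partial_a = X_a @ w_a                # computed locally by party A
    partial_b = X_b @ w_b                # computed locally by party B
    logits = partial_a + partial_b       # aggregated by the label holder
    p = 1.0 / (1.0 + np.exp(-logits))
    grad_logits = (p - y) / n            # gradient of the mean log-loss w.r.t. logits
    w_a -= lr * X_a.T @ grad_logits      # each party updates its own weights
    w_b -= lr * X_b.T @ grad_logits

acc = np.mean(((X_a @ w_a + X_b @ w_b) > 0) == y)
print("training accuracy:", acc)
```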