A comprehensive survey on pretrained foundation models: A history from BERT to ChatGPT

C Zhou, Q Li, C Li, J Yu, Y Liu, G Wang… - International Journal of …, 2024 - Springer
Pretrained Foundation Models (PFMs) are regarded as the foundation for various
downstream tasks across different data modalities. A PFM (e.g., BERT, ChatGPT, GPT-4) is …

Pre-trained language models and their applications

H Wang, J Li, H Wu, E Hovy, Y Sun - Engineering, 2023 - Elsevier
Pre-trained language models have achieved striking success in natural language
processing (NLP), leading to a paradigm shift from supervised learning to pre-training …

DINOv2: Learning robust visual features without supervision

M Oquab, T Darcet, T Moutakanni, H Vo… - arXiv preprint arXiv …, 2023 - arxiv.org
The recent breakthroughs in natural language processing for model pretraining on large
quantities of data have opened the way for similar foundation models in computer vision …

Self-supervised learning from images with a joint-embedding predictive architecture

M Assran, Q Duval, I Misra… - Proceedings of the …, 2023 - openaccess.thecvf.com
This paper demonstrates an approach for learning highly semantic image representations
without relying on hand-crafted data-augmentations. We introduce the Image-based Joint …
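
The objective sketched here, predicting the representations of masked target blocks from a visible context block in feature space rather than pixel space, can be illustrated in a few lines of PyTorch. Everything below is a simplified stand-in: the two small encoders, the mean-pooled predictor, and all dimensions are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

dim, n_patches = 256, 196                           # 14x14 patch grid (assumed)
layer = nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True)
context_encoder = nn.TransformerEncoder(layer, num_layers=2)  # trained online
target_encoder = nn.TransformerEncoder(layer, num_layers=2)   # in practice an EMA copy
predictor = nn.Linear(dim, dim)                     # maps context summary to target space

tokens = torch.randn(4, n_patches, dim)             # stand-in patch embeddings
ctx_idx, tgt_idx = torch.arange(0, 150), torch.arange(150, 196)

ctx = context_encoder(tokens[:, ctx_idx])           # encode only the visible context block
with torch.no_grad():                               # targets come from the target encoder;
    tgt = target_encoder(tokens)[:, tgt_idx]        # no gradient flows through them

pred = predictor(ctx.mean(dim=1, keepdim=True)).expand(-1, tgt_idx.numel(), -1)
loss = F.smooth_l1_loss(pred, tgt)                  # regress predicted onto target features
loss.backward()
```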

Cut and learn for unsupervised object detection and instance segmentation

X Wang, R Girdhar, SX Yu… - Proceedings of the IEEE …, 2023 - openaccess.thecvf.com
We propose Cut-and-LEaRn (CutLER), a simple approach for training
unsupervised object detection and segmentation models. We leverage the property of self …
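
The property being leveraged is that patches belonging to the same object have highly similar self-supervised features, so a coarse object mask can be read off a spectral cut of the patch-affinity graph. The sketch below uses random stand-in features and a plain normalized-cut step; CutLER's actual MaskCut procedure differs in its details.

```python
import torch
import torch.nn.functional as F

feats = torch.randn(196, 384)            # stand-in for 14x14 self-supervised ViT patch features
f = F.normalize(feats, dim=1)
A = (f @ f.t()).clamp(min=0)             # nonnegative patch-affinity graph
d = A.sum(dim=1)
L = torch.diag(d) - A                    # unnormalized graph Laplacian
D_inv_sqrt = torch.diag(d.rsqrt())
L_sym = D_inv_sqrt @ L @ D_inv_sqrt      # symmetric normalized Laplacian
evals, evecs = torch.linalg.eigh(L_sym)  # eigenvalues in ascending order
fiedler = evecs[:, 1]                    # second eigenvector gives the bipartition
mask = (fiedler > 0).reshape(14, 14)     # coarse foreground/background split
```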

Multimodal foundation models: From specialists to general-purpose assistants

C Li, Z Gan, Z Yang, J Yang, L Li… - … and Trends® in …, 2024 - nowpublishers.com
This paper presents a comprehensive survey of the taxonomy and evolution of multimodal
foundation models that demonstrate vision and vision-language capabilities, focusing on …

Transformer-based unsupervised contrastive learning for histopathological image classification

X Wang, S Yang, J Zhang, M Wang, J Zhang… - Medical image …, 2022 - Elsevier
A large-scale and well-annotated dataset is a key factor for the success of deep learning in
medical image analysis. However, assembling such large annotations is very challenging …
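
The contrastive pretraining such pipelines rely on can be illustrated with a generic NT-Xent loss over two augmented views of the same images. This is the standard formulation, not the paper's exact objective or transformer backbone.

```python
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, tau=0.1):
    """z1, z2: (N, D) embeddings of two augmented views of the same N images."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)   # (2N, D), unit-norm rows
    sim = z @ z.t() / tau                         # scaled cosine similarities
    sim.fill_diagonal_(float("-inf"))             # a sample is not its own pair
    n = z1.shape[0]
    targets = torch.cat([torch.arange(n, 2 * n),  # view i's positive is i+N ...
                         torch.arange(0, n)])     # ... and vice versa
    return F.cross_entropy(sim, targets)

z1, z2 = torch.randn(8, 128), torch.randn(8, 128)  # stand-in view embeddings
print(nt_xent(z1, z2).item())
```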

Visual prompting via image inpainting

A Bar, Y Gandelsman, T Darrell… - Advances in Neural …, 2022 - proceedings.neurips.cc
How does one adapt a pre-trained visual model to novel downstream tasks without task-
specific finetuning or any model modification? Inspired by prompting in NLP, this paper …
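
The prompting mechanism alluded to here can be made concrete: an example input-output pair and a new query are tiled into a single grid image whose missing cell an inpainting model fills in, turning task adaptation into inpainting. The canvas construction below is a schematic sketch; `inpainting_model` is a hypothetical placeholder, not the paper's model.

```python
import torch

def make_visual_prompt(example_in, example_out, query, cell=64):
    """Each argument: a (3, cell, cell) image tensor. Returns the 2x2 canvas
    and a mask that is 1 over the cell the model should inpaint."""
    canvas = torch.zeros(3, 2 * cell, 2 * cell)
    canvas[:, :cell, :cell] = example_in   # top-left: example input
    canvas[:, :cell, cell:] = example_out  # top-right: example output
    canvas[:, cell:, :cell] = query        # bottom-left: new query
    mask = torch.zeros(1, 2 * cell, 2 * cell)
    mask[:, cell:, cell:] = 1.0            # bottom-right: to be inpainted
    return canvas, mask

canvas, mask = make_visual_prompt(torch.rand(3, 64, 64),
                                  torch.rand(3, 64, 64),
                                  torch.rand(3, 64, 64))
# result = inpainting_model(canvas, mask)  # hypothetical model call
```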

Deep learning, reinforcement learning, and world models

Y Matsuo, Y LeCun, M Sahani, D Precup, D Silver… - Neural Networks, 2022 - Elsevier
Deep learning (DL) and reinforcement learning (RL) methods appear to be indispensable
ingredients for achieving human-level or super-human AI systems. On the other …

Masked siamese networks for label-efficient learning

M Assran, M Caron, I Misra, P Bojanowski… - … on Computer Vision, 2022 - Springer
We propose Masked Siamese Networks (MSN), a self-supervised learning
framework for learning image representations. Our approach matches the representation of …
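an aggressively masked view of an image to that of the original unmasked view. A minimal sketch of that matching step follows, with placeholder encodings and a learned prototype bank; the dimensions and temperatures are assumptions, not the paper's settings.

```python
import torch
import torch.nn.functional as F

dim, n_protos = 128, 32
prototypes = torch.randn(n_protos, dim, requires_grad=True)  # learned prototype bank

def assign(z, tau):
    """Soft assignment of embeddings to prototypes via cosine-similarity softmax."""
    z = F.normalize(z, dim=1)
    p = F.normalize(prototypes, dim=1)
    return F.softmax(z @ p.t() / tau, dim=1)

z_anchor = torch.randn(16, dim, requires_grad=True)  # masked-view embeddings
z_target = torch.randn(16, dim)                      # unmasked-view embeddings

with torch.no_grad():                                # target assignments carry no
    q = assign(z_target, tau=0.025)                  # gradient (sharper temperature)
p = assign(z_anchor, tau=0.1)
loss = -(q * torch.log(p + 1e-8)).sum(dim=1).mean()  # cross-entropy to the target
loss.backward()
```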