A comprehensive survey on pretrained foundation models: A history from BERT to ChatGPT
Abstract Pretrained Foundation Models (PFMs) are regarded as the foundation for various
downstream tasks across different data modalities. A PFM (e.g., BERT, ChatGPT, GPT-4) is …
Transfer learning for medical image classification: a literature review
HE Kim, A Cosa-Linan, N Santhanam, M Jannesari… - BMC medical …, 2022 - Springer
Background Transfer learning (TL) with convolutional neural networks aims to improve
performance on a new task by leveraging the knowledge of similar tasks learned in …
Visual prompt tuning
The current modus operandi in adapting pre-trained models involves updating all the
backbone parameters, i.e., full fine-tuning. This paper introduces Visual Prompt Tuning (VPT) …
Deep learning in food category recognition
Integrating artificial intelligence with food category recognition has been a field of interest for
research for the past few decades. It is potentially one of the next steps in revolutionizing …
A survey of GPT-3 family large language models including ChatGPT and GPT-4
KS Kalyan - Natural Language Processing Journal, 2024 - Elsevier
Large language models (LLMs) are a special class of pretrained language models (PLMs)
obtained by scaling model size, pretraining corpus and computation. LLMs, because of their …
Generalizing to unseen domains: A survey on domain generalization
Machine learning systems generally assume that the training and testing distributions are
the same. To this end, a key requirement is to develop models that can generalize to unseen …
A perspective survey on deep transfer learning for fault diagnosis in industrial scenarios: Theories, applications and challenges
Abstract Deep Transfer Learning (DTL) is a new paradigm of machine learning, which can
not only leverage the advantages of Deep Learning (DL) in feature representation, but also …
Trustllm: Trustworthiness in large language models
Large language models (LLMs), exemplified by ChatGPT, have gained considerable
attention for their excellent natural language processing capabilities. Nonetheless, these …
Human-in-the-loop machine learning: a state of the art
E Mosqueira-Rey, E Hernández-Pereira… - Artificial Intelligence …, 2023 - Springer
Researchers are defining new types of interactions between humans and machine learning
algorithms generically called human-in-the-loop machine learning. Depending on who is in …
Sim-to-real transfer in deep reinforcement learning for robotics: a survey
W Zhao, JP Queralta… - 2020 IEEE symposium …, 2020 - ieeexplore.ieee.org
Deep reinforcement learning has recently seen huge success across multiple areas in the
robotics domain. Owing to the limitations of gathering real-world data, i.e., sample inefficiency …