Security and privacy challenges of large language models: A survey

BC Das, MH Amini, Y Wu - ACM Computing Surveys, 2025 - dl.acm.org
Large language models (LLMs) have demonstrated extraordinary capabilities and
contributed to multiple fields, such as generating and summarizing text, language …

A comprehensive survey on pretrained foundation models: A history from bert to chatgpt

C Zhou, Q Li, C Li, J Yu, Y Liu, G Wang… - International Journal of …, 2024 - Springer
Pretrained Foundation Models (PFMs) are regarded as the foundation for various
downstream tasks across different data modalities. A PFM (e.g., BERT, ChatGPT, GPT-4) is …

A survey of large language models

WX Zhao, K Zhou, J Li, T Tang… - arxiv preprint arxiv …, 2023 - paper-notes.zhjwpku.com
Ever since the Turing Test was proposed in the 1950s, humans have explored how machines
might master language intelligence. Language is essentially a complex, intricate system of …

Parameter-efficient fine-tuning of large-scale pre-trained language models

N Ding, Y Qin, G Yang, F Wei, Z Yang, Y Su… - Nature Machine …, 2023 - nature.com
With the prevalence of pre-trained language models (PLMs) and the pre-training–fine-tuning
paradigm, it has been continuously shown that larger models tend to yield better …

Tool learning with foundation models

Y Qin, S Hu, Y Lin, W Chen, N Ding, G Cui… - ACM Computing …, 2024 - dl.acm.org
Humans possess an extraordinary ability to create and utilize tools. With the advent of
foundation models, artificial intelligence systems have the potential to be equally adept in …

A survey of GPT-3 family large language models including ChatGPT and GPT-4

KS Kalyan - Natural Language Processing Journal, 2024 - Elsevier
Large language models (LLMs) are a special class of pretrained language models (PLMs)
obtained by scaling model size, pretraining corpus and computation. LLMs, because of their …

MiniLLM: Knowledge distillation of large language models

Y Gu, L Dong, F Wei, M Huang - arXiv preprint arXiv:2306.08543, 2023 - arxiv.org
Knowledge Distillation (KD) is a promising technique for reducing the high computational
demand of large language models (LLMs). However, previous KD methods are primarily …

Enhancing chat language models by scaling high-quality instructional conversations

N Ding, Y Chen, B Xu, Y Qin, Z Zheng, S Hu… - arXiv preprint arXiv …, 2023 - arxiv.org
Fine-tuning on instruction data has been widely validated as an effective practice for
implementing chat language models like ChatGPT. Scaling the diversity and quality of such …

Recommendation as instruction following: A large language model empowered recommendation approach

J Zhang, R **e, Y Hou, X Zhao, L Lin… - ACM Transactions on …, 2023 - dl.acm.org
In the past decades, recommender systems have attracted much attention in both research
and industry communities. Existing recommendation models mainly learn the underlying …

AI literacy and its implications for prompt engineering strategies

N Knoth, A Tolzin, A Janson, JM Leimeister - Computers and Education …, 2024 - Elsevier
Artificial intelligence technologies are rapidly advancing. As part of this development, large
language models (LLMs) are increasingly being used when humans interact with systems …