A survey on large language model (LLM) security and privacy: The good, the bad, and the ugly

Y Yao, J Duan, K Xu, Y Cai, Z Sun, Y Zhang - High-Confidence Computing, 2024 - Elsevier
Large Language Models (LLMs), such as ChatGPT and Bard, have revolutionized
natural language understanding and generation. They possess deep language …

A survey on data augmentation for text classification

M Bayer, MA Kaufhold, C Reuter - ACM Computing Surveys, 2022 - dl.acm.org
Data augmentation, the artificial creation of training data for machine learning by
transformations, is a widely studied research field across machine learning disciplines …
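
The survey covers a range of augmentation methods; as a minimal, hedged illustration of one simple rule-based family, the sketch below applies label-preserving token swaps and deletions in the spirit of EDA-style methods. The function names and parameters are illustrative, not taken from the survey.

import random

def random_swap(tokens, n_swaps=1):
    # Swap two randomly chosen token positions n_swaps times.
    tokens = tokens[:]
    for _ in range(n_swaps):
        if len(tokens) < 2:
            break
        i, j = random.sample(range(len(tokens)), 2)
        tokens[i], tokens[j] = tokens[j], tokens[i]
    return tokens

def random_deletion(tokens, p=0.1):
    # Drop each token with probability p, always keeping at least one token.
    kept = [t for t in tokens if random.random() > p]
    return kept if kept else [random.choice(tokens)]

def augment(sentence, n_aug=4):
    # Produce n_aug perturbed variants of a training sentence for text classification.
    tokens = sentence.split()
    ops = [random_swap, random_deletion]
    return [" ".join(random.choice(ops)(tokens)) for _ in range(n_aug)]

print(augment("the service was friendly and the food arrived quickly"))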

A large language model for electronic health records

X Yang, A Chen, N PourNejatian, HC Shin… - NPJ digital …, 2022 - nature.com
There is an increasing interest in developing artificial intelligence (AI) systems to process
and interpret electronic health records (EHRs). Natural language processing (NLP) powered …

MiniLLM: Knowledge distillation of large language models

Y Gu, L Dong, F Wei, M Huang - arXiv preprint arXiv:2306.08543, 2023 - arxiv.org
Knowledge Distillation (KD) is a promising technique for reducing the high computational
demand of large language models (LLMs). However, previous KD methods are primarily …
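
As a hedged sketch of the white-box distillation setting the abstract refers to, the snippet below computes token-level KL objectives between teacher and student logits in PyTorch. The forward direction is the standard word-level KD loss; MiniLLM argues for the reverse direction (and optimizes it with a procedure beyond this sketch). Tensor shapes and names are assumptions for illustration.

import torch
import torch.nn.functional as F

def kd_losses(student_logits, teacher_logits, temperature=1.0):
    # Token-level distillation losses for logits of shape (batch, seq_len, vocab_size).
    s_logp = F.log_softmax(student_logits / temperature, dim=-1)
    t_logp = F.log_softmax(teacher_logits / temperature, dim=-1)
    s_p, t_p = s_logp.exp(), t_logp.exp()
    # Forward KL(teacher || student): the standard word-level KD objective.
    forward_kl = (t_p * (t_logp - s_logp)).sum(dim=-1).mean()
    # Reverse KL(student || teacher): the direction MiniLLM advocates, which
    # penalizes the student for placing mass where the teacher does not.
    reverse_kl = (s_p * (s_logp - t_logp)).sum(dim=-1).mean()
    return forward_kl, reverse_kl

# Toy usage with random logits standing in for model outputs.
student_logits = torch.randn(2, 8, 100)
teacher_logits = torch.randn(2, 8, 100)
fwd_loss, rev_loss = kd_losses(student_logits, teacher_logits)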

On the effectiveness of parameter-efficient fine-tuning

Z Fu, H Yang, AMC So, W Lam, L Bing… - Proceedings of the AAAI …, 2023 - ojs.aaai.org
Fine-tuning pre-trained models has been ubiquitously proven to be effective in a wide range
of NLP tasks. However, fine-tuning the whole model is parameter inefficient as it always …
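
Parameter-efficient fine-tuning updates only a small fraction of the model's weights. The sketch below is one simple, hedged illustration (BitFit-style bias-only tuning plus the task head), not necessarily the scheme analyzed in this paper; the toy model and attribute names are assumptions.

import torch.nn as nn

def make_bias_only_trainable(model, head_name="classifier"):
    # Freeze every weight, then unfreeze bias terms and the task head (BitFit-style).
    for name, param in model.named_parameters():
        param.requires_grad = name.endswith(".bias") or name.startswith(head_name)
    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    total = sum(p.numel() for p in model.parameters())
    print(f"trainable parameters: {trainable}/{total} ({100 * trainable / total:.2f}%)")

# Toy model standing in for a pretrained encoder with a classification head.
model = nn.Sequential()
model.add_module("encoder", nn.Sequential(nn.Linear(128, 128), nn.ReLU(), nn.Linear(128, 128)))
model.add_module("classifier", nn.Linear(128, 2))
make_bias_only_trainable(model)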

Surgical fine-tuning improves adaptation to distribution shifts

Y Lee, AS Chen, F Tajwar, A Kumar, H Yao… - arXiv preprint arXiv …, 2022 - arxiv.org
A common approach to transfer learning under distribution shift is to fine-tune the last few
layers of a pre-trained model, preserving learned features while also adapting to the new …
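
The paper reports that which layers should be tuned depends on the type of shift (roughly, earlier layers for input-level corruption and later layers for output-level shift). Below is a hedged sketch of the freezing logic, with toy block names assumed for illustration.

import torch.nn as nn

def surgical_freeze(model, tune_blocks):
    # Freeze everything except the blocks selected for the observed shift type.
    for name, param in model.named_parameters():
        param.requires_grad = any(name.startswith(block) for block in tune_blocks)

# Toy stand-in for a pretrained network with early/middle/late blocks and a head.
model = nn.Sequential()
for block in ["block1", "block2", "block3", "head"]:
    model.add_module(block, nn.Linear(64, 64))

# For an input-level shift (e.g., image corruption), tune only the earliest block.
surgical_freeze(model, tune_blocks=["block1"])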

Fine-tuning can distort pretrained features and underperform out-of-distribution

A Kumar, A Raghunathan, R Jones, T Ma… - arXiv preprint arXiv …, 2022 - arxiv.org
When transferring a pretrained model to a downstream task, two popular methods are full
fine-tuning (updating all the model parameters) and linear probing (updating only the last …
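
The abstract contrasts full fine-tuning with linear probing; the paper's proposed remedy is to linear-probe first and then fine-tune (LP-FT). Below is a hedged PyTorch sketch of the two stages, with a toy network standing in for the pretrained backbone and the training loops elided.

import torch.nn as nn
import torch.optim as optim

# Toy stand-in for a pretrained backbone plus a randomly initialized linear head.
backbone = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 64))
head = nn.Linear(64, 10)

# Stage 1 (linear probing): freeze the backbone and train only the head.
for p in backbone.parameters():
    p.requires_grad = False
probe_optimizer = optim.Adam(head.parameters(), lr=1e-3)
# ... run the usual supervised loop on (backbone(x), y) pairs here ...

# Stage 2 (full fine-tuning): unfreeze and continue from the probed head, typically at a smaller learning rate.
for p in backbone.parameters():
    p.requires_grad = True
finetune_optimizer = optim.Adam(list(backbone.parameters()) + list(head.parameters()), lr=1e-5)
# ... continue training end to end ...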

Finetune like you pretrain: Improved finetuning of zero-shot vision models

S Goyal, A Kumar, S Garg, Z Kolter… - Proceedings of the …, 2023 - openaccess.thecvf.com
Finetuning image-text models such as CLIP achieves state-of-the-art accuracies on a variety
of benchmarks. However, recent works (Kumar et al., 2022; Wortsman et al., 2021) have …
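
The FLYP recipe fine-tunes with the same contrastive objective used in pretraining, pairing images with prompted class names rather than training a separate classification head. Below is a hedged sketch of the symmetric CLIP-style contrastive loss, with random tensors standing in for the encoders' outputs.

import torch
import torch.nn.functional as F

def clip_style_contrastive_loss(image_emb, text_emb, temperature=0.07):
    # Symmetric InfoNCE loss over a batch of matched image/text embedding pairs.
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = image_emb @ text_emb.t() / temperature  # (batch, batch) similarity matrix
    targets = torch.arange(image_emb.size(0))        # matched pairs lie on the diagonal
    loss_i = F.cross_entropy(logits, targets)        # image-to-text direction
    loss_t = F.cross_entropy(logits.t(), targets)    # text-to-image direction
    return (loss_i + loss_t) / 2

# Toy embeddings standing in for CLIP image features and prompted class-name features.
image_emb = torch.randn(8, 512)
text_emb = torch.randn(8, 512)
loss = clip_style_contrastive_loss(image_emb, text_emb)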

Robust fine-tuning of zero-shot models

M Wortsman, G Ilharco, JW Kim, M Li… - Proceedings of the …, 2022 - openaccess.thecvf.com
Large pre-trained models such as CLIP or ALIGN offer consistent accuracy across a range of
data distributions when performing zero-shot inference (i.e., without fine-tuning on a specific …
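
The robust fine-tuning method proposed here (WiSE-FT) interpolates between the zero-shot and fine-tuned checkpoints in weight space rather than choosing one of them. Below is a hedged sketch over PyTorch-style state dicts; the mixing coefficient alpha is a hyperparameter.

def interpolate_weights(zero_shot_state, finetuned_state, alpha=0.5):
    # Weight-space ensemble: theta = (1 - alpha) * theta_zero_shot + alpha * theta_fine_tuned.
    # Both state dicts must come from models with identical architectures.
    return {
        key: (1 - alpha) * zero_shot_state[key] + alpha * finetuned_state[key]
        for key in zero_shot_state
    }

# Usage sketch (model names assumed):
# model.load_state_dict(interpolate_weights(zero_shot.state_dict(), finetuned.state_dict(), alpha=0.5))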

Selective annotation makes language models better few-shot learners

H Su, J Kasai, CH Wu, W Shi, T Wang, J Xin… - arXiv preprint arXiv …, 2022 - arxiv.org
Many recent approaches to natural language tasks are built on the remarkable abilities of
large language models. Large language models can perform in-context learning, where they …
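
Selective annotation chooses a small, diverse subset of the unlabeled pool to annotate, then retrieves the most similar annotated examples as in-context demonstrations for each test input. The paper's vote-k selection is graph-based; the hedged sketch below substitutes a simpler greedy diversity heuristic and dot-product retrieval to illustrate the pipeline, with random vectors standing in for sentence embeddings.

import numpy as np

def select_diverse(embeddings, budget):
    # Greedy diversity selection: repeatedly pick the point farthest from those already chosen.
    # (A simplified stand-in for the paper's graph-based vote-k selection.)
    chosen = [0]
    while len(chosen) < budget:
        dists = np.min(
            [np.linalg.norm(embeddings - embeddings[c], axis=1) for c in chosen], axis=0
        )
        dists[chosen] = -np.inf
        chosen.append(int(np.argmax(dists)))
    return chosen

def retrieve_demonstrations(test_emb, pool_embs, annotated_idx, k=4):
    # Return the k annotated examples most similar to the test input (dot-product similarity).
    sims = pool_embs[annotated_idx] @ test_emb
    return [annotated_idx[i] for i in np.argsort(-sims)[:k]]

# Toy usage with random vectors standing in for sentence embeddings of the unlabeled pool.
pool_embs = np.random.randn(100, 16)
annotated_idx = select_diverse(pool_embs, budget=10)
demo_idx = retrieve_demonstrations(np.random.randn(16), pool_embs, annotated_idx, k=4)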