A survey on large language model (LLM) security and privacy: The good, the bad, and the ugly
Large Language Models (LLMs), such as ChatGPT and Bard, have revolutionized natural language understanding and generation. They possess deep language …
A survey on data augmentation for text classification
Data augmentation, the artificial creation of training data for machine learning by transformations, is a widely studied research field across machine learning disciplines …
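As a concrete illustration of transformation-based augmentation for text classification, here is a minimal sketch of two token-level transformations (random swap and random deletion) in the style of simple recipes such as EDA; the function names and parameter values are illustrative assumptions, not taken from the survey.

```python
import random

def random_swap(tokens, n_swaps=1, rng=random):
    # Swap two randomly chosen token positions, n_swaps times.
    tokens = tokens[:]
    for _ in range(n_swaps):
        if len(tokens) < 2:
            break
        i, j = rng.sample(range(len(tokens)), 2)
        tokens[i], tokens[j] = tokens[j], tokens[i]
    return tokens

def random_deletion(tokens, p=0.1, rng=random):
    # Drop each token with probability p; always keep at least one token.
    kept = [t for t in tokens if rng.random() > p]
    return kept if kept else [rng.choice(tokens)]

print(random_swap("the movie was surprisingly good".split()))
print(random_deletion("the movie was surprisingly good".split()))
```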
A large language model for electronic health records
There is an increasing interest in developing artificial intelligence (AI) systems to process and interpret electronic health records (EHRs). Natural language processing (NLP) powered …
MiniLLM: Knowledge distillation of large language models
Knowledge Distillation (KD) is a promising technique for reducing the high computational demand of large language models (LLMs). However, previous KD methods are primarily …
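The snippet contrasts MiniLLM with earlier word-level KD methods. As a hedged sketch: the standard objective matches the student to the teacher's softened next-token distribution with a forward KL term, while the MiniLLM line of work argues for a reverse-KL (mode-seeking) variant. The tensor shapes and helper names below are illustrative assumptions, not the paper's code.

```python
import torch
import torch.nn.functional as F

def forward_kl_kd(student_logits, teacher_logits, T=2.0):
    # Standard word-level KD: KL(teacher || student) on softened distributions.
    s_logp = F.log_softmax(student_logits / T, dim=-1)
    t_prob = F.softmax(teacher_logits / T, dim=-1)
    return F.kl_div(s_logp, t_prob, reduction="batchmean") * T * T

def reverse_kl_kd(student_logits, teacher_logits, T=1.0):
    # Reverse KL: KL(student || teacher) is mode-seeking, so the student is
    # penalized for putting mass where the teacher does not.
    s_logp = F.log_softmax(student_logits / T, dim=-1)
    t_logp = F.log_softmax(teacher_logits / T, dim=-1)
    s_prob = s_logp.exp()
    return (s_prob * (s_logp - t_logp)).sum(-1).mean() * T * T

# Illustrative shapes: a batch of 4 next-token predictions over a 100-word vocab.
B, V = 4, 100
teacher_logits = torch.randn(B, V)
student_head = torch.nn.Linear(16, V)
student_logits = student_head(torch.randn(B, 16))
loss = reverse_kl_kd(student_logits, teacher_logits)
loss.backward()
```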
On the effectiveness of parameter-efficient fine-tuning
Fine-tuning pre-trained models has been ubiquitously proven to be effective in a wide range of NLP tasks. However, fine-tuning the whole model is parameter inefficient as it always …
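A widely used parameter-efficient alternative to whole-model fine-tuning is to train a small set of new parameters while the pretrained weights stay frozen. Below is a minimal LoRA-style sketch in plain PyTorch; LoRA is used only as a representative method, and the class name and hyperparameters are illustrative.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    # Wrap a frozen linear layer with a trainable low-rank update: W x + B A x.
    def __init__(self, base: nn.Linear, r: int = 4, alpha: float = 8.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pretrained weights stay frozen
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: no change at start
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale

layer = LoRALinear(nn.Linear(768, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable: {trainable} / {total}")  # only the low-rank factors train
```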
Surgical fine-tuning improves adaptation to distribution shifts
A common approach to transfer learning under distribution shift is to fine-tune the last few layers of a pre-trained model, preserving learned features while also adapting to the new …
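Mechanically, surgical fine-tuning amounts to freezing the whole network and re-enabling gradients for one chosen block; the paper's point is that which block to tune depends on the kind of distribution shift. A minimal sketch with a toy model; the prefix-based selection is an illustrative assumption.

```python
import torch.nn as nn

def unfreeze_only(model, prefixes):
    # Freeze everything, then re-enable grads for parameters whose names
    # start with one of the given prefixes (the "surgical" block choice).
    for name, p in model.named_parameters():
        p.requires_grad = name.startswith(prefixes)

model = nn.Sequential(
    nn.Linear(32, 64), nn.ReLU(),   # "early" features
    nn.Linear(64, 64), nn.ReLU(),   # "middle"
    nn.Linear(64, 10),              # "late" / head
)
# e.g., for an input-level shift one might tune only the first block:
unfreeze_only(model, ("0.",))
print([n for n, p in model.named_parameters() if p.requires_grad])
```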
Fine-tuning can distort pretrained features and underperform out-of-distribution
When transferring a pretrained model to a downstream task, two popular methods are full fine-tuning (updating all the model parameters) and linear probing (updating only the last …
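The two methods named in the snippet differ only in which parameters receive gradients, and a remedy studied in this line of work is to probe first and then fine-tune (LP-then-FT). A minimal sketch; the toy backbone and learning rates are illustrative assumptions.

```python
import torch
import torch.nn as nn

backbone = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 256))
head = nn.Linear(256, 10)

def linear_probe_params():
    # Linear probing: freeze the backbone, train only the head.
    for p in backbone.parameters():
        p.requires_grad = False
    return head.parameters()

def full_finetune_params():
    # Full fine-tuning: every parameter receives gradients.
    for p in backbone.parameters():
        p.requires_grad = True
    return list(backbone.parameters()) + list(head.parameters())

# LP-then-FT schedule: train the head alone first so it is sensible,
# then unfreeze everything and continue at a smaller learning rate.
probe_opt = torch.optim.Adam(linear_probe_params(), lr=1e-3)
finetune_opt = torch.optim.Adam(full_finetune_params(), lr=1e-5)
```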
Finetune like you pretrain: Improved finetuning of zero-shot vision models
Finetuning image-text models such as CLIP achieves state-of-the-art accuracies on a variety of benchmarks. However, recent works (Kumar et al., 2022; Wortsman et al., 2021) have …
Robust fine-tuning of zero-shot models
Large pre-trained models such as CLIP or ALIGN offer consistent accuracy across a range of data distributions when performing zero-shot inference (i.e., without fine-tuning on a specific …
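The robust fine-tuning approach associated with this line of work (weight-space ensembling, often called WiSE-FT) interpolates, parameter by parameter, between the zero-shot and fine-tuned checkpoints. A minimal sketch, assuming both models share an architecture; the mixing coefficient and helper name are illustrative.

```python
import copy
import torch
import torch.nn as nn

def interpolate_weights(zero_shot: nn.Module, fine_tuned: nn.Module, alpha: float):
    # Blend each parameter tensor of the two checkpoints:
    # alpha=0 recovers the zero-shot model, alpha=1 the fine-tuned one.
    merged = copy.deepcopy(zero_shot)
    zs, ft = zero_shot.state_dict(), fine_tuned.state_dict()
    merged.load_state_dict({k: (1 - alpha) * zs[k] + alpha * ft[k] for k in zs})
    return merged

zs = nn.Linear(8, 2)
ft = copy.deepcopy(zs)
with torch.no_grad():
    for p in ft.parameters():
        p.add_(0.1)  # stand-in for fine-tuning updates
mid = interpolate_weights(zs, ft, alpha=0.5)
```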
Selective annotation makes language models better few-shot learners
Many recent approaches to natural language tasks are built on the remarkable abilities of large language models. Large language models can perform in-context learning, where they …
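In-context learning conditions the model on a handful of annotated demonstrations at inference time, with no weight updates; selective annotation concerns which examples are worth labeling for that demonstration pool. The sketch below shows only the prompt-assembly half, with an illustrative task and formatting.

```python
def build_icl_prompt(demos, query, instruction="Classify the sentiment."):
    # Prepend annotated demonstrations to the test input; the model
    # infers the task from the examples rather than from training.
    parts = [instruction]
    for text, label in demos:
        parts.append(f"Input: {text}\nLabel: {label}")
    parts.append(f"Input: {query}\nLabel:")
    return "\n\n".join(parts)

demos = [("great movie", "positive"), ("waste of time", "negative")]
print(build_icl_prompt(demos, "surprisingly fun"))
```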