Security and privacy challenges of large language models: A survey
Large language models (LLMs) have demonstrated extraordinary capabilities and
contributed to multiple fields, such as generating and summarizing text, language …
A comprehensive survey on pretrained foundation models: A history from BERT to ChatGPT
Pretrained Foundation Models (PFMs) are regarded as the foundation for various
downstream tasks across different data modalities. A PFM (e.g., BERT, ChatGPT, GPT-4) is …
[PDF] A survey of large language models
Ever since the Turing Test was proposed in the 1950s, humans have explored the mastery
of language intelligence by machines. Language is essentially a complex, intricate system of …
Parameter-efficient fine-tuning of large-scale pre-trained language models
With the prevalence of pre-trained language models (PLMs) and the pre-training–fine-tuning
paradigm, it has been continuously shown that larger models tend to yield better …
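The parameter-efficient fine-tuning idea behind the entry above can be illustrated with a low-rank adapter in the LoRA style: a frozen pretrained weight plus a small trainable update. This is a minimal NumPy sketch of the general technique, not the surveyed paper's implementation; all names and dimensions here are illustrative.

```python
import numpy as np

# LoRA-style sketch: keep the pretrained weight W frozen and train only a
# low-rank update B @ A, so r*(d_in + d_out) parameters are trained
# instead of d_in * d_out. Dimensions are illustrative assumptions.
rng = np.random.default_rng(0)
d_in, d_out, r = 64, 64, 4

W = rng.normal(size=(d_out, d_in))      # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                # trainable up-projection, zero-initialized

def forward(x):
    # Effective weight is W + B @ A; with B = 0 at init, the adapted
    # model behaves exactly like the frozen pretrained one.
    return (W + B @ A) @ x

x = rng.normal(size=d_in)
assert np.allclose(forward(x), W @ x)   # identical to base model at init

full = W.size           # 4096 parameters if fully fine-tuned
lora = A.size + B.size  # 512 trainable adapter parameters
print(f"trainable params: {lora} vs full fine-tuning: {full}")
```

The zero initialization of `B` is the key design choice: training starts from the pretrained model's exact behavior and only departs from it as the adapter learns.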
Tool learning with foundation models
Humans possess an extraordinary ability to create and utilize tools. With the advent of
foundation models, artificial intelligence systems have the potential to be equally adept in …
[HTML] A survey of GPT-3 family large language models including ChatGPT and GPT-4
KS Kalyan - Natural Language Processing Journal, 2024 - Elsevier
Large language models (LLMs) are a special class of pretrained language models (PLMs)
obtained by scaling model size, pretraining corpus and computation. LLMs, because of their …
MiniLLM: Knowledge distillation of large language models
Knowledge Distillation (KD) is a promising technique for reducing the high computational
demand of large language models (LLMs). However, previous KD methods are primarily …
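The knowledge distillation technique named in the entry above can be sketched as matching the teacher's softened output distribution. Note this is the classic forward-KL, Hinton-style formulation shown only for illustration (MiniLLM itself argues for a reverse-KL objective); function names and values are assumptions.

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-softened softmax; subtracting the max is for
    # numerical stability and does not change the result.
    z = z / T
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def kd_loss(teacher_logits, student_logits, T=2.0):
    # Forward KL(p_teacher || p_student) on temperature-softened
    # distributions, scaled by T^2 as in the classic distillation recipe.
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return (T ** 2) * np.sum(p * (np.log(p) - np.log(q)))

teacher = np.array([4.0, 1.0, -2.0])
student = np.array([4.0, 1.0, -2.0])
print(kd_loss(teacher, student))  # 0.0 when the distributions match
```

Raising the temperature `T` flattens both distributions, so the student is pushed to reproduce the teacher's relative preferences among wrong classes, not just its top prediction.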
Enhancing chat language models by scaling high-quality instructional conversations
Fine-tuning on instruction data has been widely validated as an effective practice for
implementing chat language models like ChatGPT. Scaling the diversity and quality of such …
Recommendation as instruction following: A large language model empowered recommendation approach
In the past decades, recommender systems have attracted much attention in both research
and industry communities. Existing recommendation models mainly learn the underlying …
[HTML] AI literacy and its implications for prompt engineering strategies
Artificial intelligence technologies are rapidly advancing. As part of this development, large
language models (LLMs) are increasingly being used when humans interact with systems …