Unleashing the power of data tsunami: A comprehensive survey on data assessment and selection for instruction tuning of language models
Instruction tuning plays a critical role in aligning large language models (LLMs) with human
preference. Despite the vast amount of open instruction datasets, naively training an LLM on …
Instruction following without instruction tuning
Instruction tuning commonly means finetuning a language model on instruction-response
pairs. We discover two forms of adaptation (tuning) that are deficient compared to instruction …
Codebook llms: Adapting political science codebooks for llm use and adapting llms to follow codebooks
A Halterman, KA Keith - arXiv preprint arXiv:2407.10747, 2024 - arxiv.org
Codebooks--documents that operationalize constructs and outline annotation procedures--
are used almost universally by social scientists when coding unstructured political texts …
A Closer Look at Machine Unlearning for Large Language Models
Large language models (LLMs) may memorize sensitive or copyrighted content, raising
privacy and legal concerns. Due to the high cost of retraining from scratch, researchers …
Lions: An empirically optimized approach to align language models
Alignment is a crucial step to enhance the instruction-following and conversational abilities
of language models. Despite many recent works proposing new algorithms, datasets, and …
Understanding likelihood over-optimisation in direct alignment algorithms
Direct Alignment Algorithms (DAAs), such as Direct Preference Optimisation (DPO) and
Identity Preference Optimisation (IPO), have emerged as alternatives to online …
Understanding the Role of User Profile in the Personalization of Large Language Models
Utilizing user profiles to personalize Large Language Models (LLMs) has been shown to
enhance the performance on a wide range of tasks. However, the precise role of user …
MDCure: A Scalable Pipeline for Multi-Document Instruction-Following
Multi-document (MD) processing is crucial for LLMs to handle real-world tasks such as
summarization and question-answering across large sets of documents. While LLMs have …
SFTMix: Elevating Language Model Instruction Tuning with Mixup Recipe
Y **ao, S Zhang, W Zhou, M Ghassemi… - arXiv preprint arXiv …, 2024 - arxiv.org
To induce desired behaviors in large language models (LLMs) for interaction-driven tasks,
the instruction-tuning stage typically trains LLMs on instruction-response pairs using the next …
All-in-One Tuning and Structural Pruning for Domain-Specific LLMs
Existing pruning techniques for large language models (LLMs) targeting domain-specific
applications typically follow a two-stage process: pruning the pretrained general-purpose …