EcomGPT: Instruction-tuning large language models with chain-of-task tasks for e-commerce
Recently, instruction-following Large Language Models (LLMs), represented by ChatGPT,
have exhibited exceptional performance in general Natural Language Processing (NLP) …
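To make the instruction-tuning setup concrete, here is a minimal sketch of instruction-formatted e-commerce examples. The (instruction, input, output) schema and all field contents are illustrative assumptions, not EcomGPT's actual dataset format.

```python
# A minimal sketch of instruction-formatted e-commerce training data.
# The schema and examples are illustrative assumptions, not EcomGPT's
# actual data format.
examples = [
    {
        "instruction": "Classify the product review as positive or negative.",
        "input": "The charging cable stopped working after two days.",
        "output": "negative",
    },
    {
        "instruction": "Extract the brand name from the product title.",
        "input": "Acme ProGrip Ergonomic Office Chair",
        "output": "Acme",
    },
]

def to_prompt(example):
    """Serialize one example into a single training string."""
    return (f"Instruction: {example['instruction']}\n"
            f"Input: {example['input']}\n"
            f"Output: {example['output']}")

for ex in examples:
    print(to_prompt(ex))
```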
Localizing task information for improved model merging and compression
Model merging and task arithmetic have emerged as promising scalable approaches to
merge multiple single-task checkpoints to one multi-task model, but their applicability is …
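A common building block behind such methods is the task vector, the difference between finetuned and pretrained weights. The sketch below localizes task information by keeping only the largest-magnitude task-vector entries; this generic magnitude-based mask is a stand-in for illustration, not necessarily the paper's exact construction, and `pretrained`/`finetuned` are assumed to be PyTorch state dicts of float tensors.

```python
import torch

def localize_task_vector(pretrained, finetuned, keep_ratio=0.1):
    """Zero out all but the largest-magnitude entries of the task vector
    (finetuned - pretrained). A generic magnitude-based sketch of
    'localizing task information', not the paper's exact mask."""
    masked = {}
    for name, p0 in pretrained.items():
        tau = finetuned[name] - p0              # task vector for this tensor
        k = max(1, int(keep_ratio * tau.numel()))
        # threshold = k-th largest absolute value
        thresh = tau.abs().flatten().kthvalue(tau.numel() - k + 1).values
        masked[name] = torch.where(tau.abs() >= thresh, tau,
                                   torch.zeros_like(tau))
    return masked
```

Merging then amounts to adding the masked task vectors back onto the pretrained weights, which also compresses each checkpoint to a sparse delta.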
Learning to route among specialized experts for zero-shot generalization
Recently, there has been a widespread proliferation of "expert" language models that are
specialized to a specific task or domain through parameter-efficient fine-tuning. How can we …
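A zero-shot router can be as simple as nearest-prototype matching between a query embedding and one embedding per expert. The sketch below shows that generic baseline; it is not the specific gating mechanism the paper proposes, and the prototype embeddings here are random placeholders.

```python
import torch
import torch.nn.functional as F

def route(query_emb, expert_embs):
    """Return the name of the expert whose prototype embedding is most
    cosine-similar to the query. A generic nearest-prototype router."""
    names = list(expert_embs)
    protos = torch.stack([expert_embs[n] for n in names])
    sims = F.cosine_similarity(query_emb.unsqueeze(0), protos, dim=-1)
    return names[sims.argmax().item()]

# Placeholder prototypes; in practice these would be derived from each
# expert's training data or adapter parameters.
experts = {"sentiment": torch.randn(64), "ner": torch.randn(64),
           "qa": torch.randn(64)}
print(route(torch.randn(64), experts))
```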
Active instruction tuning: Improving cross-task generalization by training on prompt sensitive tasks
Instruction tuning (IT) achieves impressive zero-shot generalization results by training large
language models (LLMs) on a massive number of diverse tasks with instructions. However …
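One way to operationalize "prompt sensitive" is to measure how much a model's loss varies across paraphrases of the same instruction. The sketch below does exactly that, assuming a Hugging Face-style causal LM interface; the paper's actual sensitivity measure may differ.

```python
import torch

@torch.no_grad()
def prompt_sensitivity(model, tokenizer, paraphrases, target):
    """Score a task by the standard deviation of the LM loss across
    paraphrased instructions with a fixed target; higher = more
    prompt-sensitive. Assumes a Hugging Face-style causal LM; a sketch
    of the idea, not the paper's exact metric."""
    losses = []
    for prompt in paraphrases:
        enc = tokenizer(prompt + " " + target, return_tensors="pt")
        loss = model(**enc, labels=enc["input_ids"]).loss
        losses.append(loss.item())
    return torch.tensor(losses).std().item()
```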
What Matters for Model Merging at Scale?
Model merging aims to combine multiple expert models into a more capable single model,
offering benefits such as reduced storage and serving costs, improved generalization, and …
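The simplest baseline such a study builds on is uniform parameter averaging of expert checkpoints that share one architecture, as in this sketch (state dicts of float tensors assumed):

```python
def average_checkpoints(state_dicts):
    """Uniformly average expert checkpoints with identical architectures.
    Assumes float parameter tensors keyed by the same names."""
    merged = {}
    for name in state_dicts[0]:
        merged[name] = sum(sd[name] for sd in state_dicts) / len(state_dicts)
    return merged
```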
Merging by matching models in task parameter subspaces
Model merging aims to cheaply combine individual task-specific models into a single
multitask model. In this work, we view past merging methods as leveraging different notions …
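Fisher-weighted averaging is one concrete instance of this subspace view: each expert's parameters are weighted per coordinate by a diagonal Fisher estimate of how much that coordinate matters for its task. A minimal sketch, not the paper's full method:

```python
def fisher_merge(state_dicts, fishers, eps=1e-8):
    """Per-coordinate weighted average of expert parameters, weighted by
    each expert's diagonal Fisher estimate. One instance of merging by
    matching in task parameter subspaces; a sketch, not the full method."""
    merged = {}
    for name in state_dicts[0]:
        num = sum(f[name] * sd[name] for sd, f in zip(state_dicts, fishers))
        den = sum(f[name] for f in fishers) + eps
        merged[name] = num / den
    return merged
```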
Zero-shot generalization during instruction tuning: insights from similarity and granularity
Understanding alignment techniques begins with comprehending zero-shot generalization
brought by instruction tuning, yet little is understood about the underlying mechanism. Existing work …
LiNeS: Post-training layer scaling prevents forgetting and enhances model merging
Large pre-trained models exhibit impressive zero-shot performance across diverse tasks,
but fine-tuning often leads to catastrophic forgetting, where improvements on a target …
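The core operation is scaling each layer's post-finetuning update by a factor that grows with depth, so shallow layers stay close to the pretrained weights. The sketch below assumes parameter names containing 'layers.<i>.' and a linear schedule; the schedule is a reading of the general idea, not necessarily the paper's exact formula.

```python
import re

def layerwise_scale(task_vector, num_layers, alpha=0.1, beta=1.0):
    """Scale each layer's update linearly with depth (shallow layers
    attenuated most). Assumes parameter names contain 'layers.<i>.';
    the schedule lambda_l = alpha + (beta - alpha) * l / (L - 1) is an
    assumption about the general idea, not the paper's exact formula."""
    scaled = {}
    for name, tau in task_vector.items():
        m = re.search(r"layers\.(\d+)\.", name)
        layer = int(m.group(1)) if m else num_layers - 1
        lam = alpha + (beta - alpha) * layer / max(1, num_layers - 1)
        scaled[name] = lam * tau
    return scaled
```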
DGTRL: Deep graph transfer reinforcement learning method based on fusion of knowledge and data
G Chen, J Qi, Y Gao, X Zhu, Z Dong, Y Sun - Information Sciences, 2024 - Elsevier
Deep reinforcement learning has shown promise in many application domains.
However, issues such as low sample efficiency and weak knowledge transfer and …
SparseCL: Sparse Contrastive Learning for Contradiction Retrieval
Contradiction retrieval refers to identifying and extracting documents that explicitly disagree
with or refute the content of a query, which is important to many downstream applications …
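On one reading of the title, a contradiction differs from its query along a few focused embedding dimensions, so the sparsity of the embedding difference can serve as a retrieval score. The sketch below uses the standard Hoyer sparsity measure; SparseCL's actual training objective and scoring are not reproduced here.

```python
import torch

def hoyer_sparsity(x, eps=1e-8):
    """Hoyer sparsity: 1 for a one-hot vector, 0 for a uniform one."""
    n = x.numel()
    ratio = x.norm(p=1) / (x.norm(p=2) + eps)
    return (n ** 0.5 - ratio) / (n ** 0.5 - 1)

def contradiction_score(query_emb, doc_emb):
    """Score a document by how sparse its embedding difference from the
    query is. An assumption-laden sketch, not SparseCL's exact method."""
    return hoyer_sparsity(query_emb - doc_emb).item()
```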