Crosslingual generalization through multitask finetuning
Multitask prompted finetuning (MTF) has been shown to help large language models generalize to new tasks in a zero-shot setting, but so far explorations of MTF have focused …
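For context, the recipe this entry names can be sketched as rendering many supervised tasks into a shared prompted text-to-text format and finetuning one model on the shuffled mixture. The templates and toy data below are illustrative assumptions, not the paper's actual prompts or task collection.

```python
import random

# Illustrative prompt templates and toy data; the paper's actual templates
# and task mixture differ.
TEMPLATES = {
    "nli": "Premise: {premise}\nHypothesis: {hypothesis}\nDoes the premise entail the hypothesis?",
    "summarization": "Summarize the following article:\n{article}",
}

def render(task: str, example: dict, target_key: str) -> dict:
    """Turn a raw example into a (prompt, target) pair for multitask finetuning."""
    fields = {k: v for k, v in example.items() if k != target_key}
    return {"input": TEMPLATES[task].format(**fields), "target": example[target_key]}

nli_data = [{"premise": "A dog runs.", "hypothesis": "An animal moves.", "label": "yes"}]
sum_data = [{"article": "LLMs are neural networks trained on text.", "summary": "LLMs model text."}]

mixture = [render("nli", ex, "label") for ex in nli_data]
mixture += [render("summarization", ex, "summary") for ex in sum_data]
random.shuffle(mixture)  # one model is then finetuned on the shuffled multitask mixture
```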
Pretraining language models with human preferences
Language models (LMs) are pretrained to imitate text from large and diverse datasets that contain content that would violate human preferences if generated by an LM …
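One objective explored in this line of work is conditional training: tag pretraining documents with control tokens reflecting a preference score, so generation can later be conditioned on the "acceptable" token. The token names and toy scorer below are illustrative assumptions, not the paper's setup.

```python
# Illustrative control tokens and a stand-in preference scorer; the actual
# objectives and annotations in this line of work differ.
GOOD, BAD = "<|good|>", "<|bad|>"

def tag_document(doc: str, is_acceptable) -> str:
    """Prefix a pretraining document with a control token based on a preference score."""
    return (GOOD if is_acceptable(doc) else BAD) + doc

corpus = ["Thanks for your help!", "You are terrible."]
acceptable = lambda d: "terrible" not in d  # stand-in for a learned preference classifier
tagged = [tag_document(d, acceptable) for d in corpus]
print(tagged)  # ['<|good|>Thanks for your help!', '<|bad|>You are terrible.']
# An LM pretrained on `tagged` text can then be conditioned on GOOD at generation time.
```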
Modular deep learning
Transfer learning has recently become the dominant paradigm of machine learning. Pre-trained models fine-tuned for downstream tasks achieve better performance with fewer …
LoRA learns less and forgets less
Low-Rank Adaptation (LoRA) is a widely-used parameter-efficient finetuning method for large language models. LoRA saves memory by training only low rank perturbations to …
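As a reference for the mechanism the snippet names, here is a minimal sketch of a trainable low-rank perturbation on a frozen linear layer; the rank and scaling hyperparameters are illustrative, not the paper's settings.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank perturbation: W x + (alpha/r) B A x."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # the pretrained weights stay frozen
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: update starts at 0
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scaling

layer = LoRALinear(nn.Linear(768, 768))     # wrap an existing projection
out = layer(torch.randn(2, 768))            # only A and B receive gradients
```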
Conditional adapters: Parameter-efficient transfer learning with fast inference
We propose Conditional Adapter (CoDA), a parameter-efficient transfer learning method that also improves inference efficiency. CoDA generalizes beyond standard adapter …
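For context on the "standard adapter" baseline that CoDA generalizes beyond, a minimal residual bottleneck adapter looks like the sketch below; CoDA's distinguishing conditional computation for faster inference is not shown here, and the dimensions are illustrative.

```python
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Residual bottleneck adapter: down-project, nonlinearity, up-project, add back."""

    def __init__(self, d_model: int = 768, d_bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(d_model, d_bottleneck)
        self.up = nn.Linear(d_bottleneck, d_model)
        nn.init.zeros_(self.up.weight)   # near-identity at initialization
        nn.init.zeros_(self.up.bias)

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        return hidden + self.up(torch.relu(self.down(hidden)))

adapter = BottleneckAdapter()
out = adapter(torch.randn(2, 10, 768))   # inserted after a frozen transformer sublayer
```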
Multilingual large language model: A survey of resources, taxonomy and frontiers
Multilingual Large Language Models leverage powerful Large Language Models to handle and respond to queries in multiple languages, achieving remarkable …
SLM: Bridge the thin gap between speech and text foundation models
We present a joint Speech and Language Model (SLM), a multitask, multilingual, and dual-modal model that takes advantage of pretrained foundational speech and language models …
Understanding and mitigating language confusion in LLMs
We investigate a surprising limitation of LLMs: their inability to consistently generate text in a user's desired language. We create the Language Confusion Benchmark (LCB) to evaluate …
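A toy version of this kind of evaluation checks whether model outputs match the user's requested language. The examples and the use of the langdetect library are assumptions for illustration, not the benchmark's actual data or tooling.

```python
from langdetect import detect  # third-party language identifier: pip install langdetect

# Each record pairs a requested response language with a model output.
outputs = [
    {"expected_lang": "de", "text": "Die Hauptstadt von Frankreich ist Paris."},
    {"expected_lang": "fr", "text": "The capital of France is Paris."},  # confused: English
]

matches = [detect(o["text"]) == o["expected_lang"] for o in outputs]
print(f"Language match rate: {sum(matches) / len(matches):.0%}")  # 50% on this toy set
```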
PrivacyMind: large language models can be contextual privacy protection learners
The proliferation of Large Language Models (LLMs) has driven considerable interest in fine-tuning them with domain-specific data to create specialized language models. Nevertheless …
QAmeleon: Multilingual QA with Only 5 Examples
The availability of large, high-quality datasets has been a major driver of recent progress in question answering (QA). Such annotated datasets, however, are difficult and costly to …
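In the spirit of this entry, data synthesis from a handful of examples can be sketched as few-shot prompting: concatenate seed QA pairs and ask a large LM to continue on unlabeled passages. The prompt format below is an assumption, and call_llm is a hypothetical stub for whatever model API is used.

```python
# `call_llm` is a hypothetical stub; the prompt format is an assumption.
seed_examples = [
    {"context": "Paris is the capital of France.",
     "question": "What is the capital of France?",
     "answer": "Paris"},
]

def build_prompt(new_context: str) -> str:
    """Few-shot prompt: seed QA pairs followed by an unlabeled passage to annotate."""
    shots = "".join(
        f"Context: {e['context']}\nQuestion: {e['question']}\nAnswer: {e['answer']}\n\n"
        for e in seed_examples
    )
    return shots + f"Context: {new_context}\nQuestion:"

prompt = build_prompt("Berlin is the capital of Germany.")
# completion = call_llm(prompt)  # parse into a new (question, answer) training pair
```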