Advances, challenges and opportunities in creating data for trustworthy AI
As artificial intelligence (AI) transitions from research to deployment, creating the appropriate
datasets and data pipelines to develop and evaluate AI models is increasingly the biggest …
Survey of hallucination in natural language generation
Natural Language Generation (NLG) has improved exponentially in recent years thanks to
the development of sequence-to-sequence deep learning technologies such as Transformer …
QLoRA: Efficient finetuning of quantized LLMs
We present QLoRA, an efficient finetuning approach that reduces memory usage enough to
finetune a 65B parameter model on a single 48GB GPU while preserving full 16-bit …
DataComp: In search of the next generation of multimodal datasets
Multimodal datasets are a critical component in recent breakthroughs such as CLIP, Stable
Diffusion and GPT-4, yet their design does not receive the same research attention as model …
Should ChatGPT be biased? Challenges and risks of bias in large language models
As the capabilities of generative language models continue to advance, the implications of
biases ingrained within these models have garnered increasing attention from researchers …
Large language models are not fair evaluators
In this paper, we uncover a systematic bias in the evaluation paradigm of adopting large
language models (LLMs), e.g., GPT-4, as a referee to score and compare the quality of …
GPT-NER: Named entity recognition via large language models
Although large-scale Language Models (LLMs) have achieved SOTA
performance on a variety of NLP tasks, their performance on NER is still significantly below …
TrustLLM: Trustworthiness in large language models
Large language models (LLMs), exemplified by ChatGPT, have gained considerable
attention for their excellent natural language processing capabilities. Nonetheless, these …
Unnatural instructions: Tuning language models with (almost) no human labor
Instruction tuning enables pretrained language models to perform new tasks from inference-
time natural language descriptions. These approaches rely on vast amounts of human …
The debate over understanding in AI's large language models
We survey a current, heated debate in the artificial intelligence (AI) research community on
whether large pretrained language models can be said to understand language—and the …