Tool learning with foundation models
Humans possess an extraordinary ability to create and utilize tools. With the advent of
foundation models, artificial intelligence systems have the potential to be equally adept in …
Survey on factuality in large language models: Knowledge, retrieval and domain-specificity
This survey addresses the crucial issue of factuality in Large Language Models (LLMs). As
LLMs find applications across diverse domains, the reliability and accuracy of their outputs …
A survey on hallucination in large language models: Principles, taxonomy, challenges, and open questions
The emergence of large language models (LLMs) has marked a significant breakthrough in
natural language processing (NLP), fueling a paradigm shift in information acquisition …
Fine-tuning aligned language models compromises safety, even when users do not intend to!
Optimizing large language models (LLMs) for downstream use cases often involves the
customization of pre-trained LLMs through further fine-tuning. Meta's open release of Llama …
The Flan Collection: Designing data and methods for effective instruction tuning
We study the design decisions of publicly available instruction tuning methods, by
reproducing and breaking down the development of Flan 2022 (Chung et al., 2022) …
Trustworthy LLMs: A survey and guideline for evaluating large language models' alignment
Ensuring alignment, which refers to making models behave in accordance with human
intentions [1, 2], has become a critical task before deploying large language models (LLMs) …
When not to trust language models: Investigating effectiveness of parametric and non-parametric memories
Despite their impressive performance on diverse tasks, large language models (LMs) still
struggle with tasks requiring rich world knowledge, implying the limitations of relying solely …
A pretrainer's guide to training data: Measuring the effects of data age, domain coverage, quality, & toxicity
Pretraining data design is critically under-documented and often guided by empirically
unsupported intuitions. We pretrain models on data curated (1) at different collection …
RARR: Researching and revising what language models say, using language models
Language models (LMs) now excel at many tasks such as few-shot learning, question
answering, reasoning, and dialog. However, they sometimes generate unsupported or …
Trusting your evidence: Hallucinate less with context-aware decoding
Language models (LMs) often struggle to pay enough attention to the input context,
and generate texts that are unfaithful or contain hallucinations. To mitigate this issue, we …