Combating misinformation in the age of LLMs: Opportunities and challenges
Misinformation such as fake news and rumors is a serious threat to information ecosystems
and public trust. The emergence of large language models (LLMs) has great potential to …
When Can LLMs Actually Correct Their Own Mistakes? A Critical Survey of Self-Correction of LLMs
Self-correction is an approach to improving responses from large language models (LLMs)
by refining the responses using LLMs during inference. Prior work has proposed various self …
GPT-4 technical report
We report the development of GPT-4, a large-scale, multimodal model which can accept
image and text inputs and produce text outputs. While less capable than humans in many …
BeaverTails: Towards improved safety alignment of LLM via a human-preference dataset
In this paper, we introduce the BeaverTails dataset, aimed at fostering research on safety
alignment in large language models (LLMs). This dataset uniquely separates annotations of …
Holistic evaluation of language models
Language models (LMs) are becoming the foundation for almost all major language
technologies, but their capabilities, limitations, and risks are not well understood. We present …
Safe RLHF: Safe reinforcement learning from human feedback
With the development of large language models (LLMs), striking a balance between the
performance and safety of AI systems has never been more critical. However, the inherent …
Can LLM-generated misinformation be detected?
The advent of Large Language Models (LLMs) has made a transformative impact. However,
the potential that LLMs such as ChatGPT can be exploited to generate misinformation has …
Auditing large language models: a three-layered approach
Large language models (LLMs) represent a major advance in artificial intelligence (AI)
research. However, the widespread use of LLMs is also coupled with significant ethical and …
The capacity for moral self-correction in large language models
We test the hypothesis that language models trained with reinforcement learning from
human feedback (RLHF) have the capability to "morally self-correct" -- to avoid producing …
Evaluating the social impact of generative AI systems in systems and society
Generative AI systems across modalities, ranging from text (including code), image, audio,
and video, have broad social impacts, but there is no official standard for means of …