Proactive conversational agents in the post-ChatGPT world
ChatGPT and similar large language model (LLM) based conversational agents have
brought shock waves to the research world. Although astonished by their human-like …
SafetyPrompts: a systematic review of open datasets for evaluating and improving large language model safety
The last two years have seen a rapid growth in concerns around the safety of large
language models (LLMs). Researchers and practitioners have met these concerns by …
XSTest: A test suite for identifying exaggerated safety behaviours in large language models
Without proper safeguards, large language models will readily follow malicious instructions
and generate toxic content. This risk motivates safety efforts such as red-teaming and large …
" I'm Not Sure, But...": Examining the Impact of Large Language Models' Uncertainty Expression on User Reliance and Trust
Widely deployed large language models (LLMs) can produce convincing yet incorrect
outputs, potentially misleading users who may rely on them as if they were correct. To …
ProsocialDialog: A prosocial backbone for conversational agents
Most existing dialogue systems fail to respond properly to potentially unsafe user utterances
by either ignoring or passively agreeing with them. To address this issue, we introduce …
ROBBIE: Robust bias evaluation of large generative language models
As generative large language models (LLMs) grow more performant and prevalent, we must
develop comprehensive enough tools to measure and improve their fairness. Different …
Mirages: On anthropomorphism in dialogue systems
Automated dialogue or conversational systems are anthropomorphised by developers and
personified by users. While a degree of anthropomorphism may be inevitable due to the …
The ethics of advanced AI assistants
This paper focuses on the opportunities and the ethical and societal risks posed by
advanced AI assistants. We define advanced AI assistants as artificial agents with natural …
DICES dataset: Diversity in conversational AI evaluation for safety
Machine learning approaches often require training and evaluation datasets with a
clear separation between positive and negative examples. This requirement overly …
Gaining wisdom from setbacks: Aligning large language models via mistake analysis
The rapid development of large language models (LLMs) has not only provided numerous
opportunities but also presented significant challenges. This becomes particularly evident …