Towards trustworthy LLMs: a review on debiasing and dehallucinating in large language models
Z Lin, S Guan, W Zhang, H Zhang, Y Li… - Artificial Intelligence …, 2024 - Springer
Recently, large language models (LLMs) have attracted considerable attention due to their
remarkable capabilities. However, LLMs' generation of biased or hallucinatory content …
Siren's song in the AI ocean: a survey on hallucination in large language models
While large language models (LLMs) have demonstrated remarkable capabilities across a
range of downstream tasks, a significant concern revolves around their propensity to exhibit …
ChatGPT as research scientist: probing GPT's capabilities as a research librarian, research ethicist, data generator, and data predictor
How good a research scientist is ChatGPT? We systematically probed the capabilities of
GPT-3.5 and GPT-4 across four central components of the scientific process: as a Research …
The dawn after the dark: An empirical study on factuality hallucination in large language models
In the era of large language models (LLMs), hallucination (i.e., the tendency to generate
factually incorrect content) poses a great challenge to the trustworthy and reliable deployment of …
Fine-tune language models to approximate unbiased in-context learning
In-context learning (ICL) is an astonishing emergent ability of large language models
(LLMs). By presenting a prompt that includes multiple input-output pairs as examples and …
Silver lining in the fake news cloud: Can large language models help detect misinformation?
In an era of advanced generative artificial intelligence, distinguishing truth from fallacy
and deception has become a critical societal challenge. This research attempts to analyze …
When XGBoost outperforms GPT-4 on text classification: A case study
Large language models (LLMs) are increasingly used for applications beyond text
generation, ranging from text summarization to instruction following. One popular example of …
Journey of Hallucination-minimized Generative AI Solutions for Financial Decision Makers
S Roychowdhury - Proceedings of the 17th ACM International …, 2024 - dl.acm.org
Generative AI has significantly reduced the entry barrier to the domain of AI owing to the
ease of use and core capabilities of automation, translation, and intelligent actions in our …
Behind the scenes: A critical perspective on genAI and open educational practices
Artificial Intelligence (AI) is a rapidly evolving field that is influencing every aspect of life.
Generative AI (GenAI) as a sub-branch of AI is used to create content in various formats such …
HalluSafe at SemEval-2024 task 6: An NLI-based approach to make LLMs safer by better detecting hallucinations and overgeneration mistakes
The advancement of large language models (LLMs), their ability to produce eloquent and
fluent content, and their vast knowledge have resulted in their usage in various tasks and …