Combating misinformation in the age of LLMs: Opportunities and challenges

C Chen, K Shu - AI Magazine, 2024 - Wiley Online Library
Misinformation such as fake news and rumors is a serious threat to information ecosystems
and public trust. The emergence of large language models (LLMs) has great potential to …

Fighting disinformation with artificial intelligence: fundamentals, advances and challenges

A Montoro Montarroso, J Cantón-Correa… - 2023 - digibug.ugr.es
The Internet and social media have revolutionised the way news is distributed and consumed.
However, the constant flow of massive amounts of content has made it difficult to discern …

A survey on automated fact-checking

Z Guo, M Schlichtkrull, A Vlachos - Transactions of the Association for …, 2022 - direct.mit.edu
Fact-checking has become increasingly important due to the speed with which both
information and misinformation can spread in the modern media ecosystem. Therefore …

AVeriTeC: A dataset for real-world claim verification with evidence from the web

M Schlichtkrull, Z Guo… - Advances in Neural …, 2023 - proceedings.neurips.cc
Existing datasets for automated fact-checking have substantial limitations, such as relying on
artificial claims, lacking annotations for evidence and intermediate reasoning, or including …

Fact-checking complex claims with program-guided reasoning

L Pan, X Wu, X Lu, AT Luu, WY Wang, MY Kan… - arXiv preprint arXiv …, 2023 - arxiv.org
Fact-checking real-world claims often requires collecting multiple pieces of evidence and
applying complex multi-step reasoning. In this paper, we present Program-Guided Fact …

FEVEROUS: Fact extraction and verification over unstructured and structured information

R Aly, Z Guo, M Schlichtkrull, J Thorne… - arXiv preprint arXiv …, 2021 - arxiv.org
Fact verification has attracted a lot of attention in the machine learning and natural language
processing communities, as it is one of the key methods for detecting misinformation …

The state of human-centered NLP technology for fact-checking

A Das, H Liu, V Kovatchev, M Lease - Information processing & …, 2023 - Elsevier
Misinformation threatens modern society by promoting distrust in science, changing
narratives in public health, heightening social polarization, and disrupting democratic …

Language generation models can cause harm: So what can we do about it? An actionable survey

S Kumar, V Balachandran, L Njoo… - arXiv preprint arXiv …, 2022 - arxiv.org
Recent advances in the capacity of large language models to generate human-like text have
resulted in their increased adoption in user-facing settings. In parallel, these improvements …

Fine-grained hallucination detection and editing for language models

A Mishra, A Asai, V Balachandran, Y Wang… - arXiv preprint arXiv …, 2024 - arxiv.org
Large language models (LMs) are prone to generating factual errors, which are often called
hallucinations. In this paper, we introduce a comprehensive taxonomy of hallucinations and …

LongEval: Guidelines for human evaluation of faithfulness in long-form summarization

K Krishna, E Bransom, B Kuehl, M Iyyer… - arXiv preprint arXiv …, 2023 - arxiv.org
While human evaluation remains best practice for accurately judging the faithfulness of
automatically-generated summaries, few solutions exist to address the increased difficulty …