Large legal fictions: Profiling legal hallucinations in large language models

M Dahl, V Magesh, M Suzgun… - Journal of Legal Analysis, 2024 - academic.oup.com
Do large language models (LLMs) know the law? LLMs are increasingly being used to
augment legal practice, education, and research, yet their revolutionary potential is …

Fake News in Sheep's Clothing: Robust Fake News Detection Against LLM-Empowered Style Attacks

J Wu, J Guo, B Hooi - Proceedings of the 30th ACM SIGKDD conference …, 2024 - dl.acm.org
It is commonly perceived that fake news and real news exhibit distinct writing styles, such as
the use of sensationalist versus objective language. However, we emphasize that style …

Open Science at the generative AI turn: An exploratory analysis of challenges and opportunities

M Hosseini, SPJM Horbach, K Holmes… - Quantitative Science …, 2024 - direct.mit.edu
Technology influences Open Science (OS) practices, because conducting science
in transparent, accessible, and participatory ways requires tools and platforms for …

Let silence speak: Enhancing fake news detection with generated comments from large language models

Q Nan, Q Sheng, J Cao, B Hu, D Wang… - Proceedings of the 33rd …, 2024 - dl.acm.org
Fake news detection plays a crucial role in protecting social media users and maintaining a
healthy news ecosystem. Among existing works, comment-based fake news detection …

Silver lining in the fake news cloud: Can large language models help detect misinformation?

R Kumar, B Goddu, S Saha… - IEEE Transactions on …, 2024 - ieeexplore.ieee.org
In the times of advanced generative artificial intelligence, distinguishing truth from fallacy
and deception has become a critical societal challenge. This research attempts to analyze …

Detecting AI-generated text: Factors influencing detectability with current methods

KC Fraser, H Dawkins, S Kiritchenko - arXiv preprint arXiv:2406.15583, 2024 - arxiv.org
Large language models (LLMs) have advanced to a point that even humans have difficulty
discerning whether a text was generated by another human or by a computer. However …

DELL: Generating reactions and explanations for LLM-based misinformation detection

H Wan, S Feng, Z Tan, H Wang, Y Tsvetkov… - arXiv preprint arXiv …, 2024 - arxiv.org
Challenges in factuality and hallucination limit large language models from being
directly employed off-the-shelf to judge the veracity of news articles, where factual …

A survey of AI-generated text forensic systems: Detection, attribution, and characterization

T Kumarage, G Agrawal, P Sheth, R Moraffah… - arXiv preprint arXiv …, 2024 - arxiv.org
We have witnessed lately a rapid proliferation of advanced Large Language Models (LLMs)
capable of generating high-quality text. While these LLMs have revolutionized text …

The ethical security of large language models: A systematic review

F Liu, J Jiang, Y Lu, Z Huang, J Jiang - Frontiers of Engineering …, 2025 - Springer
The widespread application of large language models (LLMs) has highlighted new security
challenges and ethical concerns, attracting significant academic and societal attention …

Entropy guided extrapolative decoding to improve factuality in large language models

S Das, L **, L Song, H Mi, B Peng, D Yu - arXiv preprint arXiv:2404.09338, 2024 - arxiv.org
Large language models (LLMs) exhibit impressive natural language capabilities but suffer
from hallucination: generating content ungrounded in the realities of training data. Recent …