Large language models for data annotation and synthesis: A survey

Z Tan, D Li, S Wang, A Beigi, B Jiang… - arXiv preprint arXiv …, 2024 - arxiv.org
Data annotation and synthesis generally refer to the labeling or generation of raw data with
relevant information, which can be used to improve the efficacy of machine learning …

Counterfactual debating with preset stances for hallucination elimination of LLMs

Y Fang, M Li, W Wang, H Lin, F Feng - arXiv preprint arXiv:2406.11514, 2024 - arxiv.org
Large Language Models (LLMs) excel in various natural language processing tasks but
struggle with hallucination issues. Existing solutions have considered utilizing LLMs' …

Did you tell a deadly lie? Evaluating large language models for health misinformation identification

S Thapa, K Rauniyar, H Veeramani, A Shah… - … Conference on Web …, 2024 - Springer
The rapid spread of health misinformation online poses significant challenges to public
health, potentially leading to confusion, undermining trust in health authorities, and …

SINAI participation in SimpleText task 2 at CLEF 2024: zero-shot prompting on GPT-4-turbo for lexical complexity prediction

J Ortiz-Zambrano, C Espin-Riofrio… - Working Notes of the …, 2024 - ceur-ws.org
In this article, we present our participation in Tasks 2.1 and 2.2 of the SimpleText track of
CLEF 2024. Our work focused on the implementation of zero-shot learning using the GPT-4 …

Estimating Causal Effects of Text Interventions Leveraging LLMs

S Guo, MG Marmarelis, F Morstatter… - arXiv preprint arXiv …, 2024 - arxiv.org
Quantifying the effect of textual interventions in social systems, such as reducing anger in
social media posts to see its impact on engagement, poses significant challenges. Direct …

FitCF: A Framework for Automatic Feature Importance-guided Counterfactual Example Generation

Q Wang, N Feldhus, S Ostermann… - arXiv preprint arXiv …, 2025 - arxiv.org
Counterfactual examples are widely used in natural language processing (NLP) as valuable
data to improve models, and in explainable artificial intelligence (XAI) to understand model …

Interpreting Language Reward Models via Contrastive Explanations

J Jiang, T Bewley, S Mishra, F Lecue… - arXiv preprint arXiv …, 2024 - arxiv.org
Reward models (RMs) are a crucial component in the alignment of large language
models' (LLMs) outputs with human values. RMs approximate human preferences over …

SCENE: Evaluating Explainable AI Techniques Using Soft Counterfactuals

H Zheng, U Pamuksuz - arXiv preprint arXiv:2408.04575, 2024 - arxiv.org
Explainable Artificial Intelligence (XAI) plays a crucial role in enhancing the transparency
and accountability of AI models, particularly in natural language processing (NLP) tasks …

Large language models and causal analysis: zero-shot counterfactuals in hate speech perception

S Hernández Jiménez - 2024 - diposit.ub.edu
Detecting hate speech is crucial for maintaining the integrity of social media platforms,
as it involves identifying content that denigrates individuals or groups based on their …