Literature Review of AI Hallucination Research Since the Advent of ChatGPT: Focusing on Papers from arxiv

DM Park, HJ Lee - Informatization Policy, 2024 - koreascience.kr
Hallucination is a significant barrier to the utilization of large-scale language models or
multimodal models. In this study, we collected 654 computer science papers with …

[PDF] Literature Review of AI Hallucination Research Since the Advent of ChatGPT: Focusing on Papers from arxiv (in Korean)

DM Park, HJ Lee - Informatization Policy, 2024 - raw.githubusercontent.com
Hallucination is a significant barrier to the utilization of large-scale language models or
multimodal models. In this study, we collected 654 computer science papers with …

Constraints on AI: Arab Journalists' experiences and perceptions of governmental restrictions on ChatGPT

MS AlAshry, W Al-Saqaf - Journal of Information Technology & …, 2024 - Taylor & Francis
This study investigates the impact of Arab governmental restrictions on journalists' use of
ChatGPT, a leading Generative AI chatbot. Through interviews with 30 journalists from Syria …

Dependency-Aware Semi-Structured Sparsity of GLU Variants in Large Language Models

Z Guo, H Kamigaito, T Watanabe - arXiv preprint arXiv:2405.01943, 2024 - arxiv.org
The rapid advancement in Large Language Models (LLMs) has markedly enhanced the
capabilities of language understanding and generation. However, the substantial model size …

Investigating the Learning Dynamics of Conditional Language Models

G Gavrilas - 2024 - research-collection.ethz.ch
Conditional language models estimate a conditional probability over natural language
strings. The learning dynamics of these models are still not entirely understood. The goal of …