Confabulation: The surprising value of large language model hallucinations

P Sui, E Duede, S Wu, RJ So - arXiv preprint arXiv:2406.04175, 2024 - arxiv.org
This paper presents a systematic defense of large language model (LLM) hallucinations
or 'confabulations' as a potential resource instead of a categorically negative pitfall. The …

Reliability in machine learning

T Grote, K Genin, E Sullivan - Philosophy Compass, 2024 - Wiley Online Library
Issues of reliability are claiming center‐stage in the epistemology of machine learning. This
paper unifies different branches in the literature and points to promising research directions …

The case for generative AI in scholarly practice

C Berg - Available at SSRN 4407587, 2023 - papers.ssrn.com
This paper defends the use of generative artificial intelligence (AI) in scholarship and argues
for its legitimacy as a valuable tool for contemporary research practice. It uses an emergent …

Roadmap on machine learning glassy dynamics

G Jung, RM Alkemade, V Bapst, D Coslovich… - arXiv preprint arXiv …, 2023 - arxiv.org
Unraveling the connections between microscopic structure, emergent physical properties,
and slow dynamics has long been a challenge when studying the glass transition. The …

Instruments, agents, and artificial intelligence: novel epistemic categories of reliability

E Duede - Synthese, 2022 - Springer
Deep learning (DL) has become increasingly central to science, primarily due to its capacity
to quickly, efficiently, and accurately predict and classify phenomena of scientific interest …

Exploración filosófica de la epistemología de la inteligencia artificial: Una revisión sistemática

D Román-Acosta - Revista Uniandes Episteme, 2024 - revista.uniandes.edu.ec
This work explored the philosophical intersection of artificial intelligence through a
systematic review that addressed the epistemology and the authenticity of the understanding of …

SIDEs: Separating Idealization from Deceptive 'Explanations' in xAI

E Sullivan - Proceedings of the 2024 ACM Conference on Fairness …, 2024 - dl.acm.org
Explainable AI (xAI) methods are important for establishing trust in using black-box models.
However, recent criticism has mounted against current xAI methods that they disagree, are …

We have no satisfactory social epistemology of AI-based science

I Koskinen - Social Epistemology, 2024 - Taylor & Francis
In the social epistemology of scientific knowledge, it is largely accepted that relationships of
trust, not just reliance, are necessary in contemporary collaborative science characterised by …

Trust, explainability and AI

S Baron - Philosophy & Technology, 2025 - Springer
There has been a surge of interest in explainable artificial intelligence (XAI). It is commonly
claimed that explainability is necessary for trust in AI, and that this is why we need it. In this …

Roadmap on machine learning glassy dynamics

G Jung, RM Alkemade, V Bapst, D Coslovich… - Nature Reviews …, 2025 - nature.com
Unravelling the connections between microscopic structure, emergent physical properties
and slow dynamics has long been a challenge when studying the glass transition. The …