Should ChatGPT be biased? Challenges and risks of bias in large language models
E Ferrara - arXiv preprint arXiv:2304.03738, 2023 - arxiv.org
As the capabilities of generative language models continue to advance, the implications of
biases ingrained within these models have garnered increasing attention from researchers …
TrustLLM: Trustworthiness in large language models
Large language models (LLMs), exemplified by ChatGPT, have gained considerable
attention for their excellent natural language processing capabilities. Nonetheless, these …
The butterfly effect in artificial intelligence systems: Implications for AI bias and fairness
E Ferrara - Machine Learning with Applications, 2024 - Elsevier
The concept of the Butterfly Effect, derived from chaos theory, highlights how seemingly
minor changes can lead to significant, unpredictable outcomes in complex systems. This …
Position: TrustLLM: Trustworthiness in large language models
Large language models (LLMs) have gained considerable attention for their excellent
natural language processing capabilities. Nonetheless, these LLMs present many …
Foundational challenges in assuring alignment and safety of large language models
This work identifies 18 foundational challenges in assuring the alignment and safety of large
language models (LLMs). These challenges are organized into three different categories …
Striking the balance in using LLMs for fact-checking: A narrative literature review
The launch of ChatGPT at the end of November 2022 triggered a general reflection on its
benefits for supporting fact-checking workflows and practices. Between the excitement of the …
ExpertQA: Expert-curated questions and attributed answers
As language models are adapted by a more sophisticated and diverse set of users, the
importance of guaranteeing that they provide factually correct information supported by …
Factcheck-Bench: Fine-grained evaluation benchmark for automatic fact-checkers
The increased use of large language models (LLMs) across a variety of real-world
applications calls for mechanisms to verify the factual accuracy of their outputs. In this work …
Enhancing contextual understanding of Mistral LLM with external knowledge bases
M Sasaki, N Watanabe, T Komanaka - 2024 - assets-eu.researchsquare.com
This study explores the enhancement of contextual understanding and factual accuracy in
Language Learning Models (LLMs), specifically Mistral LLM, through the integration of …
Factcheck-GPT: End-to-End Fine-Grained Document-Level Fact-Checking and Correction of LLM Output
The increased use of large language models (LLMs) across a variety of real-world
applications calls for mechanisms to verify the factual accuracy of their outputs. In this work …