Survey on factuality in large language models: Knowledge, retrieval and domain-specificity

C Wang, X Liu, Y Yue, X Tang, T Zhang… - arXiv preprint arXiv …, 2023 - arxiv.org
This survey addresses the crucial issue of factuality in Large Language Models (LLMs). As
LLMs find applications across diverse domains, the reliability and accuracy of their outputs …

Large language model as attributed training data generator: A tale of diversity and bias

Y Yu, Y Zhuang, J Zhang, Y Meng… - Advances in …, 2024 - proceedings.neurips.cc
Large language models (LLMs) have been recently leveraged as training data generators
for various natural language processing (NLP) tasks. While previous research has explored …

MedAgents: Large language models as collaborators for zero-shot medical reasoning

X Tang, A Zou, Z Zhang, Z Li, Y Zhao, X Zhang… - arXiv preprint arXiv …, 2023 - arxiv.org
Large language models (LLMs), despite their remarkable progress across various general
domains, encounter significant barriers in medicine and healthcare. This field faces unique …

Adapting large language models via reading comprehension

D Cheng, S Huang, F Wei - The Twelfth International Conference on …, 2023 - openreview.net
We explore how continued pre-training on domain-specific corpora influences large
language models, revealing that training on the raw corpora endows the model with domain …

Towards trustworthy and aligned machine learning: A data-centric survey with causality perspectives

H Liu, M Chaudhary, H Wang - arXiv preprint arXiv:2307.16851, 2023 - arxiv.org
The trustworthiness of machine learning has emerged as a critical topic in the field,
encompassing various applications and research areas such as robustness, security …

Large language models (LLMs): survey, technical frameworks, and future challenges

P Kumar - Artificial Intelligence Review, 2024 - Springer
Artificial intelligence (AI) has significantly impacted various fields. Large language models
(LLMs) like GPT-4, BARD, PaLM, Megatron-Turing NLG, Jurassic-1 Jumbo etc., have …

Dynamic supplementation of federated search results for reducing hallucinations in LLMs

J Chen, X Huang, Y Li - 2024 - files.osf.io
The increasing use of AI-generated content has highlighted the critical issue of
hallucinations, where models produce factually incorrect or misleading outputs. Addressing …

Large Language Models Can Be Contextual Privacy Protection Learners

Y Xiao, Y Jin, Y Bai, Y Wu, X Yang, X Luo… - Proceedings of the …, 2024 - aclanthology.org
The proliferation of Large Language Models (LLMs) has driven considerable
interest in fine-tuning them with domain-specific data to create specialized language …

MELO: Enhancing model editing with neuron-indexed dynamic LoRA

L Yu, Q Chen, J Zhou, L He - Proceedings of the AAAI Conference on …, 2024 - ojs.aaai.org
Large language models (LLMs) have shown great success in various Natural Language
Processing (NLP) tasks, whilst they still need updates after deployment to fix errors or keep …
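
As a rough illustration of the general idea behind neuron-indexed dynamic LoRA editing (not the MELO method itself), the toy PyTorch sketch below keeps the base weight frozen, stores one small low-rank delta per edit, and activates a delta only when a query is routed to that edit's scope; all class and argument names here are hypothetical.

```python
# Toy sketch of per-edit low-rank adapters (illustrative only, not MELO).
import torch
import torch.nn as nn


class EditableLinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 4):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # base weights stay frozen
        self.rank = rank
        self.deltas = nn.ModuleDict()        # edit_key -> low-rank delta

    def add_edit(self, edit_key: str):
        in_f, out_f = self.base.in_features, self.base.out_features
        # LoRA-style factorisation: delta W = B @ A, with B initialised to zero
        delta = nn.ModuleDict({
            "A": nn.Linear(in_f, self.rank, bias=False),
            "B": nn.Linear(self.rank, out_f, bias=False),
        })
        nn.init.zeros_(delta["B"].weight)
        self.deltas[edit_key] = delta

    def forward(self, x, active_edit=None):
        y = self.base(x)
        if active_edit is not None and active_edit in self.deltas:
            d = self.deltas[active_edit]
            y = y + d["B"](d["A"](x))        # apply only the selected delta
        return y


# Usage: route a query to an edit (or to none) before the forward pass.
layer = EditableLinear(nn.Linear(16, 16), rank=4)
layer.add_edit("fact_42")
x = torch.randn(2, 16)
out_edited = layer(x, active_edit="fact_42")   # edited behaviour
out_plain = layer(x)                           # original behaviour
```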

Parameter-efficient fine-tuning of LLaMA for the clinical domain

AP Gema, P Minervini, L Daines, T Hope… - arXiv preprint arXiv …, 2023 - arxiv.org
Adapting pretrained language models to novel domains, such as clinical applications,
traditionally involves retraining their entire set of parameters. Parameter-Efficient Fine …
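
For readers unfamiliar with parameter-efficient fine-tuning, the minimal sketch below shows LoRA adaptation of a causal language model with the Hugging Face peft library; the checkpoint name, target modules, and hyperparameters are illustrative placeholders, not the configuration reported in this paper.

```python
# Minimal LoRA fine-tuning setup with Hugging Face peft (illustrative settings).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "huggyllama/llama-7b"           # placeholder base checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

lora_config = LoraConfig(
    r=16,                                    # rank of the low-rank update
    lora_alpha=32,                           # scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],     # attention projections in LLaMA
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()           # only adapter weights are trainable
# The wrapped model can then be trained on domain (e.g. clinical) text with a
# standard transformers Trainer loop; the base model weights remain frozen.
```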