Fine-tuning Large Language Models for Improving Factuality in Legal Question Answering

Y Hu, L Gan, W Xiao, K Kuang, F Wu - arXiv preprint arXiv:2501.06521, 2025 - arxiv.org
Hallucination, or the generation of incorrect or fabricated information, remains a critical
challenge in large language models (LLMs), particularly in high-stakes domains such as …