Improving factuality in large language models via decoding-time hallucinatory and truthful comparators

D Yang, D Xiao, J Wei, M Li, Z Chen, K Li… - arXiv preprint arXiv …, 2024 - arxiv.org
Despite their remarkable capabilities, Large Language Models (LLMs) are prone to
generating responses that contradict verifiable facts, i.e., unfaithful hallucination content …

Asynchronous Multimodal Video Sequence Fusion via Learning Modality-Exclusive and -Agnostic Representations

D Yang, M Li, L Qu, K Yang, P Zhai… - … on Circuits and …, 2024 - ieeexplore.ieee.org
Understanding human intentions (e.g., emotions) from videos has received considerable
attention recently. Video streams generally constitute a blend of temporal data stemming …

Towards Democratization of Subspeciality Medical Expertise

JW O'Sullivan, A Palepu, K Saab, WH Weng… - arXiv preprint arXiv …, 2024 - arxiv.org
The scarcity of subspecialist medical expertise, particularly in rare, complex, and
life-threatening diseases, poses a significant challenge for healthcare delivery. This issue is …

MedThink: Inducing Medical Large-scale Visual Language Models to Hallucinate Less by Thinking More

Y Jiang, J Chen, D Yang, M Li, S Wang, T Wu… - arXiv preprint arXiv …, 2024 - arxiv.org
When Large Vision Language Models (LVLMs) are applied to multimodal medical
generative tasks, they suffer from significant model hallucination issues. This severely …