DALK: Dynamic Co-Augmentation of LLMs and KG to answer Alzheimer's Disease Questions with Scientific Literature

D Li, S Yang, Z Tan, JY Baik, S Yun, J Lee… - arXiv preprint arXiv …, 2024 - arxiv.org
Recent advancements in large language models (LLMs) have achieved promising
performances across various applications. Nonetheless, the ongoing challenge of …

Multi-modal and multi-agent systems meet rationality: A survey

B Jiang, Y Xie, X Wang, WJ Su, CJ Taylor… - ICML 2024 Workshop …, 2024 - openreview.net
Rationality is characterized by logical thinking and decision-making that align with evidence
and logical rules. This quality is essential for effective problem-solving, as it ensures that …

A survey of conversational search

F Mo, K Mao, Z Zhao, H Qian, H Chen, Y Cheng… - arXiv preprint arXiv …, 2024 - arxiv.org
As a cornerstone of modern information access, search engines have become
indispensable in everyday life. With the rapid advancements in AI and natural language …

Simple is effective: The roles of graphs and large language models in knowledge-graph-based retrieval-augmented generation

M Li, S Miao, P Li - arXiv preprint arXiv:2410.20724, 2024 - arxiv.org
Large Language Models (LLMs) demonstrate strong reasoning abilities but face limitations
such as hallucinations and outdated knowledge. Knowledge Graph (KG)-based Retrieval …

From persona to personalization: A survey on role-playing language agents

J Chen, X Wang, R Xu, S Yuan, Y Zhang, W Shi… - arXiv preprint arXiv …, 2024 - arxiv.org
Recent advancements in large language models (LLMs) have significantly boosted the rise
of Role-Playing Language Agents (RPLAs), i.e., specialized AI systems designed to simulate …

KAG: Boosting LLMs in professional domains via knowledge augmented generation

L Liang, M Sun, Z Gui, Z Zhu, Z Jiang, L Zhong… - arXiv preprint arXiv …, 2024 - arxiv.org
The recently developed retrieval-augmented generation (RAG) technology has enabled the
efficient construction of domain-specific applications. However, it also has limitations …

Unraveling cross-modality knowledge conflicts in large vision-language models

T Zhu, Q Liu, F Wang, Z Tu, M Chen - arXiv preprint arXiv:2410.03659, 2024 - arxiv.org
Large Vision-Language Models (LVLMs) have demonstrated impressive capabilities for
capturing and reasoning over multimodal inputs. However, these models are prone to …

Making long-context language models better multi-hop reasoners

Y Li, S Liang, MR Lyu, L Wang - arXiv preprint arXiv:2408.03246, 2024 - arxiv.org
Recent advancements in long-context modeling have enhanced language models (LMs) for
complex tasks across multiple NLP applications. Despite this progress, we find that these …

Can We Rely on LLM Agents to Draft Long-Horizon Plans? Let's Take TravelPlanner as an Example

Y Chen, A Pesaranghader, T Sadhu, DH Yi - arXiv preprint arXiv …, 2024 - arxiv.org
Large language models (LLMs) have brought autonomous agents closer to artificial general
intelligence (AGI) due to their promising generalization and emergent capabilities. There is …

Adaptive Contrastive Decoding in Retrieval-Augmented Generation for Handling Noisy Contexts

Y Kim, HJ Kim, C Park, C Park, H Cho, J Kim… - arXiv preprint arXiv …, 2024 - arxiv.org
When using large language models (LLMs) in knowledge-intensive tasks, such as open-
domain question answering, external context can bridge the gap between external …