From persona to personalization: A survey on role-playing language agents

J Chen, X Wang, R Xu, S Yuan, Y Zhang, W Shi… - arXiv preprint arXiv …, 2024 - arxiv.org
Recent advancements in large language models (LLMs) have significantly boosted the rise
of Role-Playing Language Agents (RPLAs), i.e., specialized AI systems designed to simulate …

Contextualization distillation from large language model for knowledge graph completion

D Li, Z Tan, T Chen, H Liu - arXiv preprint arXiv:2402.01729, 2024 - arxiv.org
While textual information significantly enhances the performance of pre-trained language
models (PLMs) in knowledge graph completion (KGC), the static and noisy nature of existing …

DALK: Dynamic Co-Augmentation of LLMs and KG to answer Alzheimer's Disease Questions with Scientific Literature

D Li, S Yang, Z Tan, JY Baik, S Yun, J Lee… - arXiv preprint arXiv …, 2024 - arxiv.org
Recent advancements in large language models (LLMs) have achieved promising
performance across various applications. Nonetheless, the ongoing challenge of …

A new benchmark and reverse validation method for passage-level hallucination detection

S Yang, R Sun, X Wan - arXiv preprint arXiv:2310.06498, 2023 - arxiv.org
Large Language Models (LLMs) have shown their ability to collaborate effectively with
humans in real-world scenarios. However, LLMs are prone to generating hallucinations, i.e., …

Evaluating character understanding of large language models via character profiling from fictional works

X Yuan, S Yuan, Y Cui, T Lin, X Wang, R Xu… - arXiv preprint arXiv …, 2024 - arxiv.org
Large language models (LLMs) have demonstrated impressive performance and spurred
numerous AI applications, in which role-playing agents (RPAs) are particularly popular …

Balancing speciality and versatility: a coarse to fine framework for supervised fine-tuning large language model

H Zhang, Y Wu, D Li, S Yang, R Zhao, Y Jiang… - arXiv preprint arXiv …, 2024 - arxiv.org
Aligned Large Language Models (LLMs) showcase remarkable versatility and are capable of
handling diverse real-world tasks. Meanwhile, aligned LLMs are also expected to exhibit …

A question-centric multi-experts contrastive learning framework for improving the accuracy and interpretability of deep sequential knowledge tracing models

H Zhang, Z Liu, C Shang, D Li, Y Jiang - arXiv preprint arXiv:2403.07322, 2024 - arxiv.org
Knowledge tracing (KT) plays a crucial role in predicting students' future performance by
analyzing their historical learning processes. Deep neural networks (DNNs) have shown …

Improving low-resource knowledge tracing tasks by supervised pre-training and importance mechanism fine-tuning

H Zhang, Z Liu, S Huang, C Shang, B Zhan… - arXiv preprint arXiv …, 2024 - arxiv.org
Knowledge tracing (KT) aims to estimate students' knowledge mastery based on their
historical interactions. Recently, deep learning-based KT (DLKT) approaches have …

Understanding multimodal deep neural networks: A concept selection view

C Shang, H Zhang, H Wen, Y Yang - arXiv preprint arXiv:2404.08964, 2024 - arxiv.org
Multimodal deep neural networks, represented by CLIP, have enabled rich
downstream applications owing to their excellent performance, making understanding …

SMoA: Improving multi-agent large language models with sparse mixture-of-agents

D Li, Z Tan, P Qian, Y Li, KS Chaudhary, L Hu… - arXiv preprint arXiv …, 2024 - arxiv.org
While multi-agent systems have been shown to significantly enhance the performance of
Large Language Models (LLMs) across various tasks and applications, the dense …