Ronghan Li
Title
Cited by
Year
Multi-task learning with graph attention networks for multi-domain task-oriented dialogue systems
M Zhao, L Wang, Z Jiang, R Li, X Lu, Z Hu
Knowledge-Based Systems 259, 110069, 2023
Cited by 33 · 2023
Enhancing transformer-based language models with commonsense representations for knowledge-driven machine comprehension
R Li, Z Jiang, L Wang, X Lu, M Zhao, D Chen
Knowledge-Based Systems 220, 106936, 2021
Cited by 18 · 2021
Asynchronous Multi-grained Graph Network For Interpretable Multi-hop Reading Comprehension
R Li, L Wang, S Wang, Z Jiang
IJCAI, 3857-3863, 2021
Cited by 18 · 2021
Directional attention weaving for text-grounded conversational question answering
R Li, Z Jiang, L Wang, X Lu, M Zhao
Neurocomputing 391, 13-24, 2020
Cited by 8 · 2020
Dynamically retrieving knowledge via query generation for informative dialogue generation
Z Hu, L Wang, Y Chen, Y Liu, R Li, M Zhao, X Lu, Z Jiang
Neurocomputing 569, 127036, 2024
Cited by 6 · 2024
A multiturn complementary generative framework for conversational emotion recognition
L Wang, R Li, Y Wu, Z Jiang
International Journal of Intelligent Systems 37 (9), 5643-5671, 2022
Cited by 6 · 2022
Incremental BERT with commonsense representations for multi-choice reading comprehension
R Li, L Wang, Z Jiang, D Liu, M Zhao, X Lu
Multimedia Tools and Applications 80, 32311-32333, 2021
Cited by 6 · 2021
Candidate-Heuristic In-Context Learning: A new framework for enhancing medical visual question answering with LLMs
X Liang, D Wang, H Zhong, Q Wang, R Li, R Jia, B Wan
Information Processing & Management 61 (5), 103805, 2024
Cited by 4 · 2024
Mutually improved dense retriever and GNN-based reader for arbitrary-hop open-domain question answering
R Li, L Wang, Z Jiang, Z Hu, M Zhao, X Lu
Neural Computing and Applications 34 (14), 11831-11851, 2022
Cited by 4 · 2022
FRS: A simple knowledge graph embedding model for entity prediction
LF Wang, X Lu, Z Jiang, Z Zhang, R Li, M Zhao, D Chen
Mathematical Biosciences and Engineering 16 (6), 7789-7807, 2019
Cited by 4 · 2019
Representing RCPBAC (Role-Involved Conditional Purpose-Based Access Control) in Ontology and SWRL
R Li, Z Jiang, L Wang
Advances in Brain Inspired Cognitive Systems: 9th International Conference …, 2018
Cited by 4 · 2018
Dialogue summarization enhanced response generation for multi-domain task-oriented dialogue systems
L Wang, M Zhao, H Ji, Z Jiang, R Li, Z Hu, X Lu
Information Processing & Management 61 (3), 103668, 2024
Cited by 3 · 2024
Different paths to the same destination: Diversifying LLMs generation for multi-hop open-domain question answering
R Li, Y Wang, Z Wen, M Cui, Q Miao
Knowledge-Based Systems 309, 112789, 2025
Cited by 2 · 2025
An effective context-focused hierarchical mechanism for task-oriented dialogue response generation
M Zhao, Z Jiang, L Wang, R Li, X Lu, Z Hu, D Chen
Computational Intelligence 38 (5), 1831-1858, 2022
Cited by 2 · 2022
Divide and Conquer: Isolating Normal-Abnormal Attributes in Knowledge Graph-Enhanced Radiology Report Generation
X Liang, Y Zhang, D Wang, H Zhong, R Li, Q Wang
Proceedings of the 32nd ACM International Conference on Multimedia, 4967-4975, 2024
Cited by 1 · 2024
From easy to hard: Improving personalized response generation of task-oriented dialogue systems by leveraging capacity in open-domain dialogues
M Zhao, L Wang, Z Jiang, Y Liu, R Li, Z Hu, X Lu
Knowledge-Based Systems 295, 111843, 2024
Cited by 1 · 2024
Mutually improved response generation and dialogue summarization for multi-domain task-oriented dialogue systems
M Zhao, L Wang, H Ji, Z Jiang, R Li, X Lu, Z Hu
Knowledge-Based Systems 279, 110927, 2023
Cited by 1 · 2023
UniRQR: A Unified Model for Retrieval Decision, Query, and Response Generation in internet-based knowledge dialogue systems
Z Hu, Y Chen, M Zhao, R Li, L Wang
Expert Systems with Applications 270, 126494, 2025
2025
Advancing Multi-Party Dialogue Systems with Speaker-ware Contrastive Learning
Z Hu, Q He, R Li, M Zhao, L Wang
arXiv preprint arXiv:2501.11292, 2025
2025
Can xLLMs Understand the Structure of Dialog? Exploring Multilingual Response Generation in Complex Scenarios
Z Hu, Y Cui, R Li, M Zhao, L Wang
arXiv preprint arXiv:2501.11269, 2025
2025
Articles 1–20