ChatGPT is not enough: Enhancing large language models with knowledge graphs for fact-aware language modeling

L Yang, H Chen, Z Li, X Ding, X Wu - arXiv preprint arXiv:2306.11489, 2023 - arxiv.org
Recently, ChatGPT, a representative large language model (LLM), has gained considerable
attention due to its powerful emergent abilities. Some researchers suggest that LLMs could …

Prompt as triggers for backdoor attack: Examining the vulnerability in language models

S Zhao, J Wen, LA Tuan, J Zhao, J Fu - arXiv preprint arXiv:2305.01219, 2023 - arxiv.org
The prompt-based learning paradigm, which bridges the gap between pre-training and
fine-tuning, achieves state-of-the-art performance on several NLP tasks, particularly in few-shot …
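
The paradigm being attacked here is cloze-style prompting: a template turns classification into a fill-in-the-blank problem, and a verbalizer maps predicted words back to labels, so a poisoned prompt can tie a rare trigger phrase to an attacker-chosen label. A minimal sketch of the benign pipeline, assuming an illustrative template and verbalizer rather than anything from the paper:

```python
# Cloze-style prompt-based classification with a masked LM.
# TEMPLATE and VERBALIZER are illustrative assumptions.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

TEMPLATE = "{text} Overall, it was [MASK]."                  # assumed prompt template
VERBALIZER = {"great": "positive", "terrible": "negative"}   # label words -> classes

def classify(text: str) -> str:
    # Score only the verbalizer words at the [MASK] slot and take the
    # higher-probability word's class.
    preds = fill(TEMPLATE.format(text=text), targets=list(VERBALIZER))
    return VERBALIZER[preds[0]["token_str"]]

print(classify("The plot was engaging and the acting superb."))  # -> "positive"
```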

InstructDial: Improving zero and few-shot generalization in dialogue through instruction tuning

P Gupta, C Jiao, YT Yeh, S Mehri, M Eskenazi… - arXiv preprint arXiv …, 2022 - arxiv.org
Instruction tuning is an emergent paradigm in NLP wherein natural language instructions
are leveraged with language models to induce zero-shot performance on unseen tasks …
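
A minimal sketch of the data side of instruction tuning, assuming illustrative task names and templates rather than InstructDial's actual schema: each dialogue task is rendered as an (instruction + input, output) text pair, so one sequence-to-sequence model can be fine-tuned across tasks and prompted zero-shot on unseen ones.

```python
# Recast dialogue tasks as instruction-formatted text pairs.
# Task names and templates are illustrative assumptions.
def to_instruction_example(task: str, dialogue: list[str], target: str) -> dict:
    templates = {
        "response_generation": "Instruction: Given the dialogue, write the next response.",
        "intent_classification": "Instruction: Classify the intent of the last utterance.",
    }
    context = "\n".join(f"- {turn}" for turn in dialogue)
    return {
        "source": f"{templates[task]}\nDialogue:\n{context}",
        "target": target,  # the string the model is fine-tuned to generate
    }

ex = to_instruction_example(
    "response_generation",
    ["User: Book me a table for two."],
    "Agent: For what time?",
)
print(ex["source"], "->", ex["target"])
```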

Continual prompt tuning for dialog state tracking

Q Zhu, B Li, F Mi, X Zhu, M Huang - arXiv preprint arXiv:2203.06654, 2022 - arxiv.org
A desirable dialog system should be able to continually learn new skills without forgetting
old ones, and thereby adapt to new domains or tasks in its life cycle. However, continually …
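
The rough shape of continual prompt tuning is to freeze the backbone and train a separate soft prompt per domain, so learning a new domain cannot overwrite what was learned for earlier ones. A minimal sketch, with toy shapes that are assumptions rather than the paper's configuration:

```python
# Frozen backbone + one trainable soft prompt per domain.
import torch
import torch.nn as nn

class ContinualPromptPool(nn.Module):
    def __init__(self):
        super().__init__()
        self.prompts = nn.ParameterDict()  # domain name -> soft prompt

    def add_domain(self, name: str, prompt_len: int = 10, dim: int = 64):
        # Only this new parameter is optimized for the new domain,
        # so earlier domains' prompts stay untouched.
        self.prompts[name] = nn.Parameter(torch.randn(prompt_len, dim) * 0.02)

    def forward(self, domain: str, token_embeds: torch.Tensor) -> torch.Tensor:
        # Prepend the domain's prompt to the frozen token embeddings.
        prompt = self.prompts[domain].unsqueeze(0).expand(token_embeds.size(0), -1, -1)
        return torch.cat([prompt, token_embeds], dim=1)

pool = ContinualPromptPool()
pool.add_domain("hotel")
x = torch.randn(2, 20, 64)        # a batch of frozen token embeddings
print(pool("hotel", x).shape)     # torch.Size([2, 30, 64])
```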

Dialogue summaries as dialogue states (DS2), template-guided summarization for few-shot dialogue state tracking

J Shin, H Yu, H Moon, A Madotto, J Park - arXiv preprint arXiv:2203.01552, 2022 - arxiv.org
Annotating task-oriented dialogues is notorious for the expensive and difficult data collection
process. Few-shot dialogue state tracking (DST) is a realistic solution to this problem. In this …
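
The DS2 recipe turns state tracking into text a summarizer already knows how to produce: the gold state is rendered through a fixed template as a training-target summary, and at inference the generated summary is parsed back into slot-value pairs. A minimal sketch with an assumed template and slot set:

```python
# Dialogue state <-> templated summary, in the spirit of template-guided
# summarization for DST. Template and slots are illustrative assumptions.
import re

TEMPLATE = "The user is looking for a {food} restaurant in the {area}."
PATTERN = re.compile(
    r"The user is looking for a (?P<food>.+?) restaurant in the (?P<area>.+?)\."
)

def state_to_summary(state: dict) -> str:
    # Training target: render the gold state through the template.
    return TEMPLATE.format(**state)

def summary_to_state(summary: str) -> dict:
    # Inference: invert the template on the generated summary.
    m = PATTERN.match(summary)
    return m.groupdict() if m else {}

summary = state_to_summary({"food": "italian", "area": "centre"})
print(summary_to_state(summary))  # {'food': 'italian', 'area': 'centre'}
```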

Proactive Conversational AI: A Comprehensive Survey of Advancements and Opportunities

Y Deng, L Liao, W Lei, G Yang, W Lam… - ACM Transactions on …, 2025 - dl.acm.org
Dialogue systems are designed to offer human users social support or functional services
through natural language interactions. Traditional conversation research has put significant …

Exploring prompt-based few-shot learning for grounded dialog generation

C Zheng, M Huang - arXiv preprint arXiv:2109.06513, 2021 - arxiv.org
Dialog models can be greatly strengthened through grounding on various external
information, but grounded dialog corpora are usually not naturally accessible. In this work …

Toxicity detection with generative prompt-based inference

YS Wang, Y Chang - arXiv preprint arXiv:2205.12390, 2022 - arxiv.org
Due to its subtlety, implicitness, and the different interpretations perceived by different
people, detecting undesirable content in text is a nuanced challenge. It is a long-known risk …
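
Generative prompt-based inference here means asking the language model the question directly and comparing the likelihood of answer words, with no task-specific training. A minimal zero-shot sketch, assuming GPT-2 and an illustrative prompt rather than the paper's exact setup:

```python
# Zero-shot toxicity detection by comparing next-token probabilities
# for "Yes" vs "No". Prompt wording and model are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

def is_toxic(text: str) -> bool:
    prompt = f'Comment: "{text}"\nQuestion: Is this comment toxic? Answer:'
    ids = tok(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits[0, -1]      # next-token distribution
    yes = logits[tok.encode(" Yes")[0]]        # " Yes" is a single GPT-2 token
    no = logits[tok.encode(" No")[0]]
    return bool(yes > no)

print(is_toxic("You are a wonderful person."))
```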

UnifiedABSA: A unified ABSA framework based on multi-task instruction tuning

Z Wang, R Xia, J Yu - arXiv preprint arXiv:2211.10986, 2022 - arxiv.org
Aspect-Based Sentiment Analysis (ABSA) aims to provide fine-grained aspect-level
sentiment information. There are many ABSA tasks, and the current dominant paradigm is to …
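
The unifying move is to express every ABSA subtask as an instruction plus a shared text-to-text output format, so one model handles all of them. A minimal sketch of that data format, with instructions and output conventions that are assumptions rather than the paper's templates:

```python
# Cast ABSA subtasks into one instruction format for a single seq2seq model.
# Instructions and the tuple output format are illustrative assumptions.
def absa_example(task: str, sentence: str, answer: str) -> dict:
    instructions = {
        "aspect_extraction": "Extract all aspect terms from the sentence.",
        "aspect_sentiment": "For each aspect term, give its sentiment polarity.",
    }
    return {
        "source": f"Task: {instructions[task]}\nSentence: {sentence}",
        "target": answer,  # one model is trained on all tasks' pairs
    }

print(absa_example(
    "aspect_sentiment",
    "The battery life is great but the screen is dim.",
    "(battery life, positive); (screen, negative)",
))
```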

Revisit few-shot intent classification with PLMs: Direct fine-tuning vs. continual pre-training

H Zhang, H Liang, L Zhan, A Lam, XM Wu - arXiv preprint arXiv …, 2023 - arxiv.org
We consider the task of few-shot intent detection, which involves training a deep learning
model to classify utterances based on their underlying intents using only a small amount of …
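
The "direct fine-tuning" baseline the paper revisits is simply a PLM with a classification head trained on the few labeled utterances per intent, with no intermediate continual pre-training stage. A minimal sketch, where the model choice, label set, and hyperparameters are illustrative assumptions:

```python
# Direct fine-tuning of a PLM for few-shot intent classification.
# Model, labels, shots, and hyperparameters are illustrative assumptions.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

INTENTS = ["book_flight", "check_balance"]               # toy label set
train = [("I need a plane ticket to Paris", 0),          # 1 shot per intent
         ("How much money is in my account", 1)]

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(INTENTS))
opt = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
for _ in range(3):                                       # a few epochs over the shots
    for text, label in train:
        batch = tok(text, return_tensors="pt")
        loss = model(**batch, labels=torch.tensor([label])).loss
        loss.backward()
        opt.step()
        opt.zero_grad()
```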