ChatGPT is not enough: Enhancing large language models with knowledge graphs for fact-aware language modeling
Recently, ChatGPT, a representative large language model (LLM), has gained considerable
attention due to its powerful emergent abilities. Some researchers suggest that LLMs could …
Prompt as triggers for backdoor attack: Examining the vulnerability in language models
The prompt-based learning paradigm, which bridges the gap between pre-training and fine-
tuning, achieves state-of-the-art performance on several NLP tasks, particularly in few-shot …
InstructDial: Improving zero and few-shot generalization in dialogue through instruction tuning
Instruction tuning is an emergent paradigm in NLP wherein natural language instructions
are leveraged with language models to induce zero-shot performance on unseen tasks …
Continual prompt tuning for dialog state tracking
A desirable dialog system should be able to continually learn new skills without forgetting
old ones, and thereby adapt to new domains or tasks in its life cycle. However, continually …
Dialogue summaries as dialogue states (DS2), template-guided summarization for few-shot dialogue state tracking
Annotating task-oriented dialogues is notorious for the expensive and difficult data collection
process. Few-shot dialogue state tracking (DST) is a realistic solution to this problem. In this …
Proactive Conversational AI: A Comprehensive Survey of Advancements and Opportunities
Dialogue systems are designed to offer human users social support or functional services
through natural language interactions. Traditional conversation research has put significant …
Exploring prompt-based few-shot learning for grounded dialog generation
Dialog models can be greatly strengthened through grounding on various external
information, but grounded dialog corpora are usually not naturally accessible. In this work …
Toxicity detection with generative prompt-based inference
Due to the subtleness, implicitness, and different possible interpretations perceived by different
people, detecting undesirable content from text is a nuanced difficulty. It is a long-known risk …
UnifiedABSA: A unified ABSA framework based on multi-task instruction tuning
Aspect-Based Sentiment Analysis (ABSA) aims to provide fine-grained aspect-level
sentiment information. There are many ABSA tasks, and the current dominant paradigm is to …
Revisit few-shot intent classification with PLMs: Direct fine-tuning vs. continual pre-training
We consider the task of few-shot intent detection, which involves training a deep learning
model to classify utterances based on their underlying intents using only a small amount of …