Efficient utilization of pre-trained models: A review of sentiment analysis via prompt learning
K Bu, Y Liu, X Ju - Knowledge-Based Systems, 2024 - Elsevier
Sentiment analysis is one of the traditional well-known tasks in Natural Language
Processing (NLP) research. In recent years, Pre-trained Models (PMs) have become one of …
Structured information extraction from scientific text with large language models
Extracting structured knowledge from scientific text remains a challenging task for machine
learning models. Here, we present a simple approach to joint named entity recognition and …
Contrastive learning reduces hallucination in conversations
Pre-trained language models (LMs) store knowledge in their parameters and can generate
informative responses when used in conversational systems. However, LMs suffer from the …
State-of-the-art generalisation research in NLP: a taxonomy and review
The ability to generalise well is one of the primary desiderata of natural language
processing (NLP). Yet, what 'good generalisation' entails and how it should be evaluated is …
Proactive Conversational AI: A Comprehensive Survey of Advancements and Opportunities
Dialogue systems are designed to offer human users social support or functional services
through natural language interactions. Traditional conversation research has put significant …
Honest students from untrusted teachers: Learning an interpretable question-answering pipeline from a pretrained language model
Explainable question answering systems should produce not only accurate answers but
also rationales that justify their reasoning and allow humans to check their work. But what …
ThinkSum: Probabilistic reasoning over sets using large language models
Large language models (LLMs) have a substantial capacity for high-level analogical
reasoning: reproducing patterns in linear text that occur in their training data (zero-shot …
Multi-source multi-type knowledge exploration and exploitation for dialogue generation
Open-domain multi-turn dialogue generation encounters the significant challenge of lacking
various types of knowledge from diverse sources. Existing models typically focus on …
Reprompting: Automated chain-of-thought prompt inference through Gibbs sampling
We introduce Reprompting, an iterative sampling algorithm that searches for the Chain-of-
Thought (CoT) recipes for a given task without human intervention. Through Gibbs sampling …
Prompt-Based Monte-Carlo Tree Search for Goal-oriented Dialogue Policy Planning
Planning for goal-oriented dialogue often requires simulating future dialogue interactions
and estimating task progress. Many approaches thus consider training neural networks to …