A survey of knowledge-enhanced text generation
The goal of text-to-text generation is to enable machines to express themselves like humans in many
applications such as conversation, summarization, and translation. It is one of the most …
Knowledge-augmented methods for natural language processing
Incorporating knowledge into NLP has been a rising trend, especially since the advent of large-scale
pre-trained models. Knowledge is critical to equip statistics-based models with common sense …
Retrieval enhanced model for commonsense generation
Commonsense generation is a challenging task of generating a plausible sentence
describing an everyday scenario using provided concepts. Its requirement of reasoning over …
Metric-guided distillation: Distilling knowledge from the metric to ranker and retriever for generative commonsense reasoning
Commonsense generation aims to generate a realistic sentence describing a daily scene from the given
concepts. This is very challenging, since it requires models to have …
KFCNet: Knowledge filtering and contrastive learning network for generative commonsense reasoning
Pre-trained language models have led to substantial gains over a broad range of natural
language processing (NLP) tasks, but have been shown to have limitations for natural …
From relevance to utility: evidence retrieval with feedback for fact verification
Retrieval-enhanced methods have become a primary approach in fact verification (FV), which
requires reasoning over multiple retrieved pieces of evidence to verify the integrity of a …
Contextualized scene imagination for generative commonsense reasoning
Humans use natural language to compose common concepts from their environment into
plausible, day-to-day scene descriptions. However, such generative commonsense …
Injecting new knowledge into large language models via supervised fine-tuning
In recent years, Large Language Models (LLMs) have shown remarkable performance in
generating human-like text, proving to be a valuable asset across various applications …
KGR4: Retrieval, retrospect, refine and rethink for commonsense generation
Generative commonsense reasoning requires machines to generate sentences describing
an everyday scenario given several concepts, which has attracted much attention recently …
CALM-Bench: A multi-task benchmark for evaluating causality-aware language models
Causal reasoning is a critical component of human cognition and is required across a range
of question-answering (QA) tasks (such as abductive reasoning, commonsense QA, and …