Rationalization for explainable NLP: a survey
Recent advances in deep learning have improved the performance of many Natural
Language Processing (NLP) tasks such as translation, question-answering, and text …
Explanations from large language models make small reasoners better
Integrating free-text explanations into the in-context learning of large language models (LLMs) has been shown to elicit strong reasoning capabilities along with reasonable explanations. In this …
Scalable multi-hop relational reasoning for knowledge-aware question answering
Existing work on augmenting question answering (QA) models with external knowledge (e.g., knowledge graphs) either struggles to model multi-hop relations efficiently or lacks …
Pre-training text-to-text transformers for concept-centric common sense
Pre-trained language models (PTLMs) have achieved impressive results in a range of natural language understanding (NLU) and generation (NLG) tasks. However, current pre-training …
Commonsense knowledge transfer for pre-trained language models
Despite serving as foundation models for a wide range of NLP benchmarks, pre-trained language models have shown a limited ability to acquire implicit commonsense …
Cross-lingual entity alignment with incidental supervision
Much research effort has been devoted to multilingual knowledge graph (KG) embedding methods for the entity alignment task, which seeks to match entities in different …
Graph reasoning for question answering with triplet retrieval
Answering complex questions often requires reasoning over knowledge graphs (KGs). State-of-the-art methods often utilize entities in questions to retrieve local subgraphs, which are …
Unifying structure reasoning and language model pre-training for complex reasoning
Recent pre-trained language models (PLMs) equipped with foundation reasoning skills
have shown remarkable performance on downstream complex tasks. However, the …
CORN: co-reasoning network for commonsense question answering
Commonsense question answering (QA) requires machines to utilize QA content and an external commonsense knowledge graph (KG) for reasoning when answering questions …
Can Pretrained Language Models (Yet) Reason Deductively?
Acquiring factual knowledge with Pretrained Language Models (PLMs) has attracted
increasing attention, showing promising performance in many knowledge-intensive tasks …