Rationalization for explainable NLP: a survey

S Gurrapu, A Kulkarni, L Huang… - Frontiers in Artificial …, 2023 - frontiersin.org
Recent advances in deep learning have improved the performance of many Natural
Language Processing (NLP) tasks such as translation, question-answering, and text …

Explanations from large language models make small reasoners better

S Li, J Chen, Y Shen, Z Chen, X Zhang, Z Li… - arXiv preprint arXiv …, 2022 - arxiv.org
Integrating free-text explanations into the in-context learning of large language models (LLMs)
has been shown to elicit strong reasoning capabilities along with reasonable explanations. In this …
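
To make the idea concrete, here is a minimal sketch of an explanation-augmented few-shot prompt in the spirit of this line of work; the demonstration content and the build_prompt helper are illustrative placeholders, not the authors' pipeline.

```python
# Minimal sketch: prepend (question, explanation, answer) demonstrations so
# the model is prompted to produce an explanation before its answer.
# The demonstration below is a toy placeholder, not data from the paper.
FEW_SHOT = [
    {
        "question": "Can a penguin fly across the Atlantic?",
        "explanation": "Penguins are flightless birds, so they cannot fly at all.",
        "answer": "no",
    },
]

def build_prompt(question: str) -> str:
    """Linearize demonstrations, then append the query with an open explanation slot."""
    parts = []
    for ex in FEW_SHOT:
        parts.append(
            f"Q: {ex['question']}\n"
            f"Explanation: {ex['explanation']}\n"
            f"A: {ex['answer']}\n"
        )
    parts.append(f"Q: {question}\nExplanation:")
    return "\n".join(parts)

print(build_prompt("Can a goldfish climb a tree?"))
```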

Scalable multi-hop relational reasoning for knowledge-aware question answering

Y Feng, X Chen, BY Lin, P Wang, J Yan… - arXiv preprint arXiv …, 2020 - arxiv.org
Existing work on augmenting question answering (QA) models with external knowledge (e.g.,
knowledge graphs) either struggles to model multi-hop relations efficiently or lacks …
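
As a rough illustration of what "multi-hop relations" means here, the sketch below enumerates bounded-length relation paths between two entities over toy triples; the paper itself scores such paths with a graph network over a real KG, which this placeholder does not attempt.

```python
from collections import deque

# Toy knowledge graph as (head, relation, tail) triples -- placeholder data.
TRIPLES = [
    ("revolving_door", "at_location", "bank"),
    ("bank", "used_for", "storing_money"),
    ("revolving_door", "used_for", "entering_buildings"),
]

def khop_paths(start, goal, max_hops=2):
    """Enumerate relation paths of length <= max_hops from start to goal (BFS)."""
    adj = {}
    for h, r, t in TRIPLES:
        adj.setdefault(h, []).append((r, t))
    paths, queue = [], deque([(start, [])])
    while queue:
        node, path = queue.popleft()
        if node == goal and path:
            paths.append(path)
        if len(path) < max_hops:
            for r, t in adj.get(node, []):
                queue.append((t, path + [(node, r, t)]))
    return paths

# Finds the 2-hop path: revolving_door -> bank -> storing_money
print(khop_paths("revolving_door", "storing_money"))
```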

Pre-training text-to-text transformers for concept-centric common sense

W Zhou, DH Lee, RK Selvam, S Lee, BY Lin… - arXiv preprint arXiv …, 2020 - arxiv.org
Pre-trained language models (PTLMs) have achieved impressive results in a range of natural
language understanding (NLU) and generation (NLG) tasks. However, current pre-training …

Commonsense knowledge transfer for pre-trained language models

W Zhou, RL Bras, Y Choi - arXiv preprint arXiv:2306.02388, 2023 - arxiv.org
Despite serving as foundation models for a wide range of NLP benchmarks, pre-trained
language models have shown limited ability to acquire implicit commonsense …

Cross-lingual entity alignment with incidental supervision

M Chen, W Shi, B Zhou, D Roth - arXiv preprint arXiv:2005.00171, 2020 - arxiv.org
Much research effort has been devoted to multilingual knowledge graph (KG) embedding
methods to address the entity alignment task, which seeks to match entities in different …
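
A bare-bones view of the alignment step itself: embed entities from both KGs in a shared space and match by cosine similarity. The random embeddings and the align helper below are placeholders; the paper's contribution is the incidental supervision used to train such embeddings, which this sketch omits.

```python
import numpy as np

rng = np.random.default_rng(0)
src = rng.normal(size=(5, 16))  # placeholder embeddings: source-language KG entities
tgt = rng.normal(size=(7, 16))  # placeholder embeddings: target-language KG entities

def align(src, tgt):
    """Match each source entity to its nearest target entity by cosine similarity."""
    s = src / np.linalg.norm(src, axis=1, keepdims=True)
    t = tgt / np.linalg.norm(tgt, axis=1, keepdims=True)
    return (s @ t.T).argmax(axis=1)  # best target index per source entity

print(align(src, tgt))
```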

Graph reasoning for question answering with triplet retrieval

S Li, Y Gao, H Jiang, Q Yin, Z Li, X Yan… - arXiv preprint arXiv …, 2023 - arxiv.org
Answering complex questions often requires reasoning over knowledge graphs (KGs).
State-of-the-art methods often utilize entities in questions to retrieve local subgraphs, which are …
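
The retrieve-then-linearize pattern the title refers to can be sketched as follows; the surface-overlap scorer and toy triples are stand-ins for illustration only, not the paper's actual retrieval method.

```python
# Toy triple store and a naive retriever -- placeholders for illustration.
TRIPLES = [
    ("Barack Obama", "born_in", "Honolulu"),
    ("Honolulu", "located_in", "Hawaii"),
    ("Barack Obama", "spouse", "Michelle Obama"),
]

def retrieve(question, triples, k=2):
    """Rank triples by naive surface overlap with the question text."""
    def score(t):
        return sum(part.replace("_", " ").lower() in question.lower() for part in t)
    return sorted(triples, key=score, reverse=True)[:k]

def linearize(triples):
    """Turn retrieved triples into plain text an LLM can condition on."""
    return " ".join(f"{h} {r.replace('_', ' ')} {t}." for h, r, t in triples)

q = "Which state was Barack Obama born in?"
print(linearize(retrieve(q, TRIPLES)))
```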

Unifying structure reasoning and language model pre-training for complex reasoning

S Wang, Z Wei, J Xu, T Li, Z Fan - arXiv preprint arXiv:2301.08913, 2023 - arxiv.org
Recent pre-trained language models (PLMs) equipped with foundational reasoning skills
have shown remarkable performance on downstream complex tasks. However, the …

CORN: co-reasoning network for commonsense question answering

X Guan, B Cao, Q Gao, Z Yin, B Liu… - Proceedings of the 29th …, 2022 - aclanthology.org
Commonsense question answering (QA) requires machines to utilize the QA content and an
external commonsense knowledge graph (KG) for reasoning when answering questions …

Can Pretrained Language Models (Yet) Reason Deductively?

Z Yuan, S Hu, I Vulić, A Korhonen, Z Meng - arXiv preprint arXiv …, 2022 - arxiv.org
Acquiring factual knowledge with Pretrained Language Models (PLMs) has attracted
increasing attention, showing promising performance in many knowledge-intensive tasks …