Retrieval augmented language model pre-training
Abstract Language model pre-training has been shown to capture a surprising amount of
world knowledge, crucial for NLP tasks such as question answering. However, this …
Latent retrieval for weakly supervised open domain question answering
Recent work on open domain question answering (QA) assumes strong supervision of the
supporting evidence and/or assumes a blackbox information retrieval (IR) system to retrieve …
SpanBERT: Improving pre-training by representing and predicting spans
We present SpanBERT, a pre-training method that is designed to better represent and
predict spans of text. Our approach extends BERT by (1) masking contiguous random spans …
A joint training dual-MRC framework for aspect based sentiment analysis
Aspect based sentiment analysis (ABSA) involves three fundamental subtasks: aspect term
extraction, opinion term extraction, and aspect-level sentiment classification. Early works …
QANet: Combining local convolution with global self-attention for reading comprehension
Current end-to-end machine reading and question answering (Q&A) models are primarily
based on recurrent neural networks (RNNs) with attention. Despite their success, these …
End-to-end neural coreference resolution
We introduce the first end-to-end coreference resolution model and show that it significantly
outperforms all previous work without using a syntactic parser or hand-engineered mention …
Gated self-matching networks for reading comprehension and question answering
In this paper, we present the gated self-matching networks for reading comprehension style
question answering, which aims to answer questions from a given passage. We first match …
Zero-shot relation extraction via reading comprehension
We show that relation extraction can be reduced to answering simple reading
comprehension questions, by associating one or more natural-language questions with …
Certified training: Small boxes are all you need
To obtain deterministic guarantees of adversarial robustness, specialized training methods
are used. We propose SABR, a novel certified training method, based on the key …
Open-domain targeted sentiment analysis via span-based extraction and classification
Open-domain targeted sentiment analysis aims to detect opinion targets along with their
sentiment polarities from a sentence. Prior work typically formulates this task as a sequence …