Retrieval augmented language model pre-training

K Guu, K Lee, Z Tung, P Pasupat… - … on machine learning, 2020 - proceedings.mlr.press
Abstract Language model pre-training has been shown to capture a surprising amount of
world knowledge, crucial for NLP tasks such as question answering. However, this …

Latent retrieval for weakly supervised open domain question answering

K Lee, MW Chang, K Toutanova - arXiv preprint arXiv:1906.00300, 2019 - arxiv.org
Recent work on open domain question answering (QA) assumes strong supervision of the
supporting evidence and/or assumes a blackbox information retrieval (IR) system to retrieve …

SpanBERT: Improving pre-training by representing and predicting spans

M Joshi, D Chen, Y Liu, DS Weld… - Transactions of the …, 2020 - direct.mit.edu
We present SpanBERT, a pre-training method that is designed to better represent and
predict spans of text. Our approach extends BERT by (1) masking contiguous random spans …

A joint training dual-MRC framework for aspect-based sentiment analysis

Y Mao, Y Shen, C Yu, L Cai - Proceedings of the AAAI conference on …, 2021 - ojs.aaai.org
Aspect-based sentiment analysis (ABSA) involves three fundamental subtasks: aspect term
extraction, opinion term extraction, and aspect-level sentiment classification. Early works …

QANet: Combining local convolution with global self-attention for reading comprehension

AW Yu, D Dohan, MT Luong, R Zhao, K Chen… - arXiv preprint arXiv …, 2018 - arxiv.org
Current end-to-end machine reading and question answering (Q&A) models are primarily
based on recurrent neural networks (RNNs) with attention. Despite their success, these …

End-to-end neural coreference resolution

K Lee, L He, M Lewis, L Zettlemoyer - arXiv preprint arXiv:1707.07045, 2017 - arxiv.org
We introduce the first end-to-end coreference resolution model and show that it significantly
outperforms all previous work without using a syntactic parser or hand-engineered mention …

Gated self-matching networks for reading comprehension and question answering

W Wang, N Yang, F Wei, B Chang… - Proceedings of the 55th …, 2017 - aclanthology.org
In this paper, we present the gated self-matching networks for reading comprehension style
question answering, which aims to answer questions from a given passage. We first match …

Zero-shot relation extraction via reading comprehension

O Levy, M Seo, E Choi, L Zettlemoyer - arXiv preprint arXiv:1706.04115, 2017 - arxiv.org
We show that relation extraction can be reduced to answering simple reading
comprehension questions, by associating one or more natural-language questions with …

Certified training: Small boxes are all you need

MN Müller, F Eckert, M Fischer, M Vechev - arXiv preprint arXiv …, 2022 - arxiv.org
To obtain deterministic guarantees of adversarial robustness, specialized training methods
are used. We propose SABR, a novel such certified training method, based on the key …

Open-domain targeted sentiment analysis via span-based extraction and classification

M Hu, Y Peng, Z Huang, D Li, Y Lv - arXiv preprint arXiv:1906.03820, 2019 - arxiv.org
Open-domain targeted sentiment analysis aims to detect opinion targets along with their
sentiment polarities from a sentence. Prior work typically formulates this task as a sequence …