Read and reap the rewards: Learning to play Atari with the help of instruction manuals

Y Wu, Y Fan, PP Liang, A Azaria… - Advances in Neural …, 2024 - proceedings.neurips.cc
High sample complexity has long been a challenge for RL. On the other hand, humans learn
to perform tasks not only from interaction or demonstrations, but also by reading …

Mention memory: incorporating textual knowledge into transformers through entity mention attention

M de Jong, Y Zemlyanskiy, N FitzGerald, F Sha… - arXiv preprint arXiv …, 2021 - arxiv.org
Natural language understanding tasks such as open-domain question answering often
require retrieving and assimilating factual information from multiple sources. We propose to …

FiDO: Fusion-in-Decoder optimized for stronger performance and faster inference

M de Jong, Y Zemlyanskiy, J Ainslie… - arXiv preprint arXiv …, 2022 - arxiv.org
Fusion-in-Decoder (FiD) is a powerful retrieval-augmented language model that sets the
state-of-the-art on many knowledge-intensive NLP tasks. However, the architecture used for …

NextLevelBERT: Masked language modeling with higher-level representations for long documents

T Czinczoll, C Hönes, M Schall… - Proceedings of the 62nd …, 2024 - aclanthology.org
While (large) language models have improved significantly in recent years, they still
struggle to sensibly process long sequences found, e.g., in books, due to the quadratic …

Narrative question answering with cutting-edge open-domain qa techniques: A comprehensive study

X Mou, C Yang, M Yu, B Yao, X Guo… - Transactions of the …, 2021 - direct.mit.edu
Recent advancements in open-domain question answering (ODQA), that is, finding answers
in large open-domain corpora like Wikipedia, have led to human-level performance on …