COCO-LM: Correcting and contrasting text sequences for language model pretraining
We present a self-supervised learning framework, COCO-LM, that pretrains Language
Models by COrrecting and COntrasting corrupted text sequences. Following ELECTRA-style …
PAQ: 65 million probably-asked questions and what you can do with them
Abstract Open-domain Question Answering models that directly leverage question-answer
(QA) pairs, such as closed-book QA (CBQA) models and QA-pair retrievers, show promise in …
Read before generate! Faithful long-form question answering with machine reading
Long-form question answering (LFQA) aims to generate a paragraph-length answer for a
given question. While current work on LFQA uses large pre-trained models for generation …
Salient span masking for temporal understanding
Salient Span Masking (SSM) has shown itself to be an effective strategy to improve closed-
book question answering performance. SSM extends general masked language model …
Exploiting Abstract Meaning Representation for open-domain question answering
The Open-Domain Question Answering (ODQA) task involves retrieving and subsequently
generating answers from fine-grained relevant passages within a database. Current systems …
On the influence of masking policies in intermediate pre-training
Current NLP models are predominantly trained through a two-stage "pre-train then fine-tune"
pipeline. Prior work has shown that inserting an intermediate pre-training stage, using …
Saaml: A framework for semi-supervised affective adaptation via metric learning
Socially intelligent systems such as home robots should be able to perceive emotions and
social behaviors. Affect recognition datasets have limited labeled data, and existing large …
Investigating the Gap Between Single-Hop and Multi-Hop Questions in Closed-Book Question Answering via Question Decomposition
Transformer-based language models (LMs) have been shown to perform question
answering (QA) competitively even when removing context and using only questions as …
Knowledge enhancement of language models for closed-book question answering
矢嶋梨穂 - Bulletin of Hosei University Graduate School, Graduate School of Computer and Information Sciences, 2024 - hosei.ecats-library.jp
Abstract Language models such as Chat-GPT generate new answers from the data they
learn, improving work efficiency and generating ideas. On the other hand, incorrect answers …
Building a natural language understanding system that incorporates structured knowledge
矢嶋梨穂, 藤田悟 - Proceedings of the 85th IPSJ National Convention, 2023 - ipsj.ixsq.nii.ac.jp
Abstract In neural-network-based question answering, approaches such as machine reading
comprehension and open-retrieval question answering have been studied, and in ** research on closed-book question answering has been advancing. In this study …