UHop: An unrestricted-hop relation extraction framework for knowledge-based question answering
In relation extraction for knowledge-based question answering, searching from one entity to
another entity via a single relation is called "one hop". In related work, an exhaustive search …
Deriving machine attention from human rationales
Attention-based models are successful when trained on large amounts of data. In this paper,
we demonstrate that even in the low-resource scenario, attention can be learned effectively …
Improving constituency parsing with span attention
Constituency parsing is a fundamental and important task for natural language
understanding, where a good representation of contextual information can help this task. N …
Discrete-continuous action space policy gradient-based attention for image-text matching
Image-text matching is an important multi-modal task with massive applications. It tries to
match the image and the text with similar semantic information. Existing approaches do not …
Analytic score prediction and justification identification in automated short answer scoring
This paper provides an analytical assessment of student short answer responses with a view
to potential benefits in pedagogical contexts. We first propose and formalize two novel …
Constituency parsing using LLMs
Constituency parsing is a fundamental yet unsolved natural language processing task. In
this paper, we explore the potential of recent large language models (LLMs) that have …
Considering nested tree structure in sentence extractive summarization with pre-trained transformer
Sentence extractive summarization shortens a document by selecting sentences for a
summary while preserving its important contents. However, constructing a coherent and …
Syntactically look-ahead attention network for sentence compression
H Kamigaito, M Okumura - Proceedings of the AAAI Conference on Artificial …, 2020 - aaai.org
Sentence compression is the task of compressing a long sentence into a short one by
deleting redundant words. In sequence-to-sequence (Seq2Seq) based models, the decoder …
Perturbation-based self-supervised attention for attention bias in text classification
In text classification, the traditional attention mechanisms usually focus too much on frequent
words, and need extensive labeled data in order to learn. This article proposes a …
Higher-order syntactic attention network for longer sentence compression
A sentence compression method using LSTM can generate fluent compressed sentences.
However, the performance of this method is significantly degraded when compressing …