Multi-task identification of entities, relations, and coreference for scientific knowledge graph construction
We introduce a multi-task setup of identifying and classifying entities, relations, and
coreference clusters in scientific articles. We create SciERC, a dataset that includes …
[BOOK][B] Neural network methods in natural language processing
Y Goldberg - 2017 - books.google.com
Neural networks are a family of powerful machine learning models and this book focuses on
their application to natural language data. The first half of the book (Parts I and II) covers the …
Identifying beneficial task relations for multi-task learning in deep neural networks
Multi-task learning (MTL) in deep neural networks for NLP has recently received increasing
interest due to some compelling benefits, including its potential to efficiently regularize …
Chains of reasoning over entities, relations, and text using recurrent neural networks
Our goal is to combine the rich multistep inference of symbolic logical reasoning with the
generalization capabilities of neural networks. We are particularly interested in complex …
Dynet: The dynamic neural network toolkit
We describe DyNet, a toolkit for implementing neural network models based on dynamic
declaration of network structure. In the static declaration strategy that is used in toolkits like …
Imagination improves multimodal translation
D Elliott, A Kádár - arXiv preprint arXiv:1705.04350, 2017 - arxiv.org
We decompose multimodal translation into two sub-tasks: learning to translate and learning
visually grounded representations. In a multitask learning framework, translations are …
Sequence classification with human attention
Learning attention functions requires large volumes of data, but many NLP tasks simulate
human behavior, and in this paper, we show that human attention really does provide a …
Bridging the gaps: Multi-task learning for domain transfer of hate speech detection
Accurately detecting hate speech using supervised classification is dependent on data that
is annotated by humans. Attaining high agreement amongst annotators though is difficult …
Improving natural language processing tasks with human gaze-guided neural attention
A lack of corpora has so far limited advances in integrating human gaze data as a
supervisory signal in neural attention mechanisms for natural language processing (NLP) …
ZuCo 2.0: A dataset of physiological recordings during natural reading and annotation
We recorded and preprocessed ZuCo 2.0, a new dataset of simultaneous eye-tracking and
electroencephalography during natural reading and during annotation. This corpus contains …