Finetuned language models are zero-shot learners
This paper explores a simple method for improving the zero-shot learning abilities of
language models. We show that instruction tuning--finetuning language models on a …
Domain-specific language model pretraining for biomedical natural language processing
Pretraining large neural language models, such as BERT, has led to impressive gains on
many natural language processing (NLP) tasks. However, most pretraining efforts focus on …
[BOOK][B] Pretrained transformers for text ranking: BERT and beyond
The goal of text ranking is to generate an ordered list of texts retrieved from a corpus in
response to a query. Although the most common formulation of text ranking is search …
Neural unsupervised domain adaptation in NLP---a survey
Deep neural networks excel at learning from labeled data and achieve state-of-the-art
results on a wide array of Natural Language Processing tasks. In contrast, learning from …
Achieving human parity on automatic Chinese to English news translation
Machine translation has made rapid advances in recent years. Millions of people are using it
today in online translation systems and mobile applications in order to communicate across …
Unsupervised domain clusters in pretrained language models
The notion of "in-domain data" in NLP is often over-simplistic and vague, as textual data
varies in many nuanced linguistic aspects such as topic, style or level of formality. In …
Using the output embedding to improve language models
We study the topmost weight matrix of neural network language models. We show that this
matrix constitutes a valid word embedding. When training language models, we recommend …
[PDF][PDF] Neural machine translation by jointly learning to align and translate
Neural machine translation is a recently proposed approach to machine translation. Unlike
the traditional statistical machine translation, the neural machine translation aims at building …
Learning phrase representations using RNN encoder-decoder for statistical machine translation
In this paper, we propose a novel neural network model called RNN Encoder-Decoder that
consists of two recurrent neural networks (RNN). One RNN encodes a sequence of symbols …
A survey of domain adaptation for machine translation
Neural machine translation (NMT) is a deep learning based approach for machine
translation, which outperforms traditional statistical machine translation (SMT) and yields the …