Attribution and obfuscation of neural text authorship: A data mining perspective
Two interlocking research questions of growing interest and importance in privacy research
are Authorship Attribution (AA) and Authorship Obfuscation (AO). Given an artifact …
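A common baseline behind much AA work is character n-gram stylometry. The sketch below, a minimal scikit-learn pipeline over invented toy texts, is illustrative only and is not the survey's own method.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy corpus: two "authors" with distinct habits (contractions vs. formal register).
texts = [
    "I reckon it's fine, honestly.",
    "One must concede the point.",
    "I reckon we oughta head out.",
    "One must remain skeptical of such claims.",
]
authors = ["a", "b", "a", "b"]

# Character n-gram TF-IDF captures sub-word style cues; logistic regression attributes.
clf = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
clf.fit(texts, authors)
print(clf.predict(["I reckon that'll do."]))  # expected: ['a']
```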
AugGPT: Leveraging ChatGPT for text data augmentation
Text data augmentation is an effective strategy for overcoming the challenge of limited
sample sizes in many natural language processing (NLP) tasks. This challenge is especially …
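The core recipe the abstract alludes to can be sketched with the current OpenAI Python SDK; the prompt, model name, and helper below are assumptions for illustration, not the paper's exact setup.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def augment(sample: str, n: int = 3) -> list[str]:
    """Ask a chat model for n label-preserving rewrites of one training sample."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model choice; any chat model works
        messages=[
            {"role": "system",
             "content": "Rewrite the user's sentence in different words. "
                        "Keep the meaning (and thus the label) unchanged."},
            {"role": "user", "content": sample},
        ],
        n=n,
        temperature=1.0,
    )
    return [choice.message.content for choice in resp.choices]

print(augment("The staff were friendly and the room was spotless."))
```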
Beyond English-centric multilingual machine translation
Existing work in translation demonstrated the potential of massively multilingual machine
translation by training a single model able to translate between any pair of languages …
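The single many-to-many model this paper describes was released as M2M-100. Assuming the public facebook/m2m100_418M checkpoint on Hugging Face, a direct French-to-Chinese translation (no English pivot) looks like this:

```python
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M")
model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M")

# French -> Chinese directly, without pivoting through English.
tokenizer.src_lang = "fr"
encoded = tokenizer("La vie est comme une boîte de chocolat.", return_tensors="pt")
generated = model.generate(**encoded, forced_bos_token_id=tokenizer.get_lang_id("zh"))
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```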
FUDGE: Controlled text generation with future discriminators
K Yang, D Klein - arXiv preprint arXiv:2104.05218, 2021 - arxiv.org
We propose Future Discriminators for Generation (FUDGE), a flexible and modular method
for controlled text generation. Given a pre-existing model G for generating text from a …
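A toy sketch of the FUDGE decoding rule: add the future discriminator's log-probability that the attribute will hold to the base model's next-token log-probabilities. The linear layers below are untrained stand-ins for G and the discriminator, not the paper's models.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

vocab, hidden = 100, 32
lm_head = nn.Linear(hidden, vocab)    # stand-in for the base generator G
discriminator = nn.Linear(hidden, 1)  # stand-in for P(attribute | prefix, x_t)

state = torch.randn(1, hidden)                   # current prefix representation
lm_logp = F.log_softmax(lm_head(state), dim=-1)  # log P(x_t | prefix) from G

# Score every candidate token with the future discriminator (toy extended states).
cand_states = state + 0.01 * torch.randn(vocab, hidden)
disc_logp = F.logsigmoid(discriminator(cand_states)).squeeze(-1)

# FUDGE's Bayes-rule combination: rerank G's distribution by the discriminator.
combined = lm_logp + disc_logp
print("next token id:", combined.argmax(dim=-1).item())
```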
How can we know what language models know?
Recent work has presented intriguing results examining the knowledge contained in
language models (LMs) by having the LM fill in the blanks of prompts such as “Obama is a …
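The fill-in-the-blank probing the snippet describes maps directly onto a masked-LM query; a minimal sketch with the Hugging Face fill-mask pipeline, with the exact prompt wording assumed (the paper's point is that phrasing strongly affects what the LM appears to know):

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")
for pred in fill("Obama is a [MASK] by profession."):
    print(f'{pred["token_str"]:>12}  {pred["score"]:.3f}')
```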
How Can We Know When Language Models Know? On the Calibration of Language Models for Question Answering
Recent works have shown that language models (LM) capture different types of knowledge
regarding facts or common sense. However, because no model is perfect, they still fail to …
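Calibration in this setting is commonly summarized by expected calibration error (ECE), the gap between a model's stated confidence and its actual accuracy; a small self-contained implementation with invented confidences for illustration:

```python
import numpy as np

def expected_calibration_error(conf, correct, n_bins=10):
    """Average |accuracy - confidence| over equal-width confidence bins."""
    conf = np.asarray(conf, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            ece += mask.mean() * abs(correct[mask].mean() - conf[mask].mean())
    return ece

# Invented confidences/outcomes for illustration only.
print(expected_calibration_error([0.95, 0.80, 0.60, 0.90], [1, 1, 0, 1]))
```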
Plug and play language models: A simple approach to controlled text generation
Large transformer-based language models (LMs) trained on huge text corpora have shown
unparalleled generation capabilities. However, controlling attributes of the generated …
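A toy sketch of the plug-and-play idea: take a gradient step on a hidden state toward an attribute classifier's objective, then decode from the perturbed state. Both modules below are untrained stand-ins, and the real method perturbs the LM's key-value history rather than a single vector.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

vocab, hidden, step = 100, 32, 0.5
lm_head = nn.Linear(hidden, vocab)  # stand-in for the frozen LM's output head
attr_clf = nn.Linear(hidden, 1)     # stand-in attribute model p(a | x)

h = torch.randn(1, hidden, requires_grad=True)  # current hidden state
loss = -F.logsigmoid(attr_clf(h)).mean()        # maximize log p(a | x)
loss.backward()

with torch.no_grad():
    h_pert = h - step * h.grad  # nudge the state toward the attribute
    logits = lm_head(h_pert)    # decode from the perturbed state
print("next token id:", logits.argmax(dim=-1).item())
```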
Data augmentation using pre-trained transformer models
Language model based pre-trained models such as BERT have provided significant gains
across different NLP tasks. In this paper, we study different types of transformer based pre …
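One scheme studied in this line of work is conditioning generation on a prepended class label; the sketch below shows only the prompting pattern with off-the-shelf GPT-2 and skips the label-conditioned fine-tuning the paper performs.

```python
from transformers import pipeline

gen = pipeline("text-generation", model="gpt2")
# Label-prepend pattern: the class label conditions the continuation.
prompt = "positive: the movie was"
outs = gen(prompt, max_new_tokens=15, num_return_sequences=3, do_sample=True)
for out in outs:
    print(out["generated_text"])
```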
Findings of the 2019 conference on machine translation (WMT19)
This paper presents the results of the premier shared task organized alongside the
Conference on Machine Translation (WMT) 2019. Participants were asked to build machine …
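WMT-style system comparison leans on corpus-level BLEU; a minimal example with the sacrebleu package (toy hypothesis and reference, not WMT data):

```python
import sacrebleu

hypotheses = ["the cat sat on the mat"]
references = [["the cat is sitting on the mat"]]  # one list per reference set
print(sacrebleu.corpus_bleu(hypotheses, references).score)
```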
Self-guided contrastive learning for BERT sentence representations
Although BERT and its variants have reshaped the NLP landscape, it still remains unclear
how best to derive sentence embeddings from such pre-trained Transformers. In this work …
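Contrastive objectives for sentence embeddings of this kind typically reduce to NT-Xent over paired views. The sketch below shows the generic loss with random stand-in embeddings; it does not reproduce the paper's self-guidance pairing of intermediate-layer views with the final [CLS] representation.

```python
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, tau=0.05):
    """NT-Xent loss: paired rows are positives, all other rows are negatives."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.T / tau           # scaled cosine similarities
    labels = torch.arange(z1.size(0))  # positives lie on the diagonal
    return F.cross_entropy(logits, labels)

# Random 768-dim stand-ins for two views of the same four sentences.
z1, z2 = torch.randn(4, 768), torch.randn(4, 768)
print(nt_xent(z1, z2).item())
```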