Chain-of-thought reasoning without prompting
In enhancing the reasoning capabilities of large language models (LLMs), prior research
primarily focuses on specific prompting techniques such as few-shot or zero-shot chain-of …
Contrastive decoding: Open-ended text generation as optimization
Given a language model (LM), maximum probability is a poor decoding objective for open-
ended generation, because it produces short and repetitive text. On the other hand …
The unreasonable effectiveness of few-shot learning for machine translation
We demonstrate the potential of few-shot translation systems, trained with unpaired
language data, for both high and low-resource language pairs. We show that with only 5 …
Mauve: Measuring the gap between neural text and human text using divergence frontiers
As major progress is made in open-ended text generation, measuring how close machine-
generated text is to human language remains a critical open problem. We introduce Mauve …
Survey of low-resource machine translation
We present a survey covering the state of the art in low-resource machine translation (MT)
research. There are currently around 7,000 languages spoken in the world and almost all …
Locally typical sampling
Today's probabilistic language generators fall short when it comes to producing coherent
and fluent text despite the fact that the underlying models perform well under standard …
Accelerating transformer inference for translation via parallel decoding
Autoregressive decoding limits the efficiency of transformers for Machine Translation (MT).
The community proposed specific network architectures and learning-based methods to …
Natural language to code translation with execution
Generative models of code, pretrained on large corpora of programs, have shown great
success in translating natural language to code (Chen et al., 2021; Austin et al., 2021; Li et …
Quality-aware decoding for neural machine translation
Despite the progress in machine translation quality estimation and evaluation in the last
years, decoding in neural machine translation (NMT) is mostly oblivious to this and centers …
Uncertainty estimation in autoregressive structured prediction
Uncertainty estimation is important for ensuring safety and robustness of AI systems. While
most research in the area has focused on unstructured prediction tasks, limited work has …