An overview of deep semi-supervised learning
Deep neural networks have demonstrated their ability to provide remarkable performance on a
wide range of supervised learning tasks (e.g., image classification) when trained on extensive …
A survey of multilingual neural machine translation
We present a survey on multilingual neural machine translation (MNMT), which has gained
a lot of traction in recent years. MNMT has been useful in improving translation quality as a …
Toolformer: Language models can teach themselves to use tools
Language models (LMs) exhibit remarkable abilities to solve new tasks from just a
few examples or textual instructions, especially at scale. They also, paradoxically, struggle …
Self-instruct: Aligning language models with self-generated instructions
Large" instruction-tuned" language models (ie, finetuned to respond to instructions) have
demonstrated a remarkable ability to generalize zero-shot to new tasks. Nevertheless, they …
Large language models can self-improve
Large Language Models (LLMs) have achieved excellent performance in various tasks.
However, fine-tuning an LLM requires extensive supervision. Humans, on the other hand …
Reinforced self-training (ReST) for language modeling
Reinforcement learning from human feedback (RLHF) can improve the quality of large
language model (LLM) outputs by aligning them with human preferences. We propose a …
Want to reduce labeling cost? GPT-3 can help
Data annotation is a time-consuming and labor-intensive process for many NLP tasks.
Although there exist various methods to produce pseudo data labels, they are often task …
Rethinking pre-training and self-training
Pre-training is a dominant paradigm in computer vision. For example, supervised ImageNet
pre-training is commonly used to initialize the backbones of object detection and …
Meta pseudo labels
We present Meta Pseudo Labels, a semi-supervised learning method that achieves
a new state-of-the-art top-1 accuracy of 90.2% on ImageNet, which is 1.6% better than the …
USB: A unified semi-supervised learning benchmark for classification
Semi-supervised learning (SSL) improves model generalization by leveraging massive
unlabeled data to augment limited labeled samples. However, currently, popular SSL …