Deep long-tailed learning: A survey
Deep long-tailed learning, one of the most challenging problems in visual recognition, aims
to train well-performing deep models from a large number of images that follow a long-tailed …
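The snippet only defines the long-tailed setting, so a small illustration may help. Below is a minimal sketch of class-balanced resampling, one common long-tailed baseline rather than any specific method from the survey; the synthetic class counts and batch size are illustrative assumptions.

```python
# Minimal sketch of class-balanced resampling for long-tailed labels.
# The class counts and batch size are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic long-tailed labels: the head class has 1000 samples, the tail 10.
counts = [1000, 100, 10]
labels = np.concatenate([np.full(n, c) for c, n in enumerate(counts)])

# Weight each sample by the inverse frequency of its class, so every
# class contributes equally to a batch in expectation.
class_freq = np.bincount(labels)
weights = 1.0 / class_freq[labels]
weights /= weights.sum()

batch = rng.choice(len(labels), size=64, replace=True, p=weights)
print(np.bincount(labels[batch]))  # roughly uniform over the 3 classes
```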
Just train twice: Improving group robustness without training group information
Standard training via empirical risk minimization (ERM) can produce models that achieve
low error on average but high error on minority groups, especially in the presence of …
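The snippet stops before the method, but the two-stage recipe the title refers to can be sketched as: train a standard ERM model, then retrain with the examples it misclassified upweighted, with no group labels needed. In the sketch below, the scikit-learn logistic-regression stand-in, the synthetic data, and the upweighting factor of 20 are all illustrative assumptions, not the paper's exact procedure.

```python
# Two-stage sketch: ERM, then retrain with the ERM error set upweighted.
# Model, data, and the upweighting factor are assumptions for illustration.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, random_state=0)

# Stage 1: plain ERM.
erm = LogisticRegression(max_iter=1000).fit(X, y)

# Stage 2: upweight the misclassified examples (a proxy for minority
# groups that requires no group annotations) and retrain from scratch.
errors = erm.predict(X) != y
sample_weight = np.where(errors, 20.0, 1.0)  # assumed upweighting factor
jtt = LogisticRegression(max_iter=1000).fit(X, y, sample_weight=sample_weight)
```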
Improving out-of-distribution robustness via selective augmentation
Machine learning algorithms typically assume that training and test examples are
drawn from the same distribution. However, distribution shift is a common problem in real …
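"Selective augmentation" here refers to interpolating training examples selectively rather than at random. A minimal sketch of one such scheme, mixing two examples that share a label but come from different environments, is below; the synthetic arrays, the Beta(2, 2) mixing distribution, and the helper name intra_label_mix are all assumptions for illustration.

```python
# Sketch of intra-label mixup across environments: interpolate two
# samples with the same label from different environments. All data
# and the mixing distribution are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(100, 8))
y = rng.integers(0, 2, size=100)
env = rng.integers(0, 2, size=100)  # which environment each point is from

def intra_label_mix(i):
    # Pick a partner with the same label but a different environment.
    mask = (y == y[i]) & (env != env[i])
    j = rng.choice(np.flatnonzero(mask))
    lam = rng.beta(2.0, 2.0)
    return lam * x[i] + (1.0 - lam) * x[j], y[i]

x_mix, y_mix = intra_label_mix(0)
```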
WILDS: A benchmark of in-the-wild distribution shifts
Distribution shifts—where the training distribution differs from the test distribution—can
substantially degrade the accuracy of machine learning (ML) systems deployed in the wild …
Self-supervised learning is more robust to dataset imbalance
Self-supervised learning (SSL) is a scalable way to learn general visual representations
since it learns without labels. However, large-scale unlabeled datasets in the wild often have …
Open-world semi-supervised learning
A fundamental limitation of applying semi-supervised learning in real-world settings is the
assumption that unlabeled test data contains only classes previously encountered in the …
FINE samples for learning with noisy labels
Modern deep neural networks (DNNs) become brittle when the datasets contain noisy
(incorrect) class labels. Robust techniques in the presence of noisy labels can be …
Robust learning with progressive data expansion against spurious correlation
While deep learning models have shown remarkable performance in various tasks, they are
susceptible to learning non-generalizable spurious features rather than the core features …
Coresets for robust training of deep neural networks against noisy labels
Modern neural networks have the capacity to overfit noisy labels frequently found in real-
world datasets. Although great progress has been made, existing techniques are very …
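The paper constructs coresets with formal guarantees; as a much simpler stand-in for the general idea of training on a trusted subset, the sketch below keeps, per class, the fraction of samples with the smallest current loss, a common "small-loss" heuristic rather than the paper's algorithm. The stand-in losses, labels, and keep fraction are synthetic assumptions.

```python
# Simple clean-subset heuristic (not the paper's coreset construction):
# per class, keep the fraction of samples with the smallest loss and
# treat them as likely clean. All inputs here are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(0)
losses = rng.exponential(size=1000)   # stand-in per-sample training losses
labels = rng.integers(0, 10, size=1000)

keep_frac = 0.5  # assumed fraction of samples retained per class
coreset = []
for c in range(10):
    idx = np.flatnonzero(labels == c)
    k = max(1, int(keep_frac * len(idx)))
    coreset.extend(idx[np.argsort(losses[idx])[:k]])
coreset = np.array(coreset)  # indices to train on in the next round
```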
Investigating why contrastive learning benefits robustness against label noise
Self-supervised Contrastive Learning (CL) has recently been shown to be very
effective in preventing deep networks from overfitting noisy labels. Despite its empirical …
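Since the snippet leans on how contrastive learning behaves, a minimal numpy sketch of the InfoNCE objective (SimCLR-style, one direction, with in-batch negatives) may help ground it; the random embeddings and the temperature of 0.5 are illustrative assumptions, not values from the paper.

```python
# Minimal InfoNCE sketch: two augmented views per image, the matching
# view is the positive, other views in the batch are negatives.
# Embeddings and temperature are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)
n, d, tau = 8, 16, 0.5

# Two augmented "views" of the same n images, embedded and L2-normalised.
z1 = rng.normal(size=(n, d)); z1 /= np.linalg.norm(z1, axis=1, keepdims=True)
z2 = rng.normal(size=(n, d)); z2 /= np.linalg.norm(z2, axis=1, keepdims=True)

# Cosine similarity of every view-1 embedding to every view-2 embedding.
sim = z1 @ z2.T / tau

# For row i, column i is the positive pair; all other columns are negatives.
log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
loss = -np.mean(np.diag(log_prob))
print(loss)
```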