Agreement-on-the-line: Predicting the performance of neural networks under distribution shift
Recently, Miller et al. showed that a model's in-distribution (ID) accuracy has a strong linear
correlation with its out-of-distribution (OOD) accuracy on several OOD benchmarks, a …
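The ID/OOD linear trend the snippet describes can be sketched with a simple linear fit: collect (ID accuracy, OOD accuracy) pairs across a family of models, fit a line, and extrapolate for a new model. This is a minimal illustration with made-up accuracy values, not the paper's actual data or method.

```python
import numpy as np

# Hypothetical (ID accuracy, OOD accuracy) pairs for a family of models;
# real studies collect these from many architectures on a benchmark shift.
id_acc = np.array([0.72, 0.78, 0.81, 0.85, 0.90, 0.93])
ood_acc = np.array([0.51, 0.58, 0.62, 0.67, 0.74, 0.78])

# Fit the observed linear trend: ood ≈ a * id + b.
a, b = np.polyfit(id_acc, ood_acc, 1)

def predict_ood(id_accuracy: float) -> float:
    """Extrapolate a new model's OOD accuracy from its ID accuracy."""
    return a * id_accuracy + b

print(round(predict_ood(0.88), 3))
```

The practical appeal is that the fit requires only labeled ID data plus one OOD evaluation per reference model; a new model's OOD accuracy is then estimated without labeling the shifted distribution.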
ID and OOD performance are sometimes inversely correlated on real-world datasets
Several studies have compared the in-distribution (ID) and out-of-distribution (OOD)
performance of models in computer vision and NLP. They report a frequent positive …
T-MARS: Improving visual representations by circumventing text feature learning
Large web-sourced multimodal datasets have powered a slew of new methods for learning
general-purpose visual representations, advancing the state of the art in computer vision …
Characterizing datapoints via second-split forgetting
Researchers investigating example hardness have increasingly focused on the dynamics by
which neural networks learn and forget examples throughout training. Popular metrics …
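The learning-and-forgetting dynamics mentioned here are often summarized with per-example metrics such as a count of "forgetting events": how many times an example flips from correctly to incorrectly classified across training epochs. The sketch below follows that Toneva et al.-style counting metric as an illustration of the general idea, not this paper's specific second-split method.

```python
def forgetting_events(correct_per_epoch: list) -> int:
    """Count epoch-to-epoch flips from correct (True) to incorrect (False).

    `correct_per_epoch[t]` records whether the model classified this
    example correctly at the end of epoch t.
    """
    return sum(
        1
        for prev, curr in zip(correct_per_epoch, correct_per_epoch[1:])
        if prev and not curr
    )

# An example learned at epoch 0, forgotten at 1, relearned at 2, forgotten at 4:
print(forgetting_events([True, False, True, True, False]))  # → 2
```

Examples with many forgetting events are commonly treated as "hard" or potentially mislabeled, which is the kind of datapoint characterization the entry's title refers to.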
Understanding the detrimental class-level effects of data augmentation
Data augmentation (DA) encodes invariance and provides implicit regularization critical to a
model's performance in image classification tasks. However, while DA improves average …
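Detecting the class-level effects the snippet alludes to requires measuring accuracy per class rather than on average, then comparing models trained with and without augmentation. A minimal per-class accuracy helper (an illustrative sketch, not the paper's evaluation code) might look like:

```python
import numpy as np

def per_class_accuracy(labels: np.ndarray, preds: np.ndarray, num_classes: int) -> list:
    """Accuracy computed separately for each class; NaN for absent classes."""
    accs = []
    for c in range(num_classes):
        mask = labels == c
        accs.append(float((preds[mask] == c).mean()) if mask.any() else float("nan"))
    return accs

labels = np.array([0, 0, 1, 1, 1])
preds = np.array([0, 1, 1, 1, 0])
print(per_class_accuracy(labels, preds, num_classes=2))  # → [0.5, 0.666...]
```

Comparing these per-class vectors between an augmented and a non-augmented model exposes cases where average accuracy rises while individual classes degrade.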
Unlocking accuracy and fairness in differentially private image classification
Privacy-preserving machine learning aims to train models on private data without leaking
sensitive information. Differential privacy (DP) is considered the gold standard framework for …
Tools for verifying neural models' training data
D Choi, Y Shavit, DK Duvenaud - Advances in Neural …, 2023 - proceedings.neurips.cc
It is important that consumers and regulators can verify the provenance of large neural
models to evaluate their capabilities and risks. We introduce the concept of a "Proof-of …
Protecting against simultaneous data poisoning attacks
Current backdoor defense methods are evaluated against a single attack at a time. This is
unrealistic, as powerful machine learning systems are trained on large datasets scraped …
Reconsidering Sentence-Level Sign Language Translation
G Tanzer, M Shengelia, K Harrenstien… - arXiv preprint arXiv …, 2024 - arxiv.org
Historically, sign language machine translation has been posed as a sentence-level task:
datasets consisting of continuous narratives are chopped up and presented to the model as …
Rethinking streaming machine learning evaluation
While most work on evaluating machine learning (ML) models focuses on computing
accuracy on batches of data, tracking accuracy alone in a streaming setting (i.e., unbounded …
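The streaming-evaluation concern raised here can be illustrated with a sliding-window accuracy tracker: instead of one accuracy number over a fixed batch, accuracy is maintained over the most recent predictions so drift in an unbounded stream stays visible. The class name, window size, and API below are assumptions for illustration, not the paper's proposed evaluation framework.

```python
from collections import deque

class StreamingAccuracy:
    """Track accuracy over a sliding window of the most recent predictions."""

    def __init__(self, window: int = 1000):
        # deque with maxlen automatically evicts the oldest entry when full.
        self.hits = deque(maxlen=window)

    def update(self, prediction, label) -> None:
        self.hits.append(prediction == label)

    def accuracy(self) -> float:
        return sum(self.hits) / len(self.hits) if self.hits else 0.0

tracker = StreamingAccuracy(window=3)
for pred, label in [(1, 1), (0, 1), (1, 1), (1, 1)]:
    tracker.update(pred, label)
print(tracker.accuracy())  # accuracy over the last 3 stream items
```

A batch-mean metric over the whole stream would smooth away recent degradation; the windowed estimate reacts within `window` items, which is the contrast the entry's title points at.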