Byzantine machine learning: A primer
The problem of Byzantine resilience in distributed machine learning, aka Byzantine machine
learning, consists of designing distributed algorithms that can train an accurate model …
Recent advances in algorithmic high-dimensional robust statistics
I Diakonikolas, DM Kane - arXiv preprint, 2019 - arxiv.org
Gradient descent with early stopping is provably robust to label noise for overparameterized neural networks
Modern neural networks are typically trained in an over-parameterized regime where the
parameters of the model far exceed the size of the training data. Such neural networks in …
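A minimal sketch of the early-stopping idea, assuming a plain NumPy linear model and a small clean validation set as the stopping signal; the paper's analysis concerns over-parameterized networks and does not rely on a validation split, so names and parameters here are purely illustrative.

```python
import numpy as np

def gd_with_early_stopping(X, y, X_val, y_val, lr=0.01, max_steps=5000, patience=20):
    """Generic early-stopping sketch: run plain gradient descent on the noisy
    training set and stop once the loss on a clean validation set has not
    improved for `patience` consecutive steps."""
    w = np.zeros(X.shape[1])
    best_val, best_w, since_best = np.inf, w.copy(), 0
    for _ in range(max_steps):
        grad = X.T @ (X @ w - y) / len(y)          # gradient of mean squared loss
        w -= lr * grad
        val_loss = np.mean((X_val @ w - y_val) ** 2)
        if val_loss < best_val:
            best_val, best_w, since_best = val_loss, w.copy(), 0
        else:
            since_best += 1
            if since_best >= patience:             # validation loss stopped improving
                break
    return best_w

# toy usage with 30% corrupted labels and a clean validation split (illustrative)
rng = np.random.default_rng(0)
X, w_true = rng.normal(size=(200, 10)), rng.normal(size=10)
y = X @ w_true
y[:60] = rng.normal(size=60)                       # corrupted labels
X_val = rng.normal(size=(40, 10))
w_hat = gd_with_early_stopping(X, y, X_val, X_val @ w_true)
```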
Certified defenses for data poisoning attacks
J Steinhardt, PWW Koh… - Advances in neural …, 2017 - proceedings.neurips.cc
Abstract Machine learning systems trained on user-provided data are susceptible to data
poisoning attacks, whereby malicious users inject false training data with the aim of …
Non-convex optimization for machine learning
P Jain, P Kar - Foundations and Trends® in Machine …, 2017 - nowpublishers.com
A vast majority of machine learning algorithms train their models and perform inference by
solving optimization problems. In order to capture the learning and prediction problems …
Efficient and accurate extraction of in vivo calcium signals from microendoscopic video data
P Zhou, SL Resendez, J Rodriguez-Romaguera… - eLife, 2018 - elifesciences.org
In vivo calcium imaging through microendoscopic lenses enables imaging of previously
inaccessible neuronal populations deep within the brains of freely moving animals …
Byzantine stochastic gradient descent
This paper studies the problem of distributed stochastic optimization in an adversarial setting
where, out of $m$ machines which allegedly compute stochastic gradients every iteration …
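The adversarial setting can be illustrated with a toy aggregation step. The sketch below uses a coordinate-wise median as a generic robust aggregator; this is a common stand-in and not the specific algorithm analyzed in the paper.

```python
import numpy as np

def byzantine_sgd_step(w, worker_grads, lr=0.1):
    """One robust-aggregation step: some reported gradients may be arbitrary
    (Byzantine), so combine them with a coordinate-wise median instead of the
    mean before applying the update."""
    grads = np.stack(worker_grads)           # shape: (m, d)
    robust_grad = np.median(grads, axis=0)   # coordinate-wise median
    return w - lr * robust_grad

# toy usage: 7 honest workers, 3 Byzantine workers reporting garbage
rng = np.random.default_rng(0)
w = np.zeros(5)
true_grad = np.ones(5)
honest = [true_grad + 0.01 * rng.normal(size=5) for _ in range(7)]
byzantine = [100.0 * rng.normal(size=5) for _ in range(3)]
w = byzantine_sgd_step(w, honest + byzantine)
print(w)  # close to -0.1 * true_grad despite the corrupted reports
```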
Sever: A robust meta-algorithm for stochastic optimization
In high dimensions, most machine learning methods are brittle to even a small fraction of
structured outliers. To address this, we introduce a new meta-algorithm that can take in a …
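A simplified sketch of the Sever-style filtering idea, assuming a least-squares base learner and a deterministic removal rule; the paper's meta-algorithm works with any approximate learner and removes points randomly in proportion to their outlier scores, so this is only an illustration.

```python
import numpy as np

def sever_least_squares(X, y, frac_remove=0.05, rounds=4):
    """Simplified Sever-style filtering for least squares: fit, score each
    sample by how strongly its centered gradient projects onto the top
    singular direction, drop the highest-scoring fraction, and refit."""
    active = np.arange(len(y))
    for _ in range(rounds):
        Xa, ya = X[active], y[active]
        w, *_ = np.linalg.lstsq(Xa, ya, rcond=None)      # base learner
        grads = (Xa @ w - ya)[:, None] * Xa              # per-sample gradients
        centered = grads - grads.mean(axis=0)
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        scores = (centered @ vt[0]) ** 2                 # outlier scores
        keep = np.argsort(scores)[: int(len(active) * (1 - frac_remove))]
        active = active[keep]
    w, *_ = np.linalg.lstsq(X[active], y[active], rcond=None)
    return w
```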
Learning with bad training data via iterative trimmed loss minimization
Y Shen, S Sanghavi - International conference on machine …, 2019 - proceedings.mlr.press
In this paper, we study a simple and generic framework to tackle the problem of learning
model parameters when a fraction of the training samples are corrupted. Our approach is …
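A minimal sketch of the iterative trimmed loss idea on linear regression, with illustrative names and parameters not taken from the paper: alternately refit on the currently trusted samples and re-select the fraction of samples with the smallest loss.

```python
import numpy as np

def iterative_trimmed_least_squares(X, y, keep_frac=0.8, iters=10):
    """Sketch of iterative trimmed loss minimization for linear regression:
    fit on the trusted subset, then keep the keep_frac fraction of samples
    with the smallest current loss, and repeat."""
    n = len(y)
    k = int(keep_frac * n)
    trusted = np.arange(n)                      # start by trusting every sample
    for _ in range(iters):
        w, *_ = np.linalg.lstsq(X[trusted], y[trusted], rcond=None)
        losses = (X @ w - y) ** 2               # per-sample squared loss
        trusted = np.argsort(losses)[:k]        # keep the k smallest losses
    return w
```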