Advances in adversarial attacks and defenses in computer vision: A survey

N Akhtar, A Mian, N Kardan, M Shah - IEEE Access, 2021 - ieeexplore.ieee.org
Deep Learning is the most widely used tool in the contemporary field of computer vision. Its
ability to accurately solve complex problems is employed in vision research to learn deep …
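
Most attacks this survey covers descend from gradient-based perturbation methods. As a minimal illustration (not from the survey itself), a PyTorch sketch of FGSM, assuming a generic `model` that maps image batches to class logits:

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    """Fast Gradient Sign Method: a single signed-gradient step of size eps.
    `model`, `x`, `y`, and `eps` are illustrative names, not from the survey."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Move every pixel in the direction that increases the loss.
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # stay in a valid image range
```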

Machine learning in Python: Main developments and technology trends in data science, machine learning, and artificial intelligence

S Raschka, J Patterson, C Nolet - Information, 2020 - mdpi.com
Smarter applications are making better use of the insights gleaned from data, having an
impact on every industry and research discipline. At the core of this revolution lie the tools …

On adaptive attacks to adversarial example defenses

F Tramer, N Carlini, W Brendel… - Advances in neural …, 2020 - proceedings.neurips.cc
Adaptive attacks have (rightfully) become the de facto standard for evaluating defenses to
adversarial examples. We find, however, that typical adaptive evaluations are incomplete …
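
A hedged sketch of the iterated baseline such evaluations start from, a plain PGD loop; Tramer et al.'s point is precisely that the loss and step schedule must then be adapted to each defense, which this generic sketch does not do:

```python
import torch
import torch.nn.functional as F

def pgd(model, x, y, eps, alpha, steps):
    """Projected gradient descent inside an L-infinity ball of radius eps."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0.0, 1.0)
    for _ in range(steps):
        x_adv = x_adv.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        with torch.no_grad():
            x_adv = x_adv + alpha * x_adv.grad.sign()
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # project back to the ball
            x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()
```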

Local model poisoning attacks to Byzantine-robust federated learning

M Fang, X Cao, J Jia, N Gong - 29th USENIX Security Symposium …, 2020 - usenix.org
In federated learning, multiple client devices jointly learn a machine learning model: each
client device maintains a local model for its local training dataset, while a master device …
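
The Byzantine-robust aggregation rules this paper poisons (e.g., coordinate-wise median and trimmed mean) are easy to state; a NumPy toy, with all names assumed, shows why the median resists an outlier update while the mean does not:

```python
import numpy as np

def fedavg(updates):
    """Plain federated averaging: the coordinate-wise mean of client updates."""
    return np.mean(updates, axis=0)

def coordinate_median(updates):
    """Coordinate-wise median: a bounded number of malicious updates
    cannot drag any coordinate arbitrarily far, unlike the mean."""
    return np.median(updates, axis=0)

# Two honest clients plus one poisoned update.
updates = np.array([[0.1, 0.2], [0.2, 0.1], [100.0, -100.0]])
print(fedavg(updates))             # pulled far away by the outlier
print(coordinate_median(updates))  # stays near the honest updates
```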

Certified adversarial robustness via randomized smoothing

J Cohen, E Rosenfeld, Z Kolter - International Conference on …, 2019 - proceedings.mlr.press
We show how to turn any classifier that classifies well under Gaussian noise into a new
classifier that is certifiably robust to adversarial perturbations under the $\ell_2$ norm. While this …
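
The construction itself is short: the smoothed classifier returns whichever class the base classifier predicts most often under Gaussian perturbations of the input. A Monte Carlo sketch with assumed names (the paper's actual certification procedure adds confidence bounds and abstention):

```python
import torch

def smoothed_predict(model, x, sigma, n=1000, num_classes=10):
    """Majority vote over n Gaussian noise draws: an estimate of
    g(x) = argmax_c P(f(x + noise) = c) for a single-example batch x."""
    counts = torch.zeros(num_classes)
    for _ in range(n):
        noisy = x + sigma * torch.randn_like(x)
        counts[model(noisy).argmax().item()] += 1
    return counts.argmax().item()
```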

HopSkipJumpAttack: A query-efficient decision-based attack

J Chen, MI Jordan… - 2020 IEEE Symposium on …, 2020 - ieeexplore.ieee.org
The goal of a decision-based adversarial attack on a trained model is to generate
adversarial examples based solely on observing output labels returned by the targeted …
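
One primitive shared by decision-based attacks is a label-only binary search to the decision boundary; a sketch under assumed names (this is only the boundary-projection step, not HopSkipJump's gradient-direction estimation):

```python
import numpy as np

def boundary_search(predict, x, x_adv, tol=1e-3):
    """Binary search along the segment from clean x to adversarial x_adv,
    querying only output labels, until a point near the boundary is found."""
    clean_label = predict(x)
    lo, hi = 0.0, 1.0  # fraction of the way toward x_adv
    while hi - lo > tol:
        mid = (lo + hi) / 2
        point = (1 - mid) * x + mid * x_adv
        if predict(point) == clean_label:
            lo = mid  # still the clean label: move outward
        else:
            hi = mid  # already misclassified: move inward
    return (1 - hi) * x + hi * x_adv
```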

Provably robust deep learning via adversarially trained smoothed classifiers

H Salman, J Li, I Razenshteyn… - Advances in neural …, 2019 - proceedings.neurips.cc
Recent works have shown the effectiveness of randomized smoothing as a scalable
technique for building neural network-based classifiers that are provably robust to $\ell_2$ …
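
Both this paper and Cohen et al. rest on the same $\ell_2$ certificate: if the base classifier's most probable class under noise $\mathcal{N}(0, \sigma^2 I)$ has probability at least $\underline{p_A}$ and the runner-up at most $\overline{p_B}$, the smoothed classifier is constant within radius

```latex
R = \frac{\sigma}{2}\left( \Phi^{-1}(\underline{p_A}) - \Phi^{-1}(\overline{p_B}) \right)
```

where $\Phi^{-1}$ is the inverse standard Gaussian CDF; adversarially training the base classifier, as this paper does, aims to raise those probabilities and hence the certified radius.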

Efficient neural network robustness certification with general activation functions

H Zhang, TW Weng, PY Chen… - Advances in neural …, 2018 - proceedings.neurips.cc
Finding minimum distortion of adversarial examples and thus certifying robustness in neural
network classifiers is known to be a challenging problem. Nevertheless, recently it has …
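
Much looser than the paper's linear activation bounds, but illustrative of the certification idea, is plain interval bound propagation; a sketch under assumed names:

```python
import numpy as np

def interval_affine(lb, ub, W, b):
    """Propagate elementwise bounds [lb, ub] through x -> W @ x + b.
    Positive weights take the same-side bound, negative weights the
    opposite side. CROWN-style linear relaxations are much tighter
    and handle general activations; this is only for intuition."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    out_lb = W_pos @ lb + W_neg @ ub + b
    out_ub = W_pos @ ub + W_neg @ lb + b
    return out_lb, out_ub

def interval_relu(lb, ub):
    """ReLU is monotone, so bounds pass straight through."""
    return np.maximum(lb, 0.0), np.maximum(ub, 0.0)
```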

ML-Leaks: Model and data independent membership inference attacks and defenses on machine learning models

A Salem, Y Zhang, M Humbert, P Berrang… - arXiv preprint arXiv …, 2018 - arxiv.org
Machine learning (ML) has become a core component of many real-world applications and
training data is a key factor that drives current progress. This huge success has led Internet …
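
The simplest adversary in this line of work needs no shadow models at all: models tend to be more confident on their training points, so a threshold on a posterior statistic already leaks membership (one of the paper's relaxed adversaries reduces to roughly this). A toy sketch with assumed names and an illustrative threshold:

```python
import numpy as np

def membership_score(posteriors):
    """Max posterior as a simple membership signal: typically higher
    on training data than on unseen data."""
    return np.max(posteriors, axis=-1)

def infer_membership(posteriors, threshold=0.9):
    # threshold is illustrative; in practice it is tuned on held-out data
    return membership_score(posteriors) >= threshold
```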

MemGuard: Defending against black-box membership inference attacks via adversarial examples

J Jia, A Salem, M Backes, Y Zhang… - Proceedings of the 2019 …, 2019 - dl.acm.org
In a membership inference attack, an attacker aims to infer whether a data sample is in a
target classifier's training dataset or not. Specifically, given black-box access to the target …
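
MemGuard's utility constraint is that the defended confidence vector must remain a valid distribution with an unchanged predicted label; the defense then chooses noise that fools the attacker's membership classifier. A crude toy showing only the constraints, with the adversarial optimization replaced by random noise and all names assumed:

```python
import numpy as np

def memguard_like(posteriors, noise_scale=0.1, rng=None):
    """Toy stand-in for MemGuard: perturb the confidence vector while
    keeping the argmax label and a valid probability simplex. The real
    defense instead solves an optimization so the noise acts as an
    adversarial example against the attacker's membership classifier."""
    if rng is None:
        rng = np.random.default_rng(0)
    label = np.argmax(posteriors)
    noisy = posteriors + noise_scale * rng.standard_normal(posteriors.shape)
    noisy = np.clip(noisy, 1e-6, None)
    noisy /= noisy.sum()
    if np.argmax(noisy) != label:  # never flip the predicted label
        return posteriors
    return noisy
```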