Learning from noisy labels with deep neural networks: A survey

H Song, M Kim, D Park, Y Shin… - IEEE transactions on …, 2022 - ieeexplore.ieee.org
Deep learning has achieved remarkable success in numerous domains with help from large
amounts of big data. However, the quality of data labels is a concern because of the lack of …

Adversarial attacks and defenses in deep learning: From a perspective of cybersecurity

S Zhou, C Liu, D Ye, T Zhu, W Zhou, PS Yu - ACM Computing Surveys, 2022 - dl.acm.org
The outstanding performance of deep neural networks has promoted deep learning
applications in a broad set of domains. However, the potential risks caused by adversarial …

Adversarial examples are not bugs, they are features

A Ilyas, S Santurkar, D Tsipras… - Advances in neural …, 2019 - proceedings.neurips.cc
Adversarial examples have attracted significant attention in machine learning, but the
reasons for their existence and pervasiveness remain unclear. We demonstrate that …

Label-consistent backdoor attacks

A Turner, D Tsipras, A Madry - arXiv preprint arXiv:1912.02771, 2019 - arxiv.org
Deep neural networks have been demonstrated to be vulnerable to backdoor attacks.
Specifically, by injecting a small number of maliciously constructed inputs into the training …

Dataset security for machine learning: Data poisoning, backdoor attacks, and defenses

M Goldblum, D Tsipras, C **e, X Chen… - … on Pattern Analysis …, 2022 - ieeexplore.ieee.org
As machine learning systems grow in scale, so do their training data requirements, forcing
practitioners to automate and outsource the curation of training data in order to achieve state …

Adversarial training and robustness for multiple perturbations

F Tramer, D Boneh - Advances in neural information …, 2019 - proceedings.neurips.cc
Defenses against adversarial examples, such as adversarial training, are typically tailored to
a single perturbation type (e.g., small $\ell_\infty$-noise). For other perturbations, these …

Are adversarial examples inevitable?

A Shafahi, WR Huang, C Studer, S Feizi… - arXiv preprint arXiv …, 2018 - arxiv.org
A wide range of defenses have been proposed to harden neural networks against
adversarial attacks. However, a pattern has emerged in which the majority of adversarial …

Adversarial machine learning in image classification: A survey toward the defender's perspective

GR Machado, E Silva, RR Goldschmidt - ACM Computing Surveys …, 2021 - dl.acm.org
Deep Learning algorithms have achieved state-of-the-art performance for Image
Classification. For this reason, they have been used even in security-critical applications …

Rademacher complexity for adversarially robust generalization

D Yin, R Kannan, P Bartlett - International conference on …, 2019 - proceedings.mlr.press
Many machine learning models are vulnerable to adversarial attacks; for example, adding
adversarial perturbations that are imperceptible to humans can often make machine …

Feature purification: How adversarial training performs robust deep learning

Z Allen-Zhu, Y Li - 2021 IEEE 62nd Annual Symposium on …, 2022 - ieeexplore.ieee.org
Despite the empirical success of using adversarial training to defend deep learning models
against adversarial perturbations, so far, it still remains rather unclear what the principles are …