Learning from noisy labels with deep neural networks: A survey
Deep learning has achieved remarkable success in numerous domains with the help of large amounts of data. However, the quality of data labels is a concern because of the lack of …
Adversarial attacks and defenses in deep learning: From a perspective of cybersecurity
The outstanding performance of deep neural networks has promoted deep learning
applications in a broad set of domains. However, the potential risks caused by adversarial …
Adversarial examples are not bugs, they are features
Adversarial examples have attracted significant attention in machine learning, but the
reasons for their existence and pervasiveness remain unclear. We demonstrate that …
Label-consistent backdoor attacks
Deep neural networks have been demonstrated to be vulnerable to backdoor attacks.
Specifically, by injecting a small number of maliciously constructed inputs into the training …
Dataset security for machine learning: Data poisoning, backdoor attacks, and defenses
As machine learning systems grow in scale, so do their training data requirements, forcing
practitioners to automate and outsource the curation of training data in order to achieve state …
Adversarial training and robustness for multiple perturbations
Defenses against adversarial examples, such as adversarial training, are typically tailored to
a single perturbation type (e.g., small $\ell_\infty$-noise). For other perturbations, these …
Are adversarial examples inevitable?
A wide range of defenses have been proposed to harden neural networks against
adversarial attacks. However, a pattern has emerged in which the majority of adversarial …
Adversarial machine learning in image classification: A survey toward the defender's perspective
GR Machado, E Silva, RR Goldschmidt - ACM Computing Surveys …, 2021 - dl.acm.org
Deep Learning algorithms have achieved state-of-the-art performance for Image
Classification. For this reason, they have been used even in security-critical applications …
Rademacher complexity for adversarially robust generalization
Many machine learning models are vulnerable to adversarial attacks; for example, adding
adversarial perturbations that are imperceptible to humans can often make machine …
Feature purification: How adversarial training performs robust deep learning
Despite the empirical success of using adversarial training to defend deep learning models
against adversarial perturbations, it remains rather unclear what the underlying principles are …