Adversarial attacks and defenses in deep learning

K Ren, T Zheng, Z Qin, X Liu - Engineering, 2020 - Elsevier
With the rapid development of artificial intelligence (AI) and deep learning (DL) techniques,
it is critical to ensure the security and robustness of deployed algorithms. Recently, the …

Adversarial policies: Attacking deep reinforcement learning

A Gleave, M Dennis, C Wild, N Kant, S Levine… - arXiv preprint arXiv …, 2019 - arxiv.org
Deep reinforcement learning (RL) policies are known to be vulnerable to adversarial
perturbations to their observations, similar to adversarial examples for classifiers. However …

Adversarial examples make strong poisons

L Fowl, M Goldblum, P Chiang… - Advances in …, 2021 - proceedings.neurips.cc
The adversarial machine learning literature is largely partitioned into evasion attacks on
testing data and poisoning attacks on training data. In this work, we show that adversarial …

Disentangling adversarial robustness and generalization

D Stutz, M Hein, B Schiele - … of the IEEE/CVF conference on …, 2019 - openaccess.thecvf.com
Obtaining deep networks that are robust against adversarial examples and generalize well
is an open problem. A recent hypothesis even states that both robust and accurate models …

The double-edged sword of implicit bias: Generalization vs. robustness in ReLU networks

S Frei, G Vardi, P Bartlett… - Advances in neural …, 2023 - proceedings.neurips.cc
In this work, we study the implications of the implicit bias of gradient flow on generalization
and adversarial robustness in ReLU networks. We focus on a setting where the data …

DISCO: Adversarial defense with local implicit functions

CH Ho, N Vasconcelos - Advances in neural information …, 2022 - proceedings.neurips.cc
The problem of adversarial defenses for image classification, where the goal is to robustify a
classifier against adversarial examples, is considered. Inspired by the hypothesis that these …

Relating adversarially robust generalization to flat minima

D Stutz, M Hein, B Schiele - Proceedings of the IEEE/CVF …, 2021 - openaccess.thecvf.com
Adversarial training (AT) has become the de facto standard for obtaining models robust against
adversarial examples. However, AT exhibits severe robust overfitting: cross-entropy loss on …

RobOT: Robustness-oriented testing for deep learning systems

J Wang, J Chen, Y Sun, X Ma, D Wang… - 2021 IEEE/ACM …, 2021 - ieeexplore.ieee.org
Recently, there has been significant growth of interest in applying software engineering
techniques to the quality assurance of deep learning (DL) systems. One popular direction is …

Robust load forecasting towards adversarial attacks via Bayesian learning

Y Zhou, Z Ding, Q Wen, Y Wang - IEEE Transactions on Power …, 2022 - ieeexplore.ieee.org
Electric load forecasting is an essential problem for the power industry, with a
significant impact on power system operation. Currently, deep learning has proven to be an …

The dimpled manifold model of adversarial examples in machine learning

A Shamir, O Melamed, O BenShmuel - arXiv preprint arXiv:2106.10151, 2021 - arxiv.org
The extreme fragility of deep neural networks, when presented with tiny perturbations in their
inputs, was independently discovered by several research groups in 2013. However …