Adversarial training for free!

A Shafahi, M Najibi, MA Ghiasi, Z Xu… - Advances in neural …, 2019 - proceedings.neurips.cc
Adversarial training, in which a network is trained on adversarial examples, is one of the few
defenses against adversarial attacks that withstands strong attacks. Unfortunately, the high …
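The "free" trick of this paper is to recycle the gradient of one backward pass for both the weight update and the perturbation update; the snippet above only names the general setting. As a hedged illustration of plain adversarial training (not the paper's recycling scheme), the sketch below trains a logistic-regression model on FGSM-perturbed inputs; all names (`fgsm`, `train_adversarial`) and the toy data are illustrative assumptions, not from the paper.

```python
import numpy as np

def sigmoid(z):
    # Clip to avoid overflow in exp for large |z|.
    return 1.0 / (1.0 + np.exp(-np.clip(z, -60, 60)))

def fgsm(x, y, w, b, eps):
    """Fast Gradient Sign Method: step along the sign of the input gradient."""
    p = sigmoid(x @ w + b)            # predicted probabilities
    grad_x = (p - y)[:, None] * w     # d(logistic loss)/dx per sample
    return x + eps * np.sign(grad_x)

def train_adversarial(x, y, eps=0.1, lr=0.5, steps=200):
    """Train on adversarially perturbed batches instead of clean ones."""
    rng = np.random.default_rng(0)
    w = rng.normal(size=x.shape[1]) * 0.01
    b = 0.0
    for _ in range(steps):
        x_adv = fgsm(x, y, w, b, eps)          # craft adversarial batch
        p = sigmoid(x_adv @ w + b)
        grad_w = x_adv.T @ (p - y) / len(y)    # gradient on adversarial data
        grad_b = np.mean(p - y)
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b
```

The point of the sketch is the inner `fgsm` call: each weight update sees perturbed inputs, which is exactly the loop whose cost the "for free" method amortizes.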

Adversarial attacks and defenses in deep learning for image recognition: A survey

J Wang, C Wang, Q Lin, C Luo, C Wu, J Li - Neurocomputing, 2022 - Elsevier
In recent years, research on adversarial attacks and defense mechanisms has attracted
much attention. It has been observed that adversarial examples crafted with small malicious …


You only propagate once: Accelerating adversarial training via maximal principle

D Zhang, T Zhang, Y Lu, Z Zhu… - Advances in neural …, 2019 - proceedings.neurips.cc
Deep learning achieves state-of-the-art results in many tasks in computer vision and natural
language processing. However, recent works have shown that deep networks can be …

Resilience and resilient systems of artificial intelligence: taxonomy, models and methods

V Moskalenko, V Kharchenko, A Moskalenko… - Algorithms, 2023 - mdpi.com
Artificial intelligence systems are increasingly being used in industrial applications, security
and military contexts, disaster response complexes, policing and justice practices, finance …

Disentangling adversarial robustness and generalization

D Stutz, M Hein, B Schiele - Proceedings of the IEEE/CVF …, 2019 - openaccess.thecvf.com
Obtaining deep networks that are robust against adversarial examples and generalize well
is an open problem. A recent hypothesis even states that both robust and accurate models …

Learning smooth neural functions via Lipschitz regularization

HTD Liu, F Williams, A Jacobson, S Fidler… - ACM SIGGRAPH 2022 …, 2022 - dl.acm.org
Neural implicit fields have recently emerged as a useful representation for 3D shapes.
These fields are commonly represented as neural networks which map latent descriptors …
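A common soft version of Lipschitz regularization for dense layers penalizes each layer's spectral norm, since the product of layer spectral norms upper-bounds the network's Lipschitz constant. The sketch below is an illustrative assumption, not this paper's method (which targets neural implicit fields): it estimates the spectral norm by power iteration and applies a hinge penalty above a target constant.

```python
import numpy as np

def spectral_norm(W, iters=50):
    """Largest singular value of W, estimated by power iteration."""
    v = np.ones(W.shape[1]) / np.sqrt(W.shape[1])
    for _ in range(iters):
        u = W @ v
        u /= np.linalg.norm(u)
        v = W.T @ u
        v /= np.linalg.norm(v)
    return float(u @ W @ v)   # converges to sigma_max(W)

def lipschitz_penalty(weights, target=1.0):
    """Hinge penalty pushing each layer's Lipschitz bound below `target`."""
    return sum(max(0.0, spectral_norm(W) - target) for W in weights)
```

Adding `lipschitz_penalty` to a training loss smoothly discourages layers whose spectral norm exceeds the target, rather than hard-constraining them.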

Improving performance of deep learning models with axiomatic attribution priors and expected gradients

G Erion, JD Janizek, P Sturmfels… - Nature machine …, 2021 - nature.com
Recent research has demonstrated that feature attribution methods for deep networks can
themselves be incorporated into training; these attribution priors optimize for a model whose …

Denoising self-attentive sequential recommendation

H Chen, Y Lin, M Pan, L Wang, CCM Yeh, X Li… - Proceedings of the 16th …, 2022 - dl.acm.org
Transformer-based sequential recommenders are very powerful for capturing both short-
term and long-term sequential item dependencies. This is mainly attributed to their unique …

A survey of regularization strategies for deep models

R Moradi, R Berangi, B Minaei - Artificial Intelligence Review, 2020 - Springer
The most critical concern in machine learning is how to make an algorithm that performs well
both on training data and new data. The no-free-lunch theorem implies that each specific task …

Robust learning with Jacobian regularization

J Hoffman, DA Roberts, S Yaida - arXiv preprint arXiv:1908.02729, 2019 - academia.edu
Design of reliable systems must guarantee stability against input perturbations. In
machine learning, such guarantee entails preventing overfitting and ensuring robustness of …
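The regularizer in question is the squared Frobenius norm of the input-output Jacobian, which penalizes the model's sensitivity to small input perturbations. The sketch below is only an illustration of that quantity, estimated by central finite differences for an arbitrary function `f`; the paper itself computes it efficiently with random projections of backprop gradients, which this sketch does not reproduce.

```python
import numpy as np

def jacobian_fro_sq(f, x, h=1e-5):
    """Approximate ||df/dx||_F^2 at a single input x via central differences."""
    x = np.asarray(x, dtype=float)
    total = 0.0
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        col = (f(x + e) - f(x - e)) / (2 * h)  # i-th column of the Jacobian
        total += float(np.sum(col ** 2))
    return total
```

In training, this value (averaged over a batch) would be added to the task loss with a small weight, trading a little accuracy for flatter, more robust input-output behavior.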