Removing batch normalization boosts adversarial training

H Wang, A Zhang, S Zheng, X Shi… - … on Machine Learning, 2022 - proceedings.mlr.press
Adversarial training (AT) defends deep neural networks against adversarial attacks. One
challenge that limits its practical application is the performance degradation on clean …

Graph mixup with soft alignments

H Ling, Z Jiang, M Liu, S Ji… - … Conference on Machine …, 2023 - proceedings.mlr.press
We study graph data augmentation by mixup, which has been used successfully on images.
A key operation of mixup is to compute a convex combination of a pair of inputs. This …
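For reference, the convex-combination step named in this snippet is the core of mixup. Below is a minimal sketch assuming standard input mixup with a Beta(alpha, alpha) interpolation weight; the paper's graph variant additionally computes soft node alignments before mixing, which is not shown, and all names and defaults here are illustrative:

    import numpy as np

    def mixup_pair(x1, y1, x2, y2, alpha=0.2, rng=np.random.default_rng(0)):
        # Sample the interpolation weight lambda from Beta(alpha, alpha).
        lam = rng.beta(alpha, alpha)
        # Convex combination of the two inputs and their (one-hot) labels.
        x_mix = lam * x1 + (1.0 - lam) * x2
        y_mix = lam * y1 + (1.0 - lam) * y2
        return x_mix, y_mix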

Model orthogonalization: Class distance hardening in neural networks for better security

G Tao, Y Liu, G Shen, Q Xu, S An… - … IEEE Symposium on …, 2022 - ieeexplore.ieee.org
The distance between two classes for a deep learning classifier can be measured by the
level of difficulty in flipping all (or a majority of) samples in one class to the other. The class …
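One way to make this "difficulty of flipping" concrete is to count how many targeted attack steps are needed before a majority of one class's samples is classified as the other: fewer steps suggest the classes are closer for that model. A minimal PyTorch sketch using a targeted PGD-style attack follows; the function, threshold, and parameters are illustrative assumptions, not the paper's construction:

    import torch
    import torch.nn.functional as F

    def flip_difficulty(model, x, target_class, eps=0.03, step=0.005, max_iters=100):
        # x: a batch of samples from a single source class.
        # Returns the number of targeted PGD steps taken before a majority
        # of the batch is classified as `target_class`.
        target = torch.full((x.size(0),), target_class, dtype=torch.long, device=x.device)
        x_adv = x.clone()
        for it in range(max_iters):
            x_adv = x_adv.detach().requires_grad_(True)
            logits = model(x_adv)
            if (logits.argmax(dim=1) == target).float().mean() > 0.5:
                return it  # majority flipped
            loss = F.cross_entropy(logits, target)
            grad, = torch.autograd.grad(loss, x_adv)
            with torch.no_grad():
                x_adv = x_adv - step * grad.sign()          # move toward the target class
                x_adv = x + (x_adv - x).clamp(-eps, eps)    # project back into the eps-ball
        return max_iters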

A unified understanding of deep NLP models for text classification

Z Li, X Wang, W Yang, J Wu, Z Zhang… - … on Visualization and …, 2022 - ieeexplore.ieee.org
The rapid development of deep natural language processing (NLP) models for text
classification has led to an urgent need for a unified understanding of these models …

Test accuracy vs. generalization gap: Model selection in NLP without accessing training or testing data

Y Yang, R Theisen, L Hodgkinson… - Proceedings of the 29th …, 2023 - dl.acm.org
Selecting suitable architecture parameters and training hyperparameters is essential for
enhancing machine learning (ML) model performance. Several recent empirical studies …

Noisy feature mixup

SH Lim, NB Erichson, F Utrera, W Xu… - arXiv preprint arXiv …, 2021 - arxiv.org
We introduce Noisy Feature Mixup (NFM), an inexpensive yet effective method for data
augmentation that combines the best of interpolation-based training and noise injection …
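The snippet describes combining the two ingredients directly: interpolate a pair of examples, then inject noise. A minimal sketch, assuming mixup applied to (hidden) features followed by multiplicative and additive Gaussian noise; the sigma values, Beta(alpha, alpha) sampling, and function names are assumptions, not the paper's exact formulation:

    import numpy as np

    def noisy_feature_mixup(h1, y1, h2, y2, alpha=0.2,
                            sigma_add=0.1, sigma_mult=0.1,
                            rng=np.random.default_rng(0)):
        # Interpolation step: ordinary mixup, here applied to feature arrays.
        lam = rng.beta(alpha, alpha)
        h_mix = lam * h1 + (1.0 - lam) * h2
        y_mix = lam * y1 + (1.0 - lam) * y2
        # Noise-injection step: multiplicative and additive Gaussian noise.
        mult = 1.0 + sigma_mult * rng.standard_normal(h_mix.shape)
        add = sigma_add * rng.standard_normal(h_mix.shape)
        return h_mix * mult + add, y_mix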

Robustifying models against adversarial attacks by Langevin dynamics

V Srinivasan, C Rohrer, A Marban, KR Müller… - Neural Networks, 2021 - Elsevier
Adversarial attacks on deep learning models have compromised their performance
considerably. As remedies, a number of defense methods have been proposed, which, however, …

Fantastic robustness measures: the secrets of robust generalization

H Kim, J Park, Y Choi, J Lee - Advances in Neural …, 2024 - proceedings.neurips.cc
Adversarial training has become the de facto standard method for improving the robustness
of models against adversarial examples. However, robust overfitting remains a significant …

Center-aware adversarial augmentation for single domain generalization

T Chen, M Baktashmotlagh, Z Wang… - Proceedings of the …, 2023 - openaccess.thecvf.com
Domain generalization (DG) aims to learn a model from multiple training (i.e., source)
domains that can generalize well to unseen test (i.e., target) data coming from a …

A concealed poisoning attack to reduce deep neural networks' robustness against adversarial samples

J Zheng, PPK Chan, H Chi, Z He - Information Sciences, 2022 - Elsevier
A poisoning attack method that manipulates the training of a model is easy to detect,
since it degrades the model's general performance. Although a backdoor attack only …