Removing batch normalization boosts adversarial training
Adversarial training (AT) defends deep neural networks against adversarial attacks. One
challenge that limits its practical application is the performance degradation on clean …
Graph mixup with soft alignments
We study graph data augmentation by mixup, which has been used successfully on images.
A key operation of mixup is to compute a convex combination of a pair of inputs. This …
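The convex-combination step the snippet refers to can be sketched as follows; this is the standard mixup recipe (Beta(α, α) mixing coefficient, soft labels), not code from the paper itself:

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Convex combination of a pair of inputs and their (one-hot) labels."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)       # mixing coefficient in (0, 1)
    x = lam * x1 + (1.0 - lam) * x2    # interpolated input
    y = lam * y1 + (1.0 - lam) * y2    # interpolated soft label
    return x, y
```

For graphs, the difficulty the paper addresses is that two graphs have no natural node correspondence, so this elementwise interpolation is not directly applicable without an alignment.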
Model orthogonalization: Class distance hardening in neural networks for better security
The distance between two classes for a deep learning classifier can be measured by the
level of difficulty in flipping all (or a majority of) samples in a class to the other. The class …
A unified understanding of deep NLP models for text classification
The rapid development of deep natural language processing (NLP) models for text
classification has led to an urgent need for a unified understanding of these models …
Test accuracy vs. generalization gap: Model selection in NLP without accessing training or testing data
Selecting suitable architecture parameters and training hyperparameters is essential for
enhancing machine learning (ML) model performance. Several recent empirical studies …
Noisy feature mixup
We introduce Noisy Feature Mixup (NFM), an inexpensive yet effective method for data
augmentation that combines the best of interpolation-based training and noise injection …
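The combination of interpolation and noise injection can be sketched as below: interpolate a pair of feature vectors, then perturb the mixture with additive and multiplicative noise. Parameter names and the Gaussian noise model are illustrative assumptions, not taken verbatim from the paper:

```python
import numpy as np

def noisy_feature_mixup(x1, x2, alpha=0.2, add_std=0.1, mult_std=0.1, rng=None):
    """Mixup-style interpolation followed by noise injection (illustrative sketch)."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)                          # mixing coefficient
    mixed = lam * x1 + (1.0 - lam) * x2                   # interpolated features
    add_noise = add_std * rng.standard_normal(mixed.shape)       # additive noise
    mult_noise = 1.0 + mult_std * rng.standard_normal(mixed.shape)  # multiplicative noise
    return mult_noise * mixed + add_noise
```

Setting both noise scales to zero recovers plain mixup, which is why the method is described as combining the two schemes.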
Robustifying models against adversarial attacks by langevin dynamics
Adversarial attacks on deep learning models have compromised their performance
considerably. As remedies, a number of defense methods were proposed, which however …
Fantastic robustness measures: the secrets of robust generalization
Adversarial training has become the de-facto standard method for improving the robustness
of models against adversarial examples. However, robust overfitting remains a significant …
Center-aware adversarial augmentation for single domain generalization
Domain generalization (DG) aims to learn a model from multiple training (i.e.,
source) domains that can generalize well to the unseen test (i.e., target) data coming from a …
A concealed poisoning attack to reduce deep neural networks' robustness against adversarial samples
A poisoning attack method manipulating the training of a model is easily detected,
since the general performance of the model is downgraded. Although a backdoor attack only …