Corrective machine unlearning

S Goel, A Prabhu, P Torr, P Kumaraguru… - arXiv preprint arXiv …, 2024 - arxiv.org
Machine Learning models increasingly face data integrity challenges due to the use of large-
scale training datasets drawn from the Internet. We study what model developers can do if …

Removing batch normalization boosts adversarial training

H Wang, A Zhang, S Zheng, X Shi… - … on Machine Learning, 2022 - proceedings.mlr.press
Adversarial training (AT) defends deep neural networks against adversarial attacks. One
challenge that limits its practical application is the performance degradation on clean …

Exploring memorization in adversarial training

Y Dong, K Xu, X Yang, T Pang, Z Deng, H Su… - arXiv preprint arXiv …, 2021 - arxiv.org
Deep learning models have a propensity for fitting the entire training set even with random
labels, which requires memorization of every training sample. In this paper, we explore the …

It is all about data: A survey on the effects of data on adversarial robustness

P Xiong, M Tegegn, JS Sarin, S Pal, J Rubin - ACM Computing Surveys, 2024 - dl.acm.org
Adversarial examples are inputs to machine learning models that an attacker has
intentionally designed to confuse the model into making a mistake. Such examples pose a …

Towards adversarial evaluations for inexact machine unlearning

S Goel, A Prabhu, A Sanyal, SN Lim, P Torr… - arXiv preprint arXiv …, 2022 - arxiv.org
Machine Learning models face increased concerns regarding the storage of personal user
data and adverse impacts of corrupted data like backdoors or systematic bias. Machine …

How unfair is private learning?

A Sanyal, Y Hu, F Yang - Uncertainty in Artificial Intelligence, 2022 - proceedings.mlr.press
As machine learning algorithms are deployed on sensitive data in critical decision making
processes, it is becoming increasingly important that they are also private and fair. In this …

Generalization Ability of Wide Neural Networks on $\mathbb{R}$

J Lai, M Xu, R Chen, Q Lin - arXiv preprint arXiv:2302.05933, 2023 - arxiv.org
We perform a study on the generalization ability of the wide two-layer ReLU neural network
on $\mathbb{R}$. We first establish some spectral properties of the neural tangent kernel …

Provable tradeoffs in adversarially robust classification

E Dobriban, H Hassani, D Hong… - IEEE Transactions on …, 2023 - ieeexplore.ieee.org
It is well known that machine learning methods can be vulnerable to adversarially-chosen
perturbations of their inputs. Despite significant progress in the area, foundational open …

Attack can benefit: An adversarial approach to recognizing facial expressions under noisy annotations

J Zheng, B Li, SC Zhang, S Wu, L Cao… - Proceedings of the AAAI …, 2023 - ojs.aaai.org
Real-world Facial Expression Recognition (FER) datasets usually exhibit
complex scenarios with coupled noisy annotations and imbalanced class distribution …

Benign overfitting in adversarial training of neural networks

Y Wang, K Zhang, R Arora - Forty-first International Conference on …, 2024 - openreview.net
Benign overfitting is the phenomenon wherein none of the predictors in the hypothesis class
can achieve perfect accuracy (i.e., the non-realizable or noisy setting), but a model that …