Connecting certified and adversarial training

Y Mao, M Müller, M Fischer… - Advances in Neural …, 2023 - proceedings.neurips.cc
Training certifiably robust neural networks remains a notoriously hard problem. While
adversarial training optimizes under-approximations of the worst-case loss, which leads to …

CTBench: A library and benchmark for certified training

Y Mao, S Balauca, M Vechev - arXiv preprint arXiv:2406.04848, 2024 - arxiv.org
Training certifiably robust neural networks is an important but challenging task. While many
algorithms for (deterministic) certified training have been proposed, they are often evaluated …

Expressive losses for verified robustness via convex combinations

A De Palma, R Bunel, K Dvijotham, MP Kumar… - arXiv preprint arXiv …, 2023 - arxiv.org
In order to train networks for verified adversarial robustness, it is common to over-
approximate the worst-case loss over perturbation regions, resulting in networks that attain …

Expressivity of ReLU-networks under convex relaxations

M Baader, MN Müller, Y Mao, M Vechev - arXiv preprint arXiv:2311.04015, 2023 - arxiv.org
Convex relaxations are a key component of training and certifying provably safe neural
networks. However, despite substantial progress, a wide and poorly understood accuracy …
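The snippet above concerns convex relaxations of ReLU activations. As background (this is the standard "triangle" relaxation, not a method claimed by this paper): for a pre-activation input known to lie in $[l, u]$ with $l < 0 < u$, $y = \mathrm{ReLU}(x)$ is bounded above by the chord through $(l, 0)$ and $(u, u)$.

```python
# Standard "triangle" convex relaxation of ReLU: a minimal sketch.
# For x in [l, u] with l < 0 < u, ReLU(x) <= a*x + b, where the line
# a*x + b passes through (l, 0) and (u, u).

def triangle_relaxation(l: float, u: float) -> tuple[float, float]:
    """Return slope a and intercept b of the chord upper-bounding ReLU on [l, u]."""
    assert l < 0.0 < u, "the unstable case: interval straddles zero"
    a = u / (u - l)   # slope of the chord
    b = -a * l        # intercept so the line passes through (l, 0)
    return a, b

# Example: on [-1, 1] the chord is y = 0.5*x + 0.5, which touches
# ReLU at both endpoints: 0 at x=-1 and 1 at x=1.
a, b = triangle_relaxation(-1.0, 1.0)
```

The looseness of this single-neuron bound relative to the true (non-convex) image of the layer is exactly the kind of expressivity gap the entry above studies.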

Provable Robustness of (Graph) Neural Networks Against Data Poisoning and Backdoor Attacks

L Gosch, M Sabanayagam, D Ghoshdastidar… - arXiv preprint arXiv …, 2024 - arxiv.org
Generalization of machine learning models can be severely compromised by data
poisoning, where adversarial changes are applied to the training data. This vulnerability has …

Improve certified training with signal-to-noise ratio loss to decrease neuron variance and increase neuron stability

T Wei, Z Wang, P Niu, A Abuduweili… - … on Machine Learning …, 2024 - openreview.net
Neural network robustness is a major concern in safety-critical applications. Certified
robustness provides a reliable lower bound on worst-case robustness, and certified training …

Defending against Adversarial Malware Attacks on ML-based Android Malware Detection Systems

P He, L Cavallaro, S Ji - arXiv preprint arXiv:2501.13782, 2025 - arxiv.org
Android malware presents a persistent threat to users' privacy and data integrity. To combat
this, researchers have proposed machine learning-based (ML-based) Android malware …

Multi-Neuron Unleashes Expressivity of ReLU Networks Under Convex Relaxation

Y Mao, Y Zhang, M Vechev - arXiv preprint arXiv:2410.06816, 2024 - arxiv.org
Neural network certification has established itself as a crucial tool for ensuring the robustness of
neural networks. Certification methods typically rely on convex relaxations of the feasible …

Average Certified Radius is a Poor Metric for Randomized Smoothing

C Sun, Y Mao, MN Müller, M Vechev - arXiv preprint arXiv:2410.06895, 2024 - arxiv.org
Randomized smoothing is a popular approach for providing certified robustness guarantees
against adversarial attacks, and has become a very active area of research. Over the past …
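For context on the "certified radius" this entry critiques: the standard per-sample $\ell_2$ radius for a Gaussian-smoothed classifier (due to Cohen et al., not to this paper) is $R = \sigma \cdot \Phi^{-1}(\underline{p_A})$, where $\underline{p_A}$ is a lower confidence bound on the top class's probability under noise. A minimal sketch:

```python
# Per-sample certified l2 radius for randomized smoothing, a minimal
# sketch of the standard Cohen et al.-style bound (background only;
# not the method of the entry above).
from statistics import NormalDist


def certified_radius(p_a_lower: float, sigma: float) -> float:
    """R = sigma * Phi^{-1}(p_A_lower), valid only when p_A_lower > 1/2;
    otherwise the smoothed classifier abstains and certifies nothing."""
    if p_a_lower <= 0.5:
        return 0.0  # abstain: no certificate
    return sigma * NormalDist().inv_cdf(p_a_lower)
```

Averaging this quantity over a test set yields the "average certified radius" (ACR) metric whose shortcomings the entry above examines.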

Make Interval Bound Propagation great again

P Krukowski, D Wilczak, J Tabor, A Bielawska… - arXiv preprint arXiv …, 2024 - arxiv.org
In various scenarios motivated by real life, such as medical data analysis, autonomous
driving, and adversarial training, we are interested in robust deep networks. A network is …
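As background for this entry (a generic sketch of interval bound propagation, not this paper's contribution): IBP pushes elementwise lower/upper bounds through each layer, choosing the worst-case input endpoint per weight sign for affine layers and clamping at zero for ReLU.

```python
# Interval bound propagation (IBP) through an affine layer and a ReLU:
# a minimal, dependency-free sketch for illustration.

def affine_bounds(lower, upper, weights, bias):
    """Bound each output w.x + b over the box [lower, upper]:
    a positive weight takes the input's lower end for the output's
    lower bound (and vice versa for negative weights)."""
    out_lo, out_hi = [], []
    for w_row, b in zip(weights, bias):
        lo = b + sum(w * (l if w >= 0 else u)
                     for w, l, u in zip(w_row, lower, upper))
        hi = b + sum(w * (u if w >= 0 else l)
                     for w, l, u in zip(w_row, lower, upper))
        out_lo.append(lo)
        out_hi.append(hi)
    return out_lo, out_hi


def relu_bounds(lower, upper):
    """ReLU is monotone, so it maps interval endpoints directly."""
    return [max(0.0, l) for l in lower], [max(0.0, u) for u in upper]
```

For example, the single output $x_1 - x_2$ over the box $[0,1]^2$ gets bounds $[-1, 1]$, which ReLU tightens to $[0, 1]$. Making these bounds tight enough to train with is the focus of the entry above.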