PRIMA: general and precise neural network certification via scalable convex hull approximations

MN Müller, G Makarchuk, G Singh, M Püschel… - Proceedings of the …, 2022 - dl.acm.org
Formal verification of neural networks is critical for their safe adoption in real-world
applications. However, designing a precise and scalable verifier which can handle different …

Prompt certified machine unlearning with randomized gradient smoothing and quantization

Z Zhang, Y Zhou, X Zhao, T Che… - Advances in Neural …, 2022 - proceedings.neurips.cc
The right to be forgotten calls for efficient machine unlearning techniques that make trained
machine learning models forget a cohort of data. The combination of training and unlearning …

Provable adversarial robustness for group equivariant tasks: Graphs, point clouds, molecules, and more

J Schuchardt, Y Scholten… - Advances in Neural …, 2023 - proceedings.neurips.cc
A machine learning model is traditionally considered robust if its prediction remains (almost)
constant under input perturbations with small norm. However, real-world tasks like molecular …

Mistify: Automating DNN Model Porting for On-Device Inference at the Edge

P Guo, B Hu, W Hu - 18th USENIX Symposium on Networked Systems …, 2021 - usenix.org
AI applications powered by deep learning inference are increasingly run natively on edge
devices to provide better interactive user experience. This often necessitates fitting a model …

Fast and precise certification of transformers

G Bonaert, DI Dimitrov, M Baader… - Proceedings of the 42nd …, 2021 - dl.acm.org
We present DeepT, a novel method for certifying Transformer networks based on abstract
interpretation. The key idea behind DeepT is our new Multi-norm Zonotope abstract domain …

Robustness certification for point cloud models

T Lorenz, A Ruoss, M Balunović… - Proceedings of the …, 2021 - openaccess.thecvf.com
The use of deep 3D point cloud models in safety-critical applications, such as autonomous
driving, dictates the need to certify the robustness of these models to real-world …

DeformRS: Certifying input deformations with randomized smoothing

M Alfarra, A Bibi, N Khan, PHS Torr… - Proceedings of the AAAI …, 2022 - ojs.aaai.org
Deep neural networks are vulnerable to input deformations in the form of vector fields of
pixel displacements and to other parameterized geometric deformations, e.g., translations …

From robustness to explainability and back again

X Huang, J Marques-Silva - arXiv preprint arXiv:2306.03048, 2023 - arxiv.org
In contrast with ad-hoc methods for eXplainable Artificial Intelligence (XAI), formal
explainability offers important guarantees of rigor. However, formal explainability is hindered …

Invariance-aware randomized smoothing certificates

J Schuchardt, S Günnemann - Advances in Neural …, 2022 - proceedings.neurips.cc
Building models that comply with the invariances inherent to different domains, such as
invariance under translation or rotation, is a key aspect of applying machine learning to real …

Code-level safety verification for automated driving: A case study

V Nenchev, C Imrie, S Gerasimou… - … Symposium on Formal …, 2024 - Springer
The formal safety analysis of automated driving vehicles poses unique challenges due to
their dynamic operating conditions and significant complexity. This paper presents a case …