PRIMA: general and precise neural network certification via scalable convex hull approximations
Formal verification of neural networks is critical for their safe adoption in real-world
applications. However, designing a precise and scalable verifier which can handle different …
Prompt certified machine unlearning with randomized gradient smoothing and quantization
The right to be forgotten calls for efficient machine unlearning techniques that make trained
machine learning models forget a cohort of data. The combination of training and unlearning …
Provable adversarial robustness for group equivariant tasks: Graphs, point clouds, molecules, and more
A machine learning model is traditionally considered robust if its prediction remains (almost)
constant under input perturbations with small norm. However, real-world tasks like molecular …
Mistify: Automating DNN Model Porting for On-Device Inference at the Edge
AI applications powered by deep learning inference are increasingly run natively on edge
devices to provide better interactive user experience. This often necessitates fitting a model …
Fast and precise certification of transformers
We present DeepT, a novel method for certifying Transformer networks based on abstract
interpretation. The key idea behind DeepT is our new Multi-norm Zonotope abstract domain …
Robustness certification for point cloud models
The use of deep 3D point cloud models in safety-critical applications, such as autonomous
driving, dictates the need to certify the robustness of these models to real-world …
DeformRS: Certifying input deformations with randomized smoothing
Deep neural networks are vulnerable to input deformations in the form of vector fields of
pixel displacements and to other parameterized geometric deformations, e.g., translations …
From robustness to explainability and back again
In contrast with ad-hoc methods for eXplainable Artificial Intelligence (XAI), formal
explainability offers important guarantees of rigor. However, formal explainability is hindered …
Invariance-aware randomized smoothing certificates
Building models that comply with the invariances inherent to different domains, such as
invariance under translation or rotation, is a key aspect of applying machine learning to real …
Code-level safety verification for automated driving: A case study
The formal safety analysis of automated driving vehicles poses unique challenges due to
their dynamic operating conditions and significant complexity. This paper presents a case …