A review of formal methods applied to machine learning

C Urban, A Miné - arXiv preprint arXiv:2104.02466, 2021 - arxiv.org
We review state-of-the-art formal methods applied to the emerging field of the verification of
machine learning systems. Formal methods can provide rigorous correctness guarantees on …

An abstract domain for certifying neural networks

G Singh, T Gehr, M Püschel, M Vechev - Proceedings of the ACM on …, 2019 - dl.acm.org
We present a novel method for scalable and precise certification of deep neural networks.
The key technical insight behind our approach is a new abstract domain which combines …

AI²: Safety and robustness certification of neural networks with abstract interpretation

T Gehr, M Mirman, D Drachsler-Cohen… - … IEEE symposium on …, 2018 - ieeexplore.ieee.org
We present AI², the first sound and scalable analyzer for deep neural networks. Based on
overapproximation, AI² can automatically prove safety properties (e.g., robustness) of …
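The core idea behind these abstract-interpretation analyzers is to push a sound overapproximation of a perturbation region through the network and check the property on the resulting output bounds. The sketch below is a minimal illustration using the simple interval (box) domain, not the authors' AI² or DeepZ implementations (which use richer domains such as zonotopes); the network weights and the `certify` helper are purely illustrative.

```python
# Minimal sketch of abstract-interpretation-based robustness certification
# using the interval (box) abstract domain. Illustrative only: real tools
# (AI2, DeepZ) use more precise domains such as zonotopes.
import numpy as np

def affine_bounds(lo, hi, W, b):
    """Sound interval propagation through the affine map x -> W @ x + b."""
    center, radius = (lo + hi) / 2.0, (hi - lo) / 2.0
    c = W @ center + b
    r = np.abs(W) @ radius          # worst-case spread per output neuron
    return c - r, c + r

def relu_bounds(lo, hi):
    """ReLU is monotone, so it maps interval bounds elementwise."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

def certify(layers, x, eps, target):
    """Return True if every point in the L-infinity eps-ball around x is
    classified as `target`. Sound but incomplete: a False result does not
    prove the network is non-robust."""
    lo, hi = x - eps, x + eps
    for i, (W, b) in enumerate(layers):
        lo, hi = affine_bounds(lo, hi, W, b)
        if i < len(layers) - 1:     # ReLU on hidden layers only
            lo, hi = relu_bounds(lo, hi)
    # Robust if the target logit's lower bound beats every other
    # logit's upper bound.
    return all(lo[target] > hi[j] for j in range(len(lo)) if j != target)

# Toy 2-2-2 ReLU network (weights are made up for illustration).
layers = [(np.array([[1.0, -1.0], [0.5, 1.0]]), np.zeros(2)),
          (np.array([[2.0, 0.0], [0.0, 1.0]]), np.array([0.0, -1.0]))]
x = np.array([1.0, 0.2])
print(certify(layers, x, eps=0.05, target=0))   # True
```

The interval domain loses the correlations between neurons that zonotope-based domains retain, which is exactly the precision/scalability trade-off these papers study.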

Fast and effective robustness certification

G Singh, T Gehr, M Mirman… - Advances in neural …, 2018 - proceedings.neurips.cc
We present a new method and system, called DeepZ, for certifying neural network
robustness based on abstract interpretation. Compared to state-of-the-art automated …

Differentiable abstract interpretation for provably robust neural networks

M Mirman, T Gehr, M Vechev - International Conference on …, 2018 - proceedings.mlr.press
We introduce a scalable method for training robust neural networks based on abstract
interpretation. We present several abstract transformers which balance efficiency with …

Adversarial training and provable defenses: Bridging the gap

M Balunović, M Vechev - 8th International Conference …, 2020 - research-collection.ethz.ch
We present COLT, a new method to train neural networks based on a novel combination of
adversarial training and provable defenses. The key idea is to model neural network training …

Boosting robustness certification of neural networks

G Singh, T Gehr, M Püschel… - International conference on …, 2019 - openreview.net
We present a novel approach for the certification of neural networks against adversarial
perturbations which combines scalable overapproximation methods with precise (mixed …

A review of abstraction methods toward verifying neural networks

F Boudardara, A Boussif, PJ Meyer… - ACM Transactions on …, 2024 - dl.acm.org
Neural networks, as a machine learning technique, are increasingly deployed in various
domains. Despite their performance and their continuous improvement, the deployment of …

Optimization and abstraction: a synergistic approach for analyzing neural network robustness

G Anderson, S Pailoor, I Dillig… - Proceedings of the 40th …, 2019 - dl.acm.org
In recent years, the notion of local robustness (or robustness for short) has emerged as a
desirable property of deep neural networks. Intuitively, robustness means that small …

Analyzing deep neural networks with symbolic propagation: Towards higher precision and faster verification

J Li, J Liu, P Yang, L Chen, X Huang… - Static Analysis: 26th …, 2019 - Springer
Deep neural networks (DNNs) have been shown to lack robustness, as they are vulnerable
to small perturbations on the inputs, which has led to safety concerns about applying DNNs to …