SoK: Certified robustness for deep neural networks

L Li, T Xie, B Li - 2023 IEEE Symposium on Security and Privacy …, 2023 - ieeexplore.ieee.org
Great advances in deep neural networks (DNNs) have led to state-of-the-art performance on
a wide range of tasks. However, recent studies have shown that DNNs are vulnerable to …

Direct parameterization of lipschitz-bounded deep networks

R Wang, I Manchester - International Conference on …, 2023 - proceedings.mlr.press
This paper introduces a new parameterization of deep neural networks (both fully-connected
and convolutional) with guaranteed $\ell^2$ Lipschitz bounds, i.e., limited sensitivity to input …
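In this context, a guaranteed $\ell^2$ Lipschitz bound $L$ means $\|f(x) - f(y)\|_2 \le L \|x - y\|_2$ for all inputs $x, y$. As a rough illustration only (not the paper's direct parameterization; helper names are hypothetical), the classic way to enforce such a bound per layer is spectral normalization: rescale each weight matrix so its largest singular value is at most the target constant.

```python
# Minimal sketch, assuming a plain dense layer x -> W @ x; helper names are hypothetical.
import numpy as np

def spectral_norm(W: np.ndarray, n_iter: int = 50) -> float:
    """Estimate the largest singular value of W by power iteration."""
    v = np.random.default_rng(0).normal(size=W.shape[1])
    v /= np.linalg.norm(v)
    u = W @ v
    for _ in range(n_iter):
        u = W @ v
        u /= np.linalg.norm(u)
        v = W.T @ u
        v /= np.linalg.norm(v)
    return float(u @ W @ v)

def lipschitz_normalize(W: np.ndarray, target: float = 1.0) -> np.ndarray:
    """Rescale W so that x -> W @ x is at most `target`-Lipschitz in the l2 norm."""
    return W * (target / max(spectral_norm(W), 1e-12))

W = np.random.default_rng(1).normal(size=(64, 128))
W_norm = lipschitz_normalize(W)
print(spectral_norm(W), "->", spectral_norm(W_norm))  # second value is ~1.0
```

Composing such layers with 1-Lipschitz activations (e.g., ReLU) keeps the end-to-end bound at the product of the per-layer constants; the paper instead constructs a parameterization for which the bound holds by construction rather than by post-hoc normalization.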

A unified algebraic perspective on lipschitz neural networks

A Araujo, A Havens, B Delattre, A Allauzen… - arXiv preprint arXiv …, 2023 - arxiv.org
Important research efforts have focused on the design and training of neural networks with a
controlled Lipschitz constant. The goal is to increase and sometimes guarantee the …

1-Lipschitz Layers Compared: Memory, Speed, and Certifiable Robustness

B Prach, F Brau, G Buttazzo… - Proceedings of the …, 2024 - openaccess.thecvf.com
The robustness of neural networks against input perturbations with bounded magnitude
represents a serious concern in the deployment of deep learning models in safety-critical …
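To make the link between 1-Lipschitz layers and certifiable robustness concrete, here is a minimal, hedged sketch (an assumed toy setup, not the paper's benchmark code): if the classifier's logit map is 1-Lipschitz in $\ell^2$, its argmax prediction provably cannot change under any perturbation whose norm is below the top-two logit margin divided by $\sqrt{2}$.

```python
# Minimal sketch, assuming the network's logit map is 1-Lipschitz in the l2 norm.
import numpy as np

def certified_radius(logits: np.ndarray, lipschitz_const: float = 1.0) -> float:
    """l2 radius within which the argmax prediction is provably unchanged."""
    top, runner_up = np.sort(logits)[::-1][:2]
    return (top - runner_up) / (np.sqrt(2.0) * lipschitz_const)

logits = np.array([4.2, 1.1, 0.3])   # hypothetical outputs of a 1-Lipschitz net
print(certified_radius(logits))      # ~2.19: robust to any ||delta||_2 < 2.19
```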

Unlocking deterministic robustness certification on ImageNet

K Hu, A Zou, Z Wang, K Leino… - Advances in Neural …, 2024 - proceedings.neurips.cc
Despite the promise of Lipschitz-based methods for provably-robust deep learning with
deterministic guarantees, current state-of-the-art results are limited to feed-forward …

Novel quadratic constraints for extending LipSDP beyond slope-restricted activations

P Pauli, A Havens, A Araujo, S Garg, F Khorrami… - arXiv preprint arXiv …, 2024 - arxiv.org
Recently, semidefinite programming (SDP) techniques have shown great promise in
providing accurate Lipschitz bounds for neural networks. Specifically, the LipSDP approach …
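For intuition on what such SDP-based bounds improve upon, here is a minimal, hedged sketch (toy weights, not the paper's formulation): the product of per-layer spectral norms is a valid $\ell^2$ Lipschitz upper bound, since ReLU-type activations are 1-Lipschitz, but it is typically very loose; LipSDP-style methods instead solve a semidefinite program that exploits the activations' slope restriction to certify a tighter constant.

```python
# Minimal sketch, assuming a feed-forward net with 1-Lipschitz activations; toy weights.
import numpy as np

def naive_lipschitz_bound(weights: list[np.ndarray]) -> float:
    """Product of per-layer spectral norms: a valid but usually loose upper bound."""
    return float(np.prod([np.linalg.norm(W, ord=2) for W in weights]))

rng = np.random.default_rng(0)
weights = [rng.normal(size=(32, 16)) / 8, rng.normal(size=(10, 32)) / 8]
print(naive_lipschitz_bound(weights))  # certified upper bound; SDP methods tighten it
```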

Certified robust models with slack control and large Lipschitz constants

M Losch, D Stutz, B Schiele, M Fritz - DAGM German Conference on …, 2023 - Springer
Despite recent success, state-of-the-art learning-based models remain highly vulnerable to
input changes such as adversarial examples. In order to obtain certifiable robustness …

Raising the bar for certified adversarial robustness with diffusion models

T Altstidl, D Dobre, B Eskofier, G Gidel… - arXiv preprint arXiv …, 2023 - arxiv.org
Certified defenses against adversarial attacks offer formal guarantees on the robustness of a
model, making them more reliable than empirical methods such as adversarial training …

A recipe for improved certifiable robustness: Capacity and data

K Hu, K Leino, Z Wang, M Fredrikson - arXiv preprint arXiv:2310.02513, 2023 - arxiv.org
A key challenge, supported both theoretically and empirically, is that robustness demands
greater network capacity and more data than standard training. However, effectively adding …

Towards better certified segmentation via diffusion models

O Laousy, A Araujo, G Chassagnon, MP Revel… - arXiv preprint arXiv …, 2023 - arxiv.org
The robustness of image segmentation has been an important research topic in the past few
years as segmentation models have reached production-level accuracy. However, like …