Adversarial robustness of neural networks from the perspective of Lipschitz calculus: A survey

MM Zühlke, D Kudenko - ACM Computing Surveys, 2025 - dl.acm.org
We survey the adversarial robustness of neural networks from the perspective of Lipschitz
calculus in a unifying fashion by expressing models, attacks and safety guarantees—that is …

Chordal and factor-width decompositions for scalable semidefinite and polynomial optimization

Y Zheng, G Fantuzzi, A Papachristodoulou - Annual Reviews in Control, 2021 - Elsevier
Chordal and factor-width decomposition methods for semidefinite programming and
polynomial optimization have recently enabled the analysis and control of large-scale linear …

When deep learning meets polyhedral theory: A survey

J Huchette, G Muñoz, T Serra, C Tsay - arXiv preprint arXiv:2305.00241, 2023 - arxiv.org
In the past decade, deep learning became the prevalent methodology for predictive
modeling thanks to the remarkable accuracy of deep neural networks in tasks such as …

CS-TSSOS: Correlative and term sparsity for large-scale polynomial optimization

J Wang, V Magron, JB Lasserre, NHA Mai - ACM Transactions on …, 2022 - dl.acm.org
This work proposes a new moment-SOS hierarchy, called CS-TSSOS, for solving large-
scale sparse polynomial optimization problems. Its novelty is to exploit simultaneously …

Chordal-TSSOS: a moment-SOS hierarchy that exploits term sparsity with chordal extension

J Wang, V Magron, JB Lasserre - SIAM Journal on Optimization, 2021 - SIAM
This work is a follow-up and a complement to [J. Wang, V. Magron and JB Lasserre, preprint,
arXiv:1912.08899, 2019] where the TSSOS hierarchy was proposed for solving polynomial …

Certified robustness via dynamic margin maximization and improved Lipschitz regularization

M Fazlyab, T Entesari, A Roy… - Advances in Neural …, 2023 - proceedings.neurips.cc
To improve the robustness of deep classifiers against adversarial perturbations, many
approaches have been proposed, such as designing new architectures with better …

Hybrid ISTA: Unfolding ISTA with convergence guarantees using free-form deep neural networks

Z Zheng, W Dai, D Xue, C Li, J Zou… - IEEE Transactions on …, 2022 - ieeexplore.ieee.org
It is promising to solve linear inverse problems by unfolding iterative algorithms (e.g., the iterative
shrinkage-thresholding algorithm (ISTA)) as deep neural networks (DNNs) with learnable …

EMG-based automatic gesture recognition using Lipschitz-regularized neural networks

A Neacşu, JC Pesquet, C Burileanu - ACM Transactions on Intelligent …, 2024 - dl.acm.org
This article introduces a novel approach for building a robust Automatic Gesture Recognition
system based on Surface Electromyographic (sEMG) signals, acquired at the forearm level …

Sparse noncommutative polynomial optimization

I Klep, V Magron, J Povh - Mathematical Programming, 2022 - Springer
This article focuses on optimization of polynomials in noncommuting variables, while taking
into account sparsity in the input data. A converging hierarchy of semidefinite relaxations for …