Stable neural ODE with Lyapunov-stable equilibrium points for defending against adversarial attacks

Q Kang, Y Song, Q Ding… - Advances in Neural …, 2021 - proceedings.neurips.cc
Deep neural networks (DNNs) are well-known to be vulnerable to adversarial attacks, where
malicious human-imperceptible perturbations are included in the input to the deep network …

AI robustness: a human-centered perspective on technological challenges and opportunities

A Tocchetti, L Corti, A Balayn, M Yurrita… - ACM Computing …, 2022 - dl.acm.org
Despite the impressive performance of Artificial Intelligence (AI) systems, their robustness
remains elusive and constitutes a key issue that impedes large-scale adoption. Besides …

A dynamical system perspective for Lipschitz neural networks

L Meunier, BJ Delattre, A Araujo… - … on Machine Learning, 2022 - proceedings.mlr.press
The Lipschitz constant of neural networks has been established as a key quantity to enforce
the robustness to adversarial examples. In this paper, we tackle the problem of building $1 …

A novel time-delay neural grey model and its applications

D Lei, T Li, L Zhang, Q Liu, W Li - Expert Systems with Applications, 2024 - Elsevier
Grey system theory uses differential equations to model small-sample time series and forecast short-term future trends. Since the most classical GM(1,1) …

Defending against adversarial attacks via neural dynamic system

X Li, Z Xin, W Liu - Advances in Neural Information …, 2022 - proceedings.neurips.cc
Although deep neural networks (DNNs) have achieved great success, their applications in
safety-critical areas are hindered by their vulnerability to adversarial attacks. Some …

TERD: A unified framework for safeguarding diffusion models against backdoors

Y Mo, H Huang, M Li, A Li, Y Wang - arXiv preprint arXiv:2409.05294, 2024 - arxiv.org
Diffusion models have achieved notable success in image generation, but they remain
highly vulnerable to backdoor attacks, which compromise their integrity by producing …

Designing Universally-Approximating Deep Neural Networks: A First-Order Optimization Approach

Z Wu, M Xiao, C Fang, Z Lin - IEEE Transactions on Pattern …, 2024 - ieeexplore.ieee.org
Universal approximation capability, also referred to as universality, is an important property
of deep neural networks, endowing them with the potency to accurately represent the …

Adversarially robust out-of-distribution detection using Lyapunov-stabilized embeddings

H Mirzaei, MW Mathis - arXiv preprint arXiv:2410.10744, 2024 - arxiv.org
Despite significant advancements in out-of-distribution (OOD) detection, existing methods
still struggle to maintain robustness against adversarial attacks, compromising their …

ZeroFake: Zero-Shot Detection of Fake Images Generated and Edited by Text-to-Image Generation Models

Z Sha, Y Tan, M Li, M Backes, Y Zhang - … of the 2024 on ACM SIGSAC …, 2024 - dl.acm.org
Text-to-image generation models have attracted significant interest from both the academic
and industrial communities. These models can generate images based on the given …

Residual network with self-adaptive time step size

X Li, X Zou, W Liu - Pattern Recognition, 2025 - Elsevier
Residual Networks (ResNets) are pivotal in machine learning. The connection
between ResNets and ordinary differential equations (ODEs) has inspired enhancements of …