“Real attackers don't compute gradients”: Bridging the gap between adversarial ML research and practice

G Apruzzese, HS Anderson, S Dambra… - … IEEE conference on …, 2023 - ieeexplore.ieee.org
Recent years have seen a proliferation of research on adversarial machine learning.
Numerous papers demonstrate powerful algorithmic attacks against a wide variety of …
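
For context on what "computing gradients" means here: the algorithmic attacks this paper contrasts with practice are typically gradient-based, in the spirit of FGSM (Goodfellow et al.). A minimal sketch, assuming a PyTorch classifier; the model, input, and perturbation budget are illustrative placeholders, not artifacts of the cited work:

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=8 / 255):
    """One signed-gradient step that pushes input x toward higher loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Computing x.grad requires white-box access -- exactly the
    # assumption the cited paper argues real attackers rarely satisfy.
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```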

A survey of bit-flip attacks on deep neural network and corresponding defense methods

C Qian, M Zhang, Y Nie, S Lu, H Cao - Electronics, 2023 - mdpi.com
As machine learning technology has made great progress in recent years, deep neural
networks have become widely used in many scenarios, including security-critical ones, which …

Microarchitectural attacks in heterogeneous systems: A survey

H Naghibijouybari, EM Koruyeh… - ACM Computing …, 2022 - dl.acm.org
With the proliferation of hardware accelerators and the predicted continued
increase in the heterogeneity of future computing systems, it is necessary to understand the …

Aegis: Mitigating targeted bit-flip attacks against deep neural networks

J Wang, Z Zhang, M Wang, H Qiu, T Zhang… - 32nd USENIX Security …, 2023 - usenix.org
Bit-flip attacks (BFAs), in which an adversary tampers with a small number of model
parameter bits to break the integrity of DNNs, have recently attracted substantial attention. To …
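
To make the threat concrete: flipping a single high-order exponent bit of a float32 parameter can change it by dozens of orders of magnitude. A toy sketch assuming a contiguous NumPy weight array; real BFAs flip bits in DRAM (e.g., via Rowhammer), not through NumPy:

```python
import numpy as np

def flip_bit(weights: np.ndarray, index: int, bit: int) -> None:
    """Flip `bit` (0 = LSB) of the float32 weight at flat position `index`."""
    view = weights.reshape(-1).view(np.uint32)  # reinterpret the raw bits
    view[index] ^= np.uint32(1) << np.uint32(bit)

w = np.full(4, 0.01, dtype=np.float32)
flip_bit(w, index=0, bit=30)  # flip the top exponent bit
print(w[0])                   # 0.01 becomes roughly 3.4e+36
```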

" Get in Researchers; We're Measuring Reproducibility": A Reproducibility Study of Machine Learning Papers in Tier 1 Security Conferences

D Olszewski, A Lu, C Stillman, K Warren… - Proceedings of the …, 2023 - dl.acm.org
Reproducibility is crucial to the advancement of science; it strengthens confidence in
seemingly contradictory results and expands the boundaries of known discoveries …

Forget and Rewire: Enhancing the Resilience of Transformer-based Models against Bit-Flip Attacks

N Nazari, HM Makrani, C Fang, H Sayadi… - 33rd USENIX Security …, 2024 - usenix.org
Bit-Flip Attacks (BFAs) involve adversaries manipulating a model's parameter bits to
significantly undermine its accuracy. They typically target the most vulnerable parameters …
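
The "most vulnerable parameters" are commonly located via the loss gradient. A simplified sketch of that ranking step, not the cited paper's exact procedure (progressive bit-search attacks are more involved, searching bit positions within the candidates this returns):

```python
import torch
import torch.nn.functional as F

def most_vulnerable_weights(model, x, y, k=10):
    """Rank parameters by |dL/dw| as a crude proxy for bit-flip impact."""
    loss = F.cross_entropy(model(x), y)
    grads = torch.autograd.grad(loss, list(model.parameters()))
    flat = torch.cat([g.abs().flatten() for g in grads])
    # The k weights whose perturbation moves the loss the most.
    return torch.topk(flat, k).indices
```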

NNSplitter: an active defense solution for DNN model via automated weight obfuscation

T Zhou, Y Luo, S Ren, X Xu - International Conference on …, 2023 - proceedings.mlr.press
As a type of valuable intellectual property (IP), deep neural network (DNN) models have
been protected by techniques like watermarking. However, such passive model protection …
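
A heavily simplified sketch of the active-defense idea: ship the model with a few weights corrupted, and keep the corrections as a small secret that only authorized users hold. NNSplitter's actual weight selection and secret storage (e.g., in a TEE) are far more sophisticated; the function names below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def obfuscate(weights: np.ndarray, n: int = 8):
    """Corrupt n weights; return (indices, true values) as the secret."""
    idx = rng.choice(weights.size, size=n, replace=False)
    secret = weights.flat[idx]              # true values, kept private
    weights.flat[idx] = rng.normal(size=n)  # pirated copies run on these
    return idx, secret

def restore(weights: np.ndarray, idx, secret) -> None:
    """Authorized inference path: undo the obfuscation with the secret."""
    weights.flat[idx] = secret
```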

One-bit flip is all you need: When bit-flip attack meets model training

J Dong, H Qiu, Y Li, T Zhang, Y Li… - Proceedings of the …, 2023 - openaccess.thecvf.com
Deep neural networks (DNNs) are widely deployed on real-world devices, and their
security has attracted great attention from researchers. Recently, a new …

DeepStrike: Remotely-guided fault injection attacks on DNN accelerator in cloud-FPGA

Y Luo, C Gongye, Y Fei, X Xu - 2021 58th ACM/IEEE Design …, 2021 - ieeexplore.ieee.org
As field-programmable gate arrays (FPGAs) are widely adopted in clouds to accelerate
deep neural networks (DNNs), such virtualization environments have posed many new …

Neighbors from hell: Voltage attacks against deep learning accelerators on multi-tenant FPGAs

A Boutros, M Hall, N Papernot… - … Conference on Field …, 2020 - ieeexplore.ieee.org
Field-programmable gate arrays (FPGAs) are becoming widely used accelerators for a
myriad of datacenter applications due to their flexibility and energy efficiency. Among these …