AS2T: Arbitrary Source-To-Target Adversarial Attack on Speaker Recognition Systems
Recent work has illuminated the vulnerability of speaker recognition systems (SRSs) to
adversarial attacks, raising significant security concerns in deploying SRSs. However, they …
Advanced evasion attacks and mitigations on practical ML‐based phishing website classifiers
Machine learning (ML)-based classifiers are vulnerable to evasion attacks, as
shown by recent attacks. However, there is a lack of systematic study of evasion attacks on …
Neuro-symbolic verification of deep neural networks
Formal verification has emerged as a powerful approach to ensure the safety and reliability
of deep neural networks. However, current verification tools are limited to only a handful of …
QVIP: an ILP-based formal verification approach for quantized neural networks
Deep learning has become a promising programming paradigm in software development,
owing to its surprising performance in solving many challenging tasks. Deep neural …
Attack as defense: Characterizing adversarial examples using robustness
As a new programming paradigm, deep learning has expanded its application to many real-
world problems. At the same time, deep learning-based software is found to be vulnerable …
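The attack-as-defense line of work turns adversarial attacks into a measurement tool: inputs that are already adversarial tend to lie close to a decision boundary, so a fixed-step attack flips their label with noticeably less effort than it needs for benign inputs. Below is a minimal Python/PyTorch sketch of that "attack cost as a detection score" idea; the toy model, step size, and threshold are illustrative placeholders of mine, not the configuration used in the paper.

import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy binary classifier on 20-dimensional inputs (placeholder model).
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2)).eval()

def attack_cost(x, eps=0.01, max_steps=50):
    """Number of FGSM-style steps needed to flip the predicted label of x
    (returns max_steps if the prediction never changes within the budget)."""
    orig_label = model(x).argmax(dim=1)
    x_adv = x.clone().detach()
    for step in range(1, max_steps + 1):
        x_adv.requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x_adv), orig_label)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + eps * grad.sign()
        if model(x_adv).argmax(dim=1).item() != orig_label.item():
            return step
    return max_steps

# A low attack cost marks the input as suspicious; in practice the threshold
# would be calibrated on a held-out set of benign examples.
x = torch.randn(1, 20)
cost = attack_cost(x)
print("attack cost:", cost, "->", "flagged" if cost < 5 else "accepted")
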
BDD4BNN: a BDD-based quantitative analysis framework for binarized neural networks
Verifying and explaining the behavior of neural networks is becoming increasingly
important, especially when they are deployed in safety-critical applications. In this paper, we …
QEBVerif: Quantization error bound verification of neural networks
To alleviate the practical constraints for deploying deep neural networks (DNNs) on edge
devices, quantization is widely regarded as one promising technique. It reduces the …
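QEBVerif targets a sound bound on how far a quantized network's outputs can drift from its full-precision counterpart. As a point of contrast, the sketch below (an assumption-laden illustration, not the paper's algorithm) only samples that deviation for a toy NumPy network with 8-bit weight quantization, which yields an empirical lower bound on the worst case rather than a verified upper bound.

import numpy as np

rng = np.random.default_rng(0)

# Toy one-hidden-layer ReLU network: random full-precision weights.
W1, b1 = rng.normal(size=(32, 16)), rng.normal(size=32)
W2, b2 = rng.normal(size=(4, 32)), rng.normal(size=4)

def quantize(w, bits=8):
    """Uniform symmetric weight quantization to the given bit width."""
    scale = np.abs(w).max() / (2 ** (bits - 1) - 1)
    return np.round(w / scale) * scale

W1q, W2q = quantize(W1), quantize(W2)

def forward(x, w1, w2):
    return w2 @ np.maximum(w1 @ x + b1, 0.0) + b2

# Sample inputs from the region of interest and record the largest deviation
# between the full-precision and the weight-quantized network.
max_dev = 0.0
for _ in range(10_000):
    x = rng.uniform(-1.0, 1.0, size=16)
    dev = np.abs(forward(x, W1, W2) - forward(x, W1q, W2q)).max()
    max_dev = max(max_dev, dev)

print(f"largest sampled output deviation: {max_dev:.4f}")
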
Attack as detection: Using adversarial attack methods to detect abnormal examples
As a new programming paradigm, deep learning (DL) has achieved impressive performance
in areas such as image processing and speech recognition, and has expanded its …
CLEVEREST: accelerating CEGAR-based neural network verification via adversarial attacks
Deep neural networks (DNNs) have achieved remarkable performance in a myriad of
complex tasks. However, the lack of robustness and the black-box nature of DNNs hinder their …
Eager falsification for accelerating robustness verification of deep neural networks
Formal robustness verification of deep neural networks (DNNs) is a promising approach for
achieving a provable reliability guarantee to AI-enabled software systems. Limited scalability …
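Eager falsification and attack-guided CEGAR (as in CLEVEREST above) share a common pattern: run a cheap adversarial search first, and invoke the expensive complete verifier only when no counterexample turns up. The sketch below illustrates that control flow under my own assumptions; pgd_counterexample is a generic PGD-style search, and verify_robustness is a hypothetical placeholder for whatever complete verifier (MILP, SMT, abstract interpretation) one plugs in.

import torch
import torch.nn as nn

torch.manual_seed(0)

# Placeholder classifier on 10-dimensional inputs.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2)).eval()

def pgd_counterexample(x, label, eps=0.1, steps=20, step_size=0.02):
    """Search for a misclassified point inside the L-inf ball of radius eps."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x_adv), label)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = torch.clamp(x_adv.detach() + step_size * grad.sign(), x - eps, x + eps)
        if model(x_adv).argmax(dim=1).item() != label.item():
            return x_adv  # falsified: robustness property violated
    return None

def verify_robustness(x, label, eps):
    # Hypothetical hook for a complete verifier; not implemented here.
    raise NotImplementedError("plug in a complete verifier")

x = torch.randn(1, 10)
label = model(x).argmax(dim=1)
cex = pgd_counterexample(x, label)
if cex is not None:
    print("not robust: counterexample found by the attack, verifier skipped")
else:
    print("no counterexample found; falling back to the complete verifier")
    # verify_robustness(x, label, eps=0.1)
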