AS2T: Arbitrary Source-To-Target Adversarial Attack on Speaker Recognition Systems

G Chen, Z Zhao, F Song, S Chen… - IEEE Transactions on …, 2022 - ieeexplore.ieee.org
Recent work has illuminated the vulnerability of speaker recognition systems (SRSs) against
adversarial attacks, raising significant security concerns in deploying SRSs. However, they …

Advanced evasion attacks and mitigations on practical ML‐based phishing website classifiers

F Song, Y Lei, S Chen, L Fan… - International Journal of …, 2021 - Wiley Online Library
Machine learning (ML) based classifiers are vulnerable to evasion attacks, as
shown by recent attacks. However, there is a lack of systematic study of evasion attacks on …

Neuro-symbolic verification of deep neural networks

X **e, K Kersting, D Neider - arxiv preprint arxiv:2203.00938, 2022 - arxiv.org
Formal verification has emerged as a powerful approach to ensure the safety and reliability
of deep neural networks. However, current verification tools are limited to only a handful of …

QVIP: an ILP-based formal verification approach for quantized neural networks

Y Zhang, Z Zhao, G Chen, F Song, M Zhang… - Proceedings of the 37th …, 2022 - dl.acm.org
Deep learning has become a promising programming paradigm in software development,
owing to its surprising performance in solving many challenging tasks. Deep neural …

Attack as defense: Characterizing adversarial examples using robustness

Z Zhao, G Chen, J Wang, Y Yang, F Song… - Proceedings of the 30th …, 2021 - dl.acm.org
As a new programming paradigm, deep learning has expanded its application to many real-
world problems. At the same time, deep learning based software is found to be vulnerable …

BDD4BNN: a BDD-based quantitative analysis framework for binarized neural networks

Y Zhang, Z Zhao, G Chen, F Song, T Chen - International Conference on …, 2021 - Springer
Verifying and explaining the behavior of neural networks is becoming increasingly
important, especially when they are deployed in safety-critical applications. In this paper, we …

QEBVerif: Quantization error bound verification of neural networks

Y Zhang, F Song, J Sun - International Conference on Computer Aided …, 2023 - Springer
To alleviate the practical constraints for deploying deep neural networks (DNNs) on edge
devices, quantization is widely regarded as one promising technique. It reduces the …

Attack as detection: Using adversarial attack methods to detect abnormal examples

Z Zhao, G Chen, T Liu, T Li, F Song, J Wang… - ACM Transactions on …, 2024 - dl.acm.org
As a new programming paradigm, deep learning (DL) has achieved impressive performance
in areas such as image processing and speech recognition, and has expanded its …

CLEVEREST: accelerating CEGAR-based neural network verification via adversarial attacks

Z Zhao, Y Zhang, G Chen, F Song, T Chen… - International Static …, 2022 - Springer
Deep neural networks (DNNs) have achieved remarkable performance in a myriad of
complex tasks. However, their lack of robustness and black-box nature hinder their …

Eager falsification for accelerating robustness verification of deep neural networks

X Guo, W Wan, Z Zhang, M Zhang… - 2021 IEEE 32nd …, 2021 - ieeexplore.ieee.org
Formal robustness verification of deep neural networks (DNNs) is a promising approach for
achieving a provable reliability guarantee to AI-enabled software systems. Limited scalability …