OMG-ATTACK: Self-Supervised On-Manifold Generation of Transferable Evasion Attacks
Evasion Attacks (EA) are used to test the robustness of trained neural networks by distorting input data to misguide the model into incorrect classifications. Creating these attacks is a …
Imperceptible and sparse adversarial attacks via a dual-population-based constrained evolutionary algorithm
The sparse adversarial attack has attracted increasing attention due to the merit of a low attack cost via changing a small number of pixels. However, the generated adversarial …
An Adaptive Black-box Defense against Trojan Attacks (TrojDef)
Trojan backdoor is a poisoning attack against neural network (NN) classifiers in which adversaries try to exploit the (highly desirable) model reuse property to implant Trojans into …
Explore Adversarial Attack via Black Box Variational Inference
From the perspective of probability, we propose a new method for black-box adversarial attack via black-box variational inference (BBVI), where the knowledge of the victim model is …
An Adaptive Black-box Defense against Trojan Attacks on Text Data
Trojan backdoor is a poisoning attack against Neural Network (NN) classifiers in which adversaries try to exploit the (highly desirable) model reuse property to implant Trojans into …