Adversarial examples: A survey of attacks and defenses in deep learning-enabled cybersecurity systems

M Macas, C Wu, W Fuertes - Expert Systems with Applications, 2024 - Elsevier
Over the last few years, the adoption of machine learning in a wide range of domains has
been remarkable. Deep learning, in particular, has been extensively used to drive …

Adversarial machine learning in wireless communications using RF data: A review

D Adesina, CC Hsieh, YE Sagduyu… - … Surveys & Tutorials, 2022 - ieeexplore.ieee.org
Machine learning (ML) provides effective means to learn from spectrum data and solve
complex tasks involved in wireless communications. Supported by recent advances in …

RADAR: Robust AI-text detection via adversarial learning

X Hu, PY Chen, TY Ho - Advances in Neural Information …, 2023 - proceedings.neurips.cc
Recent advances in large language models (LLMs) and the intensifying popularity of
ChatGPT-like applications have blurred the boundary of high-quality text generation …

BERT-Attack: Adversarial attack against BERT using BERT

L Li, R Ma, Q Guo, X Xue, X Qiu - arXiv preprint arXiv:2004.09984, 2020 - arxiv.org
Adversarial attacks on discrete data (such as text) have proved significantly more
challenging than attacks on continuous data (such as images), since it is difficult to generate adversarial …
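
The title points to using a masked language model to propose word substitutions for an attack. The sketch below is a generic, greedy version of that idea rather than the authors' exact algorithm: a Hugging Face fill-mask pipeline proposes candidates for one position, and a hypothetical victim_predict function (assumed here, returning the target classifier's confidence) decides which substitution to keep.

```python
# Greedy masked-LM word-substitution sketch (illustrative, not the paper's exact method).
from transformers import pipeline  # assumes the transformers package is installed

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

def attack_position(words, i, victim_predict, top_k=10):
    """Try BERT's top-k candidates at position i and keep the substitution
    that lowers the victim classifier's confidence the most."""
    masked = " ".join(words[:i] + ["[MASK]"] + words[i + 1:])
    best_words, best_conf = list(words), victim_predict(" ".join(words))
    for cand in fill_mask(masked, top_k=top_k):
        trial = words[:i] + [cand["token_str"]] + words[i + 1:]
        conf = victim_predict(" ".join(trial))  # victim_predict is an assumed stub
        if conf < best_conf:
            best_words, best_conf = trial, conf
    return best_words, best_conf
```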

Measure and improve robustness in NLP models: A survey

X Wang, H Wang, D Yang - arXiv preprint arXiv:2112.08313, 2021 - arxiv.org
As NLP models achieve state-of-the-art performance on benchmarks and gain wide
application, it has become increasingly important to ensure the safe deployment of these …

Context-free word importance scores for attacking neural networks

N Shakeel, S Shakeel - Journal of Computational and …, 2022 - ojs.bonviewpress.com
Leave-One-Out (LOO) scores provide estimates of feature importance in neural
networks for adversarial attacks. In this work, we present context-free word scores as a …
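
As a concrete reading of the leave-one-out idea mentioned in this snippet, the sketch below scores each word by the drop in a classifier's confidence when that word is removed. The predict_proba function is an assumed stand-in for any text classifier, not an interface from the cited paper.

```python
# Minimal leave-one-out (LOO) word-importance sketch.
# Assumption: predict_proba(text) returns the probability the classifier assigns
# to the class of interest (e.g., the class predicted for the unmodified input).

def loo_word_importance(text, predict_proba):
    words = text.split()
    base = predict_proba(text)  # confidence on the unmodified input
    scores = []
    for i in range(len(words)):
        reduced = " ".join(words[:i] + words[i + 1:])  # drop the i-th word
        scores.append((words[i], base - predict_proba(reduced)))
    # Words whose removal causes the largest confidence drop rank as most important.
    return sorted(scores, key=lambda pair: pair[1], reverse=True)
```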

Frequency-guided word substitutions for detecting textual adversarial examples

M Mozes, P Stenetorp, B Kleinberg… - arXiv preprint arXiv …, 2020 - arxiv.org
Recent efforts have shown that neural text processing models are vulnerable to adversarial
examples, but the nature of these examples is poorly understood. In this work, we show that …
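
The title suggests a detection heuristic built on word frequencies. A hedged sketch of that general idea follows: infrequent words are swapped for more frequent WordNet synonyms, and an input is flagged when the classifier's confidence shifts by more than a threshold. The word_freq table, predict_proba function, cutoff, and threshold are all assumptions for illustration, not values taken from the paper.

```python
# Frequency-guided substitution check (illustrative sketch, not the paper's exact procedure).
# Assumptions: word_freq maps words to corpus counts, predict_proba(text) returns the
# classifier's confidence for its predicted class, and the cutoff/threshold are tunable.
from nltk.corpus import wordnet  # requires nltk with the wordnet corpus downloaded

def frequent_synonym(word, word_freq):
    """Return the most frequent WordNet synonym that is more common than the word itself."""
    candidates = {l.name().replace("_", " ") for s in wordnet.synsets(word) for l in s.lemmas()}
    candidates.discard(word)
    better = [c for c in candidates if word_freq.get(c, 0) > word_freq.get(word, 0)]
    return max(better, key=lambda c: word_freq[c]) if better else word

def looks_adversarial(text, predict_proba, word_freq, freq_cutoff=5, threshold=0.3):
    words = text.split()
    swapped = [frequent_synonym(w, word_freq) if word_freq.get(w, 0) < freq_cutoff else w
               for w in words]
    # A large confidence shift after replacing rare words hints at an adversarial input.
    return abs(predict_proba(text) - predict_proba(" ".join(swapped))) > threshold
```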

Certified robustness to text adversarial attacks by randomized [MASK]

J Zeng, J Xu, X Zheng, X Huang - Computational Linguistics, 2023 - direct.mit.edu
Very recently, a few certified defense methods have been developed to provably guarantee
the robustness of a text classifier to adversarial synonym substitutions. However, all the …
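
The defense named in this title is built around randomized masking. The sketch below shows only the underlying smoothing idea, not the certification procedure from the paper: several copies of the input have a random fraction of words replaced by a mask token, and the base classifier's predictions are aggregated by majority vote. The predict_label function, masking rate, and sample count are assumptions.

```python
# Randomized-masking smoothed prediction (sketch of the general idea; no certificate).
import random

def smoothed_predict(text, predict_label, mask_rate=0.3, n_samples=20, mask_token="[MASK]"):
    """Majority vote over predictions on copies of the input with random words masked.
    predict_label(text) is an assumed base classifier returning a class label."""
    words = text.split()
    if not words:
        return predict_label(text)
    votes = {}
    for _ in range(n_samples):
        k = max(1, int(mask_rate * len(words)))
        hidden = set(random.sample(range(len(words)), k))  # positions to mask out
        masked = " ".join(mask_token if i in hidden else w for i, w in enumerate(words))
        label = predict_label(masked)
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)
```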

Semantic robustness of models of source code

J Henkel, G Ramakrishnan, Z Wang… - … on Software Analysis …, 2022 - ieeexplore.ieee.org
Deep neural networks are vulnerable to adversarial examples: small input perturbations that
result in incorrect predictions. We study this problem for models of source code, where we …

ALLSH: Active learning guided by local sensitivity and hardness

S Zhang, C Gong, X Liu, P He, W Chen… - arXiv preprint arXiv …, 2022 - arxiv.org
Active learning, which effectively collects informative unlabeled data for annotation, reduces
the demand for labeled data. In this work, we propose to retrieve unlabeled samples with a …