Transferable adversarial attacks on vision transformers with token gradient regularization

J Zhang, Y Huang, W Wu… - Proceedings of the IEEE …, 2023 - openaccess.thecvf.com
Vision transformers (ViTs) have been successfully deployed in a variety of computer vision
tasks, but they are still vulnerable to adversarial samples. Transfer-based attacks use a local …

Improving the transferability of adversarial samples by path-augmented method

J Zhang, J Huang, W Wang, Y Li… - Proceedings of the …, 2023 - openaccess.thecvf.com
Deep neural networks have achieved unprecedented success on diverse vision tasks.
However, they are vulnerable to adversarial noise that is imperceptible to humans. This …

Homophily-enhanced self-supervision for graph structure learning: Insights and directions

L Wu, H Lin, Z Liu, Z Liu, Y Huang… - IEEE Transactions on …, 2023 - ieeexplore.ieee.org
Graph neural networks (GNNs) have recently achieved remarkable success on a variety of
graph-related tasks, while such success relies heavily on a given graph structure that may …

Beyond homophily and homogeneity assumption: Relation-based frequency adaptive graph neural networks

L Wu, H Lin, B Hu, C Tan, Z Gao… - IEEE Transactions on …, 2023 - ieeexplore.ieee.org
Graph neural networks (GNNs) play important roles in various graph-related
tasks. However, most existing GNNs are based on the assumption of homophily, so they …

A general black-box adversarial attack on graph-based fake news detectors

P Zhu, Z Pan, Y Liu, J Tian, K Tang, Z Wang - arXiv preprint arXiv …, 2024 - arxiv.org
Graph Neural Network (GNN)-based fake news detectors apply various methods to construct
graphs, aiming to learn distinctive news embeddings for classification. Since the …

Enhancing transferability of adversarial examples through mixed-frequency inputs

Y Qian, K Chen, B Wang, Z Gu, S Ji… - IEEE Transactions …, 2024 - ieeexplore.ieee.org
Recent studies have shown that Deep Neural Networks (DNNs) are easily deceived by
adversarial examples, revealing their serious vulnerability. Due to the transferability …

Node-aware Bi-smoothing: Certified Robustness against Graph Injection Attacks

Y Lai, Y Zhu, B Pan, K Zhou - 2024 IEEE Symposium on …, 2024 - ieeexplore.ieee.org
Deep Graph Learning (DGL) has emerged as a crucial technique across various domains.
However, recent studies have exposed vulnerabilities in DGL models, such as susceptibility …

Uplift modeling for target user attacks on recommender systems

W Wang, C Wang, F Feng, W Shi, D Ding… - Proceedings of the ACM …, 2024 - dl.acm.org
Recommender systems are vulnerable to injective attacks, which inject a limited number of fake
users into the platforms to manipulate the exposure of target items to all users. In this work, we …

Towards Semantics- and Domain-Aware Adversarial Attacks

J Zhang, YC Huang, W Wu, MR Lyu - IJCAI, 2023 - ijcai.org
Language models are known to be vulnerable to textual adversarial attacks, which add
human-imperceptible perturbations to the input to mislead DNNs. It is thus imperative to …

Simple and efficient partial graph adversarial attack: A new perspective

G Zhu, M Chen, C Yuan… - IEEE Transactions on …, 2024 - ieeexplore.ieee.org
As the study of graph neural networks becomes more intensive and comprehensive, their
robustness and security have attracted considerable research interest. The existing global attack …