Transferable adversarial attacks on vision transformers with token gradient regularization
Vision transformers (ViTs) have been successfully deployed in a variety of computer vision
tasks, but they are still vulnerable to adversarial samples. Transfer-based attacks use a local …
Improving the transferability of adversarial samples by path-augmented method
Deep neural networks have achieved unprecedented success on diverse vision tasks.
However, they are vulnerable to adversarial noise that is imperceptible to humans. This …
Homophily-enhanced self-supervision for graph structure learning: Insights and directions
Graph neural networks (GNNs) have recently achieved remarkable success on a variety of
graph-related tasks, while such success relies heavily on a given graph structure that may …
Beyond homophily and homogeneity assumption: Relation-based frequency adaptive graph neural networks
Graph neural networks (GNNs) have been playing important roles in various graph-related
tasks. However, most existing GNNs are based on the assumption of homophily, so they …
A general black-box adversarial attack on graph-based fake news detectors
Graph Neural Network (GNN)-based fake news detectors apply various methods to construct
graphs, aiming to learn distinctive news embeddings for classification. Since the …
Enhancing transferability of adversarial examples through mixed-frequency inputs
Recent studies have shown that Deep Neural Networks (DNNs) are easily deceived by
adversarial examples, revealing their serious vulnerability. Due to the transferability …
Node-aware Bi-smoothing: Certified Robustness against Graph Injection Attacks
Deep Graph Learning (DGL) has emerged as a crucial technique across various domains.
However, recent studies have exposed vulnerabilities in DGL models, such as susceptibility …
Uplift modeling for target user attacks on recommender systems
Recommender systems are vulnerable to injective attacks, which inject limited fake users
into the platforms to manipulate the exposure of target items to all users. In this work, we …
Towards Semantics- and Domain-Aware Adversarial Attacks
Language models are known to be vulnerable to textual adversarial attacks, which
add human-imperceptible perturbations to the input to mislead DNNs. It is thus imperative to …
Simple and efficient partial graph adversarial attack: A new perspective
G Zhu, M Chen, C Yuan… - IEEE Transactions on …, 2024 - ieeexplore.ieee.org
As the study of graph neural networks becomes more intensive and comprehensive, their
robustness and security have received great research interest. The existing global attack …