Hide in thicket: Generating imperceptible and rational adversarial perturbations on 3D point clouds
Adversarial attack methods based on point manipulation for 3D point cloud classification
have revealed the fragility of 3D models, yet the adversarial examples they produce are …
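As context for the point-manipulation attacks this entry surveys, below is a minimal sketch of a generic gradient-based point-perturbation attack in PyTorch. The `model` interface (mapping a `(B, N, 3)` coordinate tensor to class logits), the step size, and the box-clipping bound are assumptions for illustration; this is not the paper's own "Hide in thicket" method.

```python
import torch
import torch.nn.functional as F

def perturb_points(model, points, labels, epsilon=0.05, steps=10, lr=0.01):
    """Untargeted point-perturbation attack: ascend the classification loss,
    clipping each coordinate offset to a small box."""
    delta = torch.zeros_like(points, requires_grad=True)  # (B, N, 3) offsets
    for _ in range(steps):
        loss = F.cross_entropy(model(points + delta), labels)
        grad = torch.autograd.grad(loss, delta)[0]
        with torch.no_grad():
            delta += lr * grad.sign()         # gradient-ascent step
            delta.clamp_(-epsilon, epsilon)   # crude imperceptibility bound
    return (points + delta).detach()
```

A hard per-coordinate clamp like this is the simplest perceptibility constraint; the paper's point is precisely that such crude bounds still yield visible outliers, motivating more rational placement of perturbations.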
VL-Trojan: Multimodal instruction backdoor attacks against autoregressive visual language models
Autoregressive Visual Language Models (VLMs) demonstrate remarkable few-shot
learning capabilities within a multimodal context. Recently, multimodal instruction tuning has …
Sibling-attack: Rethinking transferable adversarial attacks against face recognition
A hard challenge in developing practical face recognition (FR) attacks is due to the black-
box nature of the target FR model, i.e., inaccessible gradient and parameter information to …
Inducing high energy-latency of large vision-language models with verbose images
Large vision-language models (VLMs) such as GPT-4 have achieved exceptional
performance across various multi-modal tasks. However, the deployment of VLMs …
Boosting transferability in vision-language attacks via diversification along the intersection region of adversarial trajectory
Vision-language pre-training (VLP) models exhibit remarkable capabilities in
comprehending both images and text, yet they remain susceptible to multimodal adversarial …
Improving fast adversarial training with prior-guided knowledge
Fast adversarial training (FAT) is an efficient method to improve robustness in white-box
attack scenarios. However, the original FAT suffers from catastrophic overfitting, which …
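Since this entry builds on fast adversarial training without restating it, here is a minimal sketch of the standard single-step (FGSM-with-random-start) training loop in PyTorch. The `model`, `loader`, and the `epsilon`/`alpha` step sizes are assumptions for illustration; the paper's prior-guided variant is not reproduced here.

```python
import torch
import torch.nn.functional as F

def fat_epoch(model, loader, optimizer, epsilon=8/255, alpha=10/255, device="cuda"):
    """One epoch of single-step adversarial training on images in [0, 1]."""
    model.train()
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        # Random start in the epsilon-ball, then a single FGSM step.
        delta = torch.empty_like(x).uniform_(-epsilon, epsilon).requires_grad_(True)
        grad = torch.autograd.grad(F.cross_entropy(model(x + delta), y), delta)[0]
        delta = (delta + alpha * grad.sign()).clamp(-epsilon, epsilon).detach()
        # Update the model on the crafted adversarial examples.
        optimizer.zero_grad()
        F.cross_entropy(model((x + delta).clamp(0.0, 1.0)), y).backward()
        optimizer.step()
```

Replacing multi-step PGD with this single crafted step is what makes FAT cheap; the catastrophic overfitting the abstract mentions is a known failure mode of exactly this loop.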
Object detectors in the open environment: Challenges, solutions, and outlook
With the emergence of foundation models, deep learning-based object detectors have
shown practical usability in closed set scenarios. However, for real-world tasks, object …
Jailbreak vision language models via bi-modal adversarial prompt
In the realm of large vision language models (LVLMs), jailbreak attacks serve as a red-
teaming approach to bypass guardrails and uncover safety implications. Existing jailbreaks …
Revisiting backdoor attacks against large vision-language models
Instruction tuning enhances large vision-language models (LVLMs) but raises security risks
through potential backdoor attacks due to their openness. Previous backdoor studies focus …
Does few-shot learning suffer from backdoor attacks?
The field of few-shot learning (FSL) has shown promising results in scenarios where training
data is limited, but its vulnerability to backdoor attacks remains largely unexplored. We first …