To Generate or Not? Safety-Driven Unlearned Diffusion Models Are Still Easy to Generate Unsafe Images... For Now
The recent advances in diffusion models (DMs) have revolutionized the generation of
realistic and complex images. However, these models also introduce potential safety …
Defensive unlearning with adversarial training for robust concept erasure in diffusion models
Diffusion models (DMs) have achieved remarkable success in text-to-image generation, but
they also pose safety risks, such as the potential generation of harmful content and copyright …
Visual prompting for adversarial robustness
In this work, we leverage visual prompting (VP) to improve adversarial robustness of a fixed,
pre-trained model at test time. Compared to conventional adversarial defenses, VP allows …
Reverse engineering of deceptions on machine-and human-centric attacks
This work presents a comprehensive exploration of Reverse Engineering of Deceptions
(RED) in the field of adversarial machine learning. It delves into the intricacies of machine …
Improving adversarial robustness of medical imaging systems via adding global attention noise
Y Dai, Y Qian, F Lu, B Wang, Z Gu, W Wang… - Computers in Biology …, 2023 - Elsevier
Recent studies have found that medical images are vulnerable to adversarial attacks.
However, it is difficult to protect medical imaging systems from adversarial examples in that …
Holistic adversarial robustness of deep learning models
Adversarial robustness studies the worst-case performance of a machine learning model to
ensure safety and reliability. With the proliferation of deep-learning-based technology, the …
Less is more: Data pruning for faster adversarial training
Deep neural networks (DNNs) are sensitive to adversarial examples, resulting in fragile and
unreliable performance in the real world. Although adversarial training (AT) is currently one …
Neural architecture search for adversarial robustness via learnable pruning
The convincing performance of deep neural networks (DNNs) can be degraded
tremendously by malicious samples, known as adversarial examples. Besides, with the …
Uncovering Distortion Differences: A Study of Adversarial Attacks and Machine Discriminability
Deep neural networks have performed remarkably in many areas, including image-related
classification tasks. However, various studies have shown that they are vulnerable to …
Tracing hyperparameter dependencies for model parsing via learnable graph pooling network
Model Parsing defines the research task of predicting hyperparameters of the generative
model (GM), given a generated image as input. Since a diverse set of hyperparameters is …