Diffusion models: A comprehensive survey of methods and applications
Diffusion models have emerged as a powerful new family of deep generative models with
record-breaking performance in many applications, including image synthesis, video …
How deep learning sees the world: A survey on adversarial attacks & defenses
Deep Learning is currently used to perform multiple tasks, such as object recognition, face
recognition, and natural language processing. However, Deep Neural Networks (DNNs) are …
Harmbench: A standardized evaluation framework for automated red teaming and robust refusal
Automated red teaming holds substantial promise for uncovering and mitigating the risks
associated with the malicious use of large language models (LLMs), yet the field lacks a …
Wavelet-improved score-based generative model for medical imaging
The score-based generative model (SGM) has demonstrated remarkable performance in
addressing challenging under-determined inverse problems in medical imaging. However …
Image hijacks: Adversarial images can control generative models at runtime
Are foundation models secure against malicious actors? In this work, we focus on the image
input to a vision-language model (VLM). We discover image hijacks, adversarial images that …
One prompt word is enough to boost adversarial robustness for pre-trained vision-language models
Large pre-trained Vision-Language Models (VLMs) like CLIP, despite their remarkable
generalization ability, are highly vulnerable to adversarial examples. This work …
Decoupled Kullback-Leibler divergence loss
In this paper, we delve deeper into the Kullback–Leibler (KL) Divergence loss and
mathematically prove that it is equivalent to the Decoupled Kullback-Leibler (DKL) …
Robust classification via a single diffusion model
Diffusion models have been applied to improve adversarial robustness of image classifiers
by purifying adversarial noise or generating realistic data for adversarial training …
Toward understanding generative data augmentation
Generative data augmentation, which scales datasets by obtaining fake labeled examples
from a trained conditional generative model, boosts classification performance in various …
Diffusion models and semi-supervised learners benefit mutually with few labels
In an effort to further advance semi-supervised generative and classification tasks, we
propose a simple yet effective training strategy called dual pseudo training (DPT), built …