Diffusion models: A comprehensive survey of methods and applications

L Yang, Z Zhang, Y Song, S Hong, R Xu, Y Zhao… - ACM Computing …, 2023 - dl.acm.org
Diffusion models have emerged as a powerful new family of deep generative models with
record-breaking performance in many applications, including image synthesis, video …

How deep learning sees the world: A survey on adversarial attacks & defenses

JC Costa, T Roxo, H Proença, PRM Inácio - IEEE Access, 2024 - ieeexplore.ieee.org
Deep Learning is currently used to perform multiple tasks, such as object recognition, face
recognition, and natural language processing. However, Deep Neural Networks (DNNs) are …

HarmBench: A standardized evaluation framework for automated red teaming and robust refusal

M Mazeika, L Phan, X Yin, A Zou, Z Wang, N Mu… - arXiv preprint arXiv …, 2024 - arxiv.org
Automated red teaming holds substantial promise for uncovering and mitigating the risks
associated with the malicious use of large language models (LLMs), yet the field lacks a …

Wavelet-improved score-based generative model for medical imaging

W Wu, Y Wang, Q Liu, G Wang… - IEEE Transactions on …, 2023 - ieeexplore.ieee.org
The score-based generative model (SGM) has demonstrated remarkable performance in
addressing challenging under-determined inverse problems in medical imaging. However …

Image hijacks: Adversarial images can control generative models at runtime

L Bailey, E Ong, S Russell, S Emmons - arXiv preprint arXiv:2309.00236, 2023 - arxiv.org
Are foundation models secure against malicious actors? In this work, we focus on the image
input to a vision-language model (VLM). We discover image hijacks, adversarial images that …

One prompt word is enough to boost adversarial robustness for pre-trained vision-language models

L Li, H Guan, J Qiu, M Spratling - Proceedings of the IEEE …, 2024 - openaccess.thecvf.com
Large pre-trained Vision-Language Models (VLMs) like CLIP, despite having
remarkable generalization ability, are highly vulnerable to adversarial examples. This work …

Decoupled Kullback-Leibler divergence loss

J Cui, Z Tian, Z Zhong, X Qi, B Yu… - Advances in Neural …, 2025 - proceedings.neurips.cc
In this paper, we delve deeper into the Kullback–Leibler (KL) Divergence loss and
mathematically prove that it is equivalent to the Decoupled Kullback-Leibler (DKL) …

Robust classification via a single diffusion model

H Chen, Y Dong, Z Wang, X Yang, C Duan… - arXiv preprint arXiv …, 2023 - arxiv.org
Diffusion models have been applied to improve the adversarial robustness of image classifiers
by purifying adversarial noise or generating realistic data for adversarial training …

Toward understanding generative data augmentation

C Zheng, G Wu, C Li - Advances in Neural Information …, 2023 - proceedings.neurips.cc
Generative data augmentation, which scales datasets by obtaining fake labeled examples
from a trained conditional generative model, boosts classification performance in various …

Diffusion models and semi-supervised learners benefit mutually with few labels

Z You, Y Zhong, F Bao, J Sun… - Advances in Neural …, 2023 - proceedings.neurips.cc
In an effort to further advance semi-supervised generative and classification tasks, we
propose a simple yet effective training strategy called dual pseudo training (DPT), built …