To generate or not? Safety-driven unlearned diffusion models are still easy to generate unsafe images... for now

Y Zhang, J Jia, X Chen, A Chen, Y Zhang, J Liu… - … on Computer Vision, 2024 - Springer
The recent advances in diffusion models (DMs) have revolutionized the generation of
realistic and complex images. However, these models also introduce potential safety …

Defensive unlearning with adversarial training for robust concept erasure in diffusion models

Y Zhang, X Chen, J Jia, Y Zhang… - Advances in …, 2025 - proceedings.neurips.cc
Diffusion models (DMs) have achieved remarkable success in text-to-image generation, but
they also pose safety risks, such as the potential generation of harmful content and copyright …

Understanding and improving visual prompting: A label-mapping perspective

A Chen, Y Yao, PY Chen… - Proceedings of the …, 2023 - openaccess.thecvf.com
We revisit and advance visual prompting (VP), an input prompting technique for vision tasks.
VP can reprogram a fixed, pre-trained source model to accomplish downstream tasks in the …

Revisiting zeroth-order optimization for memory-efficient LLM fine-tuning: A benchmark

Y Zhang, P Li, J Hong, J Li, Y Zhang, W Zheng… - arXiv preprint arXiv …, 2024 - arxiv.org
In the evolving landscape of natural language processing (NLP), fine-tuning pre-trained
Large Language Models (LLMs) with first-order (FO) optimizers like SGD and Adam has …

Fairness reprogramming

G Zhang, Y Zhang, Y Zhang, W Fan… - Advances in neural …, 2022 - proceedings.neurips.cc
Despite a surge of recent advances in promoting machine learning (ML) fairness, the
existing mainstream approaches mostly require training or finetuning the entire weights of …

Text-visual prompting for efficient 2d temporal video grounding

Y Zhang, X Chen, J Jia, S Liu… - Proceedings of the IEEE …, 2023 - openaccess.thecvf.com
In this paper, we study the problem of temporal video grounding (TVG), which aims to predict
the starting/ending time points of moments described by a text sentence within a long …

Adversarial prompt tuning for vision-language models

J Zhang, X Ma, X Wang, L Qiu, J Wang… - … on Computer Vision, 2024 - Springer
With the rapid advancement of multimodal learning, pre-trained Vision-Language Models
(VLMs) such as CLIP have demonstrated remarkable capacities in bridging the gap between …

Visual prompting for adversarial robustness

A Chen, P Lorenz, Y Yao, PY Chen… - ICASSP 2023-2023 …, 2023 - ieeexplore.ieee.org
In this work, we leverage visual prompting (VP) to improve adversarial robustness of a fixed,
pre-trained model at test time. Compared to conventional adversarial defenses, VP allows …

Seasoning model soups for robustness to adversarial and natural distribution shifts

F Croce, SA Rebuffi, E Shelhamer… - Proceedings of the …, 2023 - openaccess.thecvf.com
Adversarial training is widely used to make classifiers robust to a specific threat or
adversary, such as ℓ_p-norm bounded perturbations for a given p. However, existing …

Learning to learn from APIs: Black-box data-free meta-learning

Z Hu, L Shen, Z Wang, B Wu… - … on Machine Learning, 2023 - proceedings.mlr.press
Data-free meta-learning (DFML) aims to enable efficient learning of new tasks by meta-
learning from a collection of pre-trained models without access to the training data. Existing …