To generate or not? Safety-driven unlearned diffusion models are still easy to generate unsafe images... for now

Y Zhang, J Jia, X Chen, A Chen, Y Zhang, J Liu… - … on Computer Vision, 2024 - Springer
The recent advances in diffusion models (DMs) have revolutionized the generation of
realistic and complex images. However, these models also introduce potential safety …

Defensive unlearning with adversarial training for robust concept erasure in diffusion models

Y Zhang, X Chen, J Jia, Y Zhang… - Advances in …, 2025 - proceedings.neurips.cc
Diffusion models (DMs) have achieved remarkable success in text-to-image generation, but
they also pose safety risks, such as the potential generation of harmful content and copyright …

Revisiting zeroth-order optimization for memory-efficient LLM fine-tuning: A benchmark

Y Zhang, P Li, J Hong, J Li, Y Zhang, W Zheng… - arXiv preprint arXiv …, 2024 - arxiv.org
In the evolving landscape of natural language processing (NLP), fine-tuning pre-trained
Large Language Models (LLMs) with first-order (FO) optimizers like SGD and Adam has …

ZeroG: Investigating cross-dataset zero-shot transferability in graphs

Y Li, P Wang, Z Li, JX Yu, J Li - Proceedings of the 30th ACM SIGKDD …, 2024 - dl.acm.org
With the development of foundation models such as large language models, zero-shot
transfer learning has become increasingly significant. This is highlighted by the generative …

Towards training digitally-tied analog blocks via hybrid gradient computation

T Nest, M Ernoult - Advances in Neural Information …, 2025 - proceedings.neurips.cc
Power efficiency is plateauing in the standard digital electronics realm such that new
hardware, models, and algorithms are needed to reduce the costs of AI training. The …

DPZero: Private fine-tuning of language models without backpropagation

L Zhang, B Li, KK Thekumparampil, S Oh… - arXiv preprint arXiv …, 2023 - arxiv.org
The widespread practice of fine-tuning large language models (LLMs) on domain-specific
data faces two major challenges in memory and privacy. First, as the size of LLMs continues …

Zeroth-order fine-tuning of LLMs with extreme sparsity

W Guo, J Long, Y Zeng, Z Liu, X Yang, Y Ran… - arXiv preprint arXiv …, 2024 - arxiv.org
Zeroth-order optimization (ZO) is a memory-efficient strategy for fine-tuning Large Language
Models using only forward passes. However, the application of ZO fine-tuning in memory …
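Several of the entries here (the ZO benchmark, DPZero, and the extreme-sparsity and low-rank ZO papers) revolve around zeroth-order fine-tuning, i.e., estimating gradients from forward passes alone. As a rough illustration of the underlying idea only, and not of any specific method from these papers, the sketch below uses a two-point (SPSA-style) gradient estimate on a toy loss; the function names and the quadratic loss are hypothetical placeholders.

```python
# Minimal, generic sketch of a two-point zeroth-order (ZO) gradient estimate,
# the basic ingredient behind forward-pass-only fine-tuning methods.
# All names here are illustrative placeholders, not any paper's actual API.
import numpy as np

def zo_gradient(loss_fn, theta, eps=1e-3, rng=None):
    """Estimate the gradient of loss_fn at theta from two forward passes."""
    rng = np.random.default_rng() if rng is None else rng
    z = rng.standard_normal(theta.shape)      # random perturbation direction
    loss_plus = loss_fn(theta + eps * z)      # forward pass 1
    loss_minus = loss_fn(theta - eps * z)     # forward pass 2
    return (loss_plus - loss_minus) / (2 * eps) * z

def zo_sgd(loss_fn, theta, lr=5e-2, steps=500, eps=1e-3, seed=0):
    """Plain SGD driven by the ZO estimate; no backpropagation needed."""
    rng = np.random.default_rng(seed)
    for _ in range(steps):
        theta = theta - lr * zo_gradient(loss_fn, theta, eps, rng)
    return theta

if __name__ == "__main__":
    # Toy quadratic loss standing in for a model's forward pass.
    target = np.array([1.0, -2.0, 0.5])
    loss = lambda w: float(np.sum((w - target) ** 2))
    w = zo_sgd(loss, np.zeros(3))
    print("final params:", w, "loss:", loss(w))
```

In actual ZO fine-tuning, an estimator of this kind is applied to the full LLM parameter vector (often with tricks such as shared random seeds, sparsity, or low-rank structure), which is where the memory savings relative to backpropagation come from.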

On unsupervised prompt learning for classification with black-box language models

ZY Zhang, J Zhang, H Yao, G Niu… - arXiv preprint arXiv …, 2024 - arxiv.org
Large language models (LLMs) have achieved impressive success in text-formatted
learning problems, and most popular LLMs have been deployed in a black-box fashion …

Enhancing zeroth-order fine-tuning for language models with low-rank structures

Y Chen, Y Zhang, L Cao, K Yuan, Z Wen - arXiv preprint arXiv:2410.07698, 2024 - arxiv.org
Parameter-efficient fine-tuning (PEFT) significantly reduces memory costs when adapting
large language models (LLMs) for downstream applications. However, traditional first-order …

The power of few: Accelerating and enhancing data reweighting with coreset selection

M Jafari, Y Zhang, Y Zhang, S Liu - ICASSP 2024-2024 IEEE …, 2024 - ieeexplore.ieee.org
As machine learning tasks continue to evolve, the trend has been to gather larger datasets
and train increasingly larger models. While this has led to advancements in accuracy, it has …