Differentially private diffusion models

T Dockhorn, T Cao, A Vahdat, K Kreis - arXiv preprint arXiv:2210.09929, 2022 - arxiv.org
While modern machine learning models rely on increasingly large training datasets, data is
often limited in privacy-sensitive domains. Generative models trained with differential privacy …

Synthetic text generation with differential privacy: A simple and practical recipe

X Yue, HA Inan, X Li, G Kumar, J McAnallen… - arXiv preprint arXiv …, 2022 - arxiv.org
Privacy concerns have attracted increasing attention in data-driven products due to the
tendency of machine learning models to memorize sensitive training data. Generating …

Differentially private optimization on large model at small cost

Z Bu, YX Wang, S Zha… - … Conference on Machine …, 2023 - proceedings.mlr.press
Differentially private (DP) optimization is the standard paradigm to learn large neural
networks that are accurate and privacy-preserving. The computational cost for DP deep …

The invisible arms race: digital trends in illicit goods trafficking and AI-enabled responses

I Mademlis, M Mancuso, C Paternoster… - … on Technology and …, 2024 - ieeexplore.ieee.org
Recent trends in the modus operandi of technologically-aware criminal groups engaged in
illicit goods trafficking (e.g., firearms, drugs, cultural artifacts, etc.) have given rise to …

Differentially private bias-term only fine-tuning of foundation models

Z Bu, YX Wang, S Zha, G Karypis - 2022 - amazon.science
We study the problem of differentially private (DP) fine-tuning of large pre-trained models, a
recent privacy-preserving approach suitable for solving downstream tasks with sensitive …

On the convergence and calibration of deep learning with differential privacy

Z Bu, H Wang, Z Dai, Q Long - Transactions on Machine Learning …, 2023 - ncbi.nlm.nih.gov
Differentially private (DP) training preserves data privacy, usually at the cost of slower
convergence (and thus lower accuracy), as well as more severe miscalibration than its non …

ViP: A differentially private foundation model for computer vision

Y Yu, M Sanjabi, Y Ma, K Chaudhuri, C Guo - arXiv preprint arXiv …, 2023 - arxiv.org
Artificial intelligence (AI) has seen a tremendous surge in capabilities thanks to the use of
foundation models trained on internet-scale data. On the flip side, the uncurated nature of …

DP-Mix: Mixup-based data augmentation for differentially private learning

W Bao, F Pittaluga, VK BG… - Advances in Neural …, 2023 - proceedings.neurips.cc
Data augmentation techniques, such as image transformations and combinations, are highly
effective at improving the generalization of computer vision models, especially when training …

Exploring the benefits of visual prompting in differential privacy

Y Li, YL Tsai, CM Yu, PY Chen… - Proceedings of the IEEE …, 2023 - openaccess.thecvf.com
Visual Prompting (VP) is an emerging and powerful technique that allows sample-efficient
adaptation to downstream tasks by engineering a well-trained frozen source model. In this …

Individual privacy accounting for differentially private stochastic gradient descent

D Yu, G Kamath, J Kulkarni, TY Liu, J Yin… - arXiv preprint arXiv …, 2022 - arxiv.org
Differentially private stochastic gradient descent (DP-SGD) is the workhorse algorithm for
recent advances in private deep learning. It provides a single privacy guarantee to all …
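The DP-SGD mechanism this last entry refers to, per-example gradient clipping followed by calibrated Gaussian noise, can be sketched as follows. This is a minimal NumPy illustration under our own assumptions: the function and parameter names are ours, and production implementations (e.g., Opacus) additionally track the cumulative privacy budget with an accountant.

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """One DP-SGD update direction: clip each per-example gradient to L2 norm
    `clip_norm`, sum, add Gaussian noise with std `noise_multiplier * clip_norm`,
    then average over the batch."""
    rng = rng if rng is not None else np.random.default_rng(0)
    grads = np.asarray(per_example_grads, dtype=float)
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    # Scale factor min(1, C / ||g_i||) bounds each example's contribution.
    scale = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    clipped = grads * scale
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=grads.shape[1])
    return (clipped.sum(axis=0) + noise) / len(grads)

# Toy batch: 4 per-example gradients in R^3, each with norm > clip_norm.
update = dp_sgd_step(np.ones((4, 3)) * 2.0)
```

With `noise_multiplier=0` the step reduces to the mean of the clipped gradients, which makes the clipping behavior easy to check in isolation.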