Differentially private diffusion models
While modern machine learning models rely on increasingly large training datasets, data is
often limited in privacy-sensitive domains. Generative models trained with differential privacy …
Synthetic text generation with differential privacy: A simple and practical recipe
Privacy concerns have attracted increasing attention in data-driven products due to the
tendency of machine learning models to memorize sensitive training data. Generating …
Differentially private optimization on large model at small cost
Differentially private (DP) optimization is the standard paradigm to learn large neural
networks that are accurate and privacy-preserving. The computational cost for DP deep …
The invisible arms race: digital trends in illicit goods trafficking and AI-enabled responses
I Mademlis, M Mancuso, C Paternoster… - … on Technology and …, 2024 - ieeexplore.ieee.org
Recent trends in the modus operandi of technologically-aware criminal groups engaged in
illicit goods trafficking (e.g., firearms, drugs, cultural artifacts, etc.) have given rise to …
Differentially private bias-term only fine-tuning of foundation models
We study the problem of differentially private (DP) fine-tuning of large pre-trained models, a
recent privacy-preserving approach suitable for solving downstream tasks with sensitive …
On the convergence and calibration of deep learning with differential privacy
Differentially private (DP) training preserves the data privacy usually at the cost of slower
convergence (and thus lower accuracy), as well as more severe mis-calibration than its non …
Vip: A differentially private foundation model for computer vision
Artificial intelligence (AI) has seen a tremendous surge in capabilities thanks to the use of
foundation models trained on internet-scale data. On the flip side, the uncurated nature of …
DP-Mix: mixup-based data augmentation for differentially private learning
Data augmentation techniques, such as image transformations and combinations, are highly
effective at improving the generalization of computer vision models, especially when training …
Exploring the benefits of visual prompting in differential privacy
Visual Prompting (VP) is an emerging and powerful technique that allows sample-efficient
adaptation to downstream tasks by engineering a well-trained frozen source model. In this …
Individual privacy accounting for differentially private stochastic gradient descent
Differentially private stochastic gradient descent (DP-SGD) is the workhorse algorithm for
recent advances in private deep learning. It provides a single privacy guarantee to all …