Practical and private (deep) learning without sampling or shuffling
We consider training models with differential privacy (DP) using mini-batch gradients. The
existing state-of-the-art, Differentially Private Stochastic Gradient Descent (DP-SGD) …
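For context, here is a minimal NumPy sketch of a single DP-SGD update, assuming per-example gradients are already materialized; the function and parameter names are illustrative, not taken from the paper.

import numpy as np

def dp_sgd_step(params, per_example_grads, clip_norm, noise_multiplier, lr, rng):
    # Clip every per-example gradient to L2 norm at most clip_norm.
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    # Gaussian mechanism on the clipped sum: per-example L2 sensitivity is clip_norm,
    # so the noise standard deviation is noise_multiplier * clip_norm.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=params.shape)
    noisy_mean = (clipped.sum(axis=0) + noise) / len(per_example_grads)
    return params - lr * noisy_mean

rng = np.random.default_rng(0)
params = np.zeros(5)
example_grads = rng.normal(size=(32, 5))   # stand-in for one mini-batch of per-example gradients
params = dp_sgd_step(params, example_grads, clip_norm=1.0, noise_multiplier=1.1, lr=0.1, rng=rng)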
(Amplified) Banded Matrix Factorization: A unified approach to private training
Matrix factorization (MF) mechanisms for differential privacy (DP) have substantially
improved the state-of-the-art in privacy-utility-computation tradeoffs for ML applications in a …
Constant matters: Fine-grained error bound on differentially private continual observation
We study fine-grained error bounds for differentially private algorithms for counting under
continual observation. Our main insight is that the matrix mechanism when using lower …
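To make the matrix mechanism studied here concrete, the sketch below (my own illustration, not the paper's analysis) compares the per-query noise variance of three factorizations A = B C of the lower-triangular prefix-sum workload: input perturbation (C = I), output perturbation (B = I), and the classical square-root factorization whose Toeplitz coefficients come from the power series of (1 - x)^{-1/2}.

import numpy as np

n = 8
A = np.tril(np.ones((n, n)))            # prefix-sum (continual counting) workload

def per_query_error(B, C, sigma=1.0):
    # Matrix mechanism: release B (C x + z) = A x + B z with z ~ N(0, (sigma * sens)^2 I),
    # where sens is the largest column L2 norm of C (the L2 sensitivity of C x when one
    # stream element changes by at most 1). The extra error on query i is (sigma * sens)^2
    # times the squared L2 norm of row i of B; we report the mean over i.
    sens = np.linalg.norm(C, axis=0).max()
    return (sigma * sens) ** 2 * np.mean(np.sum(B ** 2, axis=1))

# Square-root factorization: lower-triangular Toeplitz M with M M = A,
# built from the coefficients of (1 - x)^{-1/2}.
c = np.ones(n)
for k in range(1, n):
    c[k] = c[k - 1] * (2 * k - 1) / (2 * k)
M = np.array([[c[i - j] if i >= j else 0.0 for j in range(n)] for i in range(n)])
assert np.allclose(M @ M, A)

print(per_query_error(A, np.eye(n)))    # C = I: noise accumulates across the prefix sums
print(per_query_error(np.eye(n), A))    # B = I: sensitivity grows like sqrt(n)
print(per_query_error(M, M))            # square-root factorization: lower error than either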
Improved differential privacy for SGD via optimal private linear operators on adaptive streams
Motivated by recent applications requiring differential privacy in the setting of adaptive
streams, we investigate the question of optimal instantiations of the matrix mechanism in this …
Almost tight error bounds on differentially private continual counting
The first large-scale deployment of private federated learning uses differentially private
counting in the continual release model as a subroutine (Google AI blog titled “Federated …
Efficient and near-optimal noise generation for streaming differential privacy
In the task of differentially private (DP) continual counting, we receive a stream of increments
and our goal is to output an approximate running total of these increments, without revealing …
Multi-epoch matrix factorization mechanisms for private machine learning
We introduce new differentially private (DP) mechanisms for gradient-based machine
learning (ML) with multiple passes (epochs) over a dataset, substantially improving the …
Correlated noise provably beats independent noise for differentially private learning
Differentially private learning algorithms inject noise into the learning process. While the
most common private learning algorithm, DP-SGD, adds independent Gaussian noise in …
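Below is a rough NumPy sketch of that contrast, assuming the gradients are already clipped to L2 norm at most 1: the prefix sums of the gradients are privatized with the square-root factorization of the prefix-sum workload, so the effective per-step noise M z is correlated across iterations, unlike DP-SGD's independent noise. This illustrates the general correlated-noise idea, not the paper's specific mechanism.

import numpy as np

def correlated_noise_prefix_sums(grads, sigma, rng):
    # Privatize the running sums of the (pre-clipped) gradients with the matrix
    # mechanism A = M M, where A is the prefix-sum workload and M its square-root
    # factor: release A g + M z instead of adding fresh noise to every step.
    T, d = grads.shape
    c = np.ones(T)
    for k in range(1, T):
        c[k] = c[k - 1] * (2 * k - 1) / (2 * k)
    M = np.array([[c[i - j] if i >= j else 0.0 for j in range(T)] for i in range(T)])
    sens = np.linalg.norm(c)                        # largest column norm of the factor M
    z = rng.normal(0.0, sigma * sens, size=(T, d))  # one fresh noise vector per step
    return np.cumsum(grads, axis=0) + M @ z         # noisy prefix sums of the gradients

# SGD with learning rate lr then follows x_t = x_0 - lr * (noisy prefix sum at step t).
rng = np.random.default_rng(0)
grads = 0.1 * rng.normal(size=(16, 4))
noisy_sums = correlated_noise_prefix_sums(grads, sigma=1.0, rng=rng)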
A smooth binary mechanism for efficient private continual observation
In privacy under continual observation we study how to release differentially private
estimates based on a dataset that evolves over time. The problem of releasing private prefix …
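For reference, the classic binary (tree) mechanism that this construction smooths can be sketched as follows: every dyadic block of the stream receives one noisy partial sum, and each prefix sum is assembled from the noisy blocks given by the binary representation of t. This is an offline sketch of the textbook mechanism, not the paper's smooth variant.

import numpy as np

def binary_mechanism_prefix_sums(x, sigma, rng):
    # Add Gaussian noise once to the sum of every dyadic block of the stream;
    # each stream element then influences only O(log T) released values, and each
    # prefix sum aggregates at most O(log T) independent noise terms.
    T = len(x)
    levels = int(np.ceil(np.log2(max(T, 2)))) + 1
    noisy = {}
    for l in range(levels):
        w = 2 ** l
        for j in range((T + w - 1) // w):
            noisy[(l, j)] = x[j * w:(j + 1) * w].sum() + rng.normal(0.0, sigma)
    out = np.zeros(T)
    for t in range(1, T + 1):
        total, pos, l = 0.0, 0, levels - 1
        while pos < t:                    # cover [0, t) greedily with maximal dyadic blocks
            w = 2 ** l
            if w <= t - pos and pos % w == 0:
                total += noisy[(l, pos // w)]
                pos += w
            else:
                l -= 1
        out[t - 1] = total
    return out

rng = np.random.default_rng(0)
stream = np.ones(20)
print(binary_mechanism_prefix_sums(stream, sigma=1.0, rng=rng))   # approximates 1, 2, ..., 20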