How to DP-fy ML: A practical guide to machine learning with differential privacy

N Ponomareva, H Hazimeh, A Kurakin, Z Xu… - Journal of Artificial …, 2023 - jair.org
Machine Learning (ML) models are ubiquitous in real-world applications and are a
constant focus of research. Modern ML models have become more complex, deeper, and …
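The core recipe such guides build on is DP-SGD: bound each example's influence by clipping its gradient, then add Gaussian noise calibrated to that bound. Below is a minimal NumPy sketch of that standard formulation; the function, the toy quadratic loss, and all hyperparameter names are illustrative, not the paper's code.

import numpy as np

def dp_sgd_step(w, per_example_grads, lr=0.1, clip_norm=1.0, noise_mult=1.0, rng=None):
    # Hypothetical helper: one DP-SGD step on a batch of per-example gradients.
    rng = rng or np.random.default_rng(0)
    # Clip each per-example gradient to L2 norm at most clip_norm.
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    mean_grad = np.mean(clipped, axis=0)
    # Gaussian noise scaled to the clipping bound and batch size.
    sigma = noise_mult * clip_norm / len(clipped)
    return w - lr * (mean_grad + rng.normal(0.0, sigma, size=w.shape))

# Toy usage: per-example gradients of 0.5 * ||w - x_i||^2.
rng = np.random.default_rng(1)
w, xs = np.zeros(3), rng.normal(size=(8, 3))
w = dp_sgd_step(w, [w - x for x in xs], rng=rng)

Calibrating noise_mult to a target privacy budget is the accounting step, which this sketch leaves out.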

Advances and open problems in federated learning

P Kairouz, HB McMahan, B Avent… - … and trends® in …, 2021 - nowpublishers.com
Federated learning (FL) is a machine learning setting where many clients (e.g., mobile
devices or whole organizations) collaboratively train a model under the orchestration of a …
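The orchestration pattern described here is easiest to see in FedAvg form: the server broadcasts the model, each client runs a few local SGD steps on its own data, and the server averages the returned models weighted by local dataset size. A sketch assuming least-squares client objectives and full participation; all names are illustrative.

import numpy as np

def local_sgd(w, X, y, lr=0.1, epochs=5):
    # Client-side update: plain SGD on a local least-squares objective.
    for _ in range(epochs):
        w = w - lr * X.T @ (X @ w - y) / len(y)
    return w

def federated_averaging(w, clients, rounds=10):
    # Server loop: broadcast w, collect client models, average them
    # weighted by local dataset size. Raw data never leaves a client.
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    for _ in range(rounds):
        updates = [local_sgd(w.copy(), X, y) for X, y in clients]
        w = sum(s * u for s, u in zip(sizes, updates)) / sizes.sum()
    return w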

On the convergence of Adam and beyond

SJ Reddi, S Kale, S Kumar - arXiv preprint arXiv:1904.09237, 2019 - arxiv.org
Several recently proposed stochastic optimization methods that have been successfully
used in training deep networks, such as RMSProp, Adam, Adadelta, and Nadam, are based on …
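The fix this paper proposes, AMSGrad, is a one-line change to Adam: keep a running maximum of the second-moment estimate and divide by that, so the effective per-coordinate step size can never increase between iterations. A simplified sketch without bias correction, matching the variant the paper analyzes; names are illustrative.

import numpy as np

def amsgrad(grad_fn, w, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8, steps=1000):
    m = np.zeros_like(w); v = np.zeros_like(w); v_hat = np.zeros_like(w)
    for _ in range(steps):
        g = grad_fn(w)
        m = beta1 * m + (1 - beta1) * g          # first moment, as in Adam
        v = beta2 * v + (1 - beta2) * g * g      # second moment, as in Adam
        v_hat = np.maximum(v_hat, v)             # the key change vs. Adam
        w = w - lr * m / (np.sqrt(v_hat) + eps)
    return w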

Adaptive federated optimization

S Reddi, Z Charles, M Zaheer, Z Garrett, K Rush… - arXiv preprint arXiv …, 2020 - arxiv.org
Federated learning is a distributed machine learning paradigm in which a large number of
clients coordinate with a central server to learn a model without sharing their own training …
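The central idea is to treat the averaged client update as a pseudo-gradient and run an adaptive optimizer on the server; with Adam this gives the FedAdam variant. A sketch of one server round, assuming the client deltas (client model minus broadcast model) have already been computed; names are illustrative.

import numpy as np

def server_adam_round(w, client_deltas, m, v, lr=0.01, beta1=0.9, beta2=0.99, tau=1e-3):
    # The averaged client delta plays the role of a (negative) gradient.
    delta = np.mean(client_deltas, axis=0)
    m = beta1 * m + (1 - beta1) * delta
    v = beta2 * v + (1 - beta2) * delta * delta
    # tau is the adaptivity floor the paper tunes in place of Adam's eps.
    w = w + lr * m / (np.sqrt(v) + tau)
    return w, m, v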

Adabelief optimizer: Adapting stepsizes by the belief in observed gradients

J Zhuang, T Tang, Y Ding… - Advances in neural …, 2020 - proceedings.neurips.cc
Most popular optimizers for deep learning can be broadly categorized as adaptive methods
(e.g., Adam) and accelerated schemes (e.g., stochastic gradient descent (SGD) with …
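AdaBelief's change to Adam is similarly small: the second-moment EMA tracks the squared deviation of the gradient from its first-moment prediction, (g - m)^2, rather than g^2, so steps grow when observed gradients match the "belief" and shrink when they do not. A single-step sketch omitting the small epsilon the paper adds inside the EMA; names are illustrative.

import numpy as np

def adabelief_step(w, g, m, s, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    m = beta1 * m + (1 - beta1) * g
    s = beta2 * s + (1 - beta2) * (g - m) ** 2   # deviation from the EMA prediction
    m_hat = m / (1 - beta1 ** t)                 # bias corrections, as in Adam
    s_hat = s / (1 - beta2 ** t)
    w = w - lr * m_hat / (np.sqrt(s_hat) + eps)
    return w, m, s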

Decentralised learning in federated deployment environments: A system-level survey

P Bellavista, L Foschini, A Mora - ACM Computing Surveys (CSUR), 2021 - dl.acm.org
Decentralised learning is attracting more and more interest because it embodies the
principles of data minimisation and focused data collection, while favouring the transparency …

Introduction to online convex optimization

E Hazan - Foundations and Trends® in Optimization, 2016 - nowpublishers.com
This monograph portrays optimization as a process. In many practical applications the
environment is so complex that it is infeasible to lay out a comprehensive theoretical model …
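The monograph's basic template is online gradient descent: at each round play a point, observe the loss, take a gradient step, and project back onto the feasible set; with a step size on the order of 1/sqrt(t) this achieves O(sqrt(T)) regret against the best fixed point in hindsight. A sketch over a Euclidean ball, with simplified step sizes and illustrative names.

import numpy as np

def online_gradient_descent(round_grads, x0, radius=1.0):
    # round_grads: one gradient oracle per round, revealed after we commit to x.
    x, plays = x0, []
    for t, grad in enumerate(round_grads, start=1):
        plays.append(x.copy())
        x = x - grad(x) / np.sqrt(t)     # eta_t ~ 1/sqrt(t)
        n = np.linalg.norm(x)
        if n > radius:                   # Euclidean projection onto the ball
            x = x * (radius / n)
    return plays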

Adaptive subgradient methods for online learning and stochastic optimization

J Duchi, E Hazan, Y Singer - Journal of machine learning research, 2011 - jmlr.org
We present a new family of subgradient methods that dynamically incorporate knowledge of
the geometry of the data observed in earlier iterations to perform more informative gradient …
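In its common diagonal form, the method (AdaGrad) accumulates squared gradients per coordinate and divides each step by their square root, so coordinates that have seen little signal keep large steps while frequently updated ones are damped. A minimal sketch with illustrative names.

import numpy as np

def adagrad(grad_fn, w, lr=0.5, eps=1e-8, steps=1000):
    G = np.zeros_like(w)                     # running sum of squared gradients
    for _ in range(steps):
        g = grad_fn(w)
        G += g * g
        w = w - lr * g / (np.sqrt(G) + eps)  # per-coordinate step sizes
    return w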

Adaptive gradient methods with dynamic bound of learning rate

L Luo, Y **ong, Y Liu, X Sun - arxiv preprint arxiv:1902.09843, 2019 - arxiv.org
Adaptive optimization methods such as AdaGrad, RMSprop and Adam have been proposed
to achieve a rapid training process with an element-wise scaling term on learning rates …
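The paper's method, AdaBound, clips Adam's element-wise step size into a band that starts wide and tightens toward a constant final learning rate, so training behaves like Adam early and like SGD late. A single-step sketch using the paper's bound schedules but omitting bias correction and its 1/sqrt(t) decay; names are illustrative.

import numpy as np

def adabound_step(w, g, m, v, t, lr=1e-3, final_lr=0.1, beta1=0.9, beta2=0.999, eps=1e-8):
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g * g
    # Bounds converge to final_lr as t grows, squeezing out the adaptivity.
    lower = final_lr * (1 - 1 / ((1 - beta2) * t + 1))
    upper = final_lr * (1 + 1 / ((1 - beta2) * t))
    step = np.clip(lr / (np.sqrt(v) + eps), lower, upper)
    return w - step * m, m, v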

The marginal value of adaptive gradient methods in machine learning

AC Wilson, R Roelofs, M Stern… - Advances in neural …, 2017 - proceedings.neurips.cc
Adaptive optimization methods, which perform local optimization with a metric constructed
from the history of iterates, are becoming increasingly popular for training deep neural …