Making AI forget you: Data deletion in machine learning

A Ginart, M Guan, G Valiant… - Advances in neural …, 2019 - proceedings.neurips.cc
Intense recent discussions have focused on how to provide individuals with control over
when their data can and cannot be used---the EU's Right To Be Forgotten regulation is an …

A closer look at smoothness in domain adversarial training

H Rangwani, SK Aithal, M Mishra… - International …, 2022 - proceedings.mlr.press
Domain adversarial training has been ubiquitous for achieving invariant
representations and is used widely for various domain adaptation tasks. In recent times …

Lower bounds for non-convex stochastic optimization

Y Arjevani, Y Carmon, JC Duchi, DJ Foster… - Mathematical …, 2023 - Springer
We lower bound the complexity of finding ϵ-stationary points (with gradient norm at most ϵ)
using stochastic first-order methods. In a well-studied model where algorithms access …
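
For reference, the notion of stationarity used in this lower-bound literature is the standard one (stated generically here, not quoted from the paper): a point x is ϵ-stationary when its gradient is small.

```latex
% x is an epsilon-stationary point of a differentiable f if
\[
  \|\nabla f(x)\| \le \epsilon .
\]
% The lower bounds count how many stochastic first-order (gradient) queries
% any algorithm needs before it can guarantee finding such a point.
```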

SPIDER: Near-optimal non-convex optimization via stochastic path-integrated differential estimator

C Fang, CJ Li, Z Lin, T Zhang - Advances in neural …, 2018 - proceedings.neurips.cc
In this paper, we propose a new technique named Stochastic Path-Integrated
Differential EstimatoR (SPIDER), which can be used to track many deterministic quantities of …
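
A minimal Python sketch of the path-integrated (recursive) estimator idea: the helper names (grad_fn, snapshot_every, n_samples) and the full-gradient reset schedule are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def spider_estimates(grad_fn, xs, snapshot_every=10, n_samples=1000, seed=0):
    """Track grad f along the iterates xs with a SPIDER-style recursive estimator.

    grad_fn(x, i) should return the stochastic gradient of f at x on sample i
    (both grad_fn and the iterate sequence xs are assumptions of this sketch).
    Every `snapshot_every` steps the estimate is reset with a large batch; in
    between it is updated with single-sample gradient differences, which is
    what keeps the estimator's variance small.
    """
    rng = np.random.default_rng(seed)
    v, estimates = None, []
    for k, x in enumerate(xs):
        if k % snapshot_every == 0:
            # reset: (near-)full-batch gradient at the snapshot point
            v = np.mean([grad_fn(x, i) for i in range(n_samples)], axis=0)
        else:
            i = rng.integers(n_samples)
            # recursive update: v_k = grad(x_k; i) - grad(x_{k-1}; i) + v_{k-1}
            v = grad_fn(x, i) - grad_fn(xs[k - 1], i) + v
        estimates.append(v)
    return estimates
```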

FedPD: A federated learning framework with adaptivity to non-IID data

X Zhang, M Hong, S Dhople, W Yin… - IEEE Transactions on …, 2021 - ieeexplore.ieee.org
Federated Learning (FL) is popular for communication-efficient learning from distributed
data. To utilize data at different clients without moving them to the cloud, algorithms such as …

Solving a class of non-convex min-max games using iterative first order methods

M Nouiehed, M Sanjabi, T Huang… - Advances in …, 2019 - proceedings.neurips.cc
Recent applications that arise in machine learning have spurred significant interest in solving
min-max saddle point games. This problem has been extensively studied in the convex …
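
For context, the common problem template behind these minimax papers, written generically (the structural assumptions in the comment vary by paper and are not quoted from any of them):

```latex
\[
  \min_{x \in \mathcal{X}} \; \max_{y \in \mathcal{Y}} \; f(x, y)
\]
% f is smooth; individual papers add structure, e.g. non-convexity in x with
% concavity (or a PL condition) in y, or strong convexity/concavity in x and y.
```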

Near-optimal algorithms for minimax optimization

T Lin, C Jin, MI Jordan - Conference on learning theory, 2020 - proceedings.mlr.press
This paper resolves a longstanding open question pertaining to the design of near-optimal
first-order algorithms for smooth and strongly-convex-strongly-concave minimax problems …

Minibatch vs Local SGD for heterogeneous distributed learning

BE Woodworth, KK Patel… - Advances in Neural …, 2020 - proceedings.neurips.cc
We analyze Local SGD (aka parallel or federated SGD) and Minibatch SGD in the
heterogeneous distributed setting, where each machine has access to stochastic gradient …
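
A schematic comparison of the two update patterns, assuming each machine exposes a stochastic-gradient oracle; the function names, step size, and local_steps parameter are illustrative, and this toy code ignores the communication and heterogeneity accounting that the paper's analysis is actually about.

```python
import numpy as np

def minibatch_sgd_round(x, machine_grads, lr=0.1):
    """One communication round of Minibatch SGD: every machine reports one
    stochastic gradient at the shared iterate; a single averaged step is taken."""
    g = np.mean([grad(x) for grad in machine_grads], axis=0)
    return x - lr * g

def local_sgd_round(x, machine_grads, lr=0.1, local_steps=5):
    """One communication round of Local SGD: every machine runs `local_steps`
    SGD steps on its own (possibly heterogeneous) data, then iterates are averaged."""
    finals = []
    for grad in machine_grads:
        x_m = np.array(x, copy=True)
        for _ in range(local_steps):
            x_m = x_m - lr * grad(x_m)
        finals.append(x_m)
    return np.mean(finals, axis=0)
```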

Why are adaptive methods good for attention models?

J Zhang, SP Karimireddy, A Veit… - Advances in …, 2020 - proceedings.neurips.cc
While stochastic gradient descent (SGD) is still the de facto algorithm in deep learning,
adaptive methods like Clipped SGD/Adam have been observed to outperform SGD across …
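
For concreteness, a minimal sketch of the clipped-SGD update referred to above (the step size and clipping threshold are arbitrary placeholders, not values from the paper):

```python
import numpy as np

def clipped_sgd_step(x, grad, lr=0.1, clip_threshold=1.0):
    """One SGD step with global-norm gradient clipping: the gradient is rescaled
    so its norm never exceeds clip_threshold before the usual update is applied."""
    scale = min(1.0, clip_threshold / (np.linalg.norm(grad) + 1e-12))
    return x - lr * scale * grad
```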

Convex and non-convex optimization under generalized smoothness

H Li, J Qian, Y Tian, A Rakhlin… - Advances in Neural …, 2023 - proceedings.neurips.cc
Classical analysis of convex and non-convex optimization methods often requires the
Lipschitz continuity of the gradient, which limits the analysis to functions bounded by …
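
The standard assumption being relaxed, and one commonly studied relaxation in this line of work ((L0, L1)-smoothness), written out for context; the exact generalized condition analyzed in the paper is broader than this sketch.

```latex
% Standard smoothness: the gradient of f is L-Lipschitz, equivalently
\[
  \|\nabla^2 f(x)\| \le L .
\]
% A commonly used relaxation, (L_0, L_1)-smoothness, lets the local smoothness
% grow with the gradient norm:
\[
  \|\nabla^2 f(x)\| \le L_0 + L_1 \|\nabla f(x)\| .
\]
```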