Free-rider attacks on model aggregation in federated learning

Y Fraboni, R Vidal, M Lorenzi - International Conference on …, 2021 - proceedings.mlr.press
Free-rider attacks against federated learning consist in dissimulating participation in the
federated learning process with the goal of obtaining the final aggregated model without …
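The mechanism the abstract describes can be sketched in a few lines: a free-rider joins the aggregation round but returns the received model untouched, contributing no computation or data while still obtaining the final model. This is a minimal illustration assuming plain federated averaging on a least-squares problem; all names (`fedavg_round`, `make_honest_client`, `free_rider`) are illustrative, not the paper's code.

```python
import numpy as np

def fedavg_round(global_model, clients):
    """One FedAvg round: average the parameter vectors the clients return."""
    updates = [client(global_model) for client in clients]
    return np.mean(updates, axis=0)

def make_honest_client(X, y, lr=0.01):
    """Honest client: one local gradient step on its least-squares loss."""
    def update(model):
        return model - lr * X.T @ (X @ model - y)
    return update

def free_rider(model):
    """Free-rider: dissimulates participation by returning the received
    global model unchanged, contributing no local computation or data."""
    return model

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
w_true = np.ones(3)
y = X @ w_true
clients = [make_honest_client(X, y), make_honest_client(X, y), free_rider]

w = np.zeros(3)
for _ in range(300):
    w = fedavg_round(w, clients)
# Averaging still converges, but the free-rider dilutes the effective step
# size while receiving the final aggregated model for free.
```

In this toy setup the free-rider is invisible to the server: its "update" is a valid parameter vector, which is why such attacks are hard to detect from the aggregate alone.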

An SDE for modeling SAM: Theory and insights

EM Compagnoni, L Biggio, A Orvieto… - International …, 2023 - proceedings.mlr.press
We study the SAM (Sharpness-Aware Minimization) optimizer, which has recently attracted a
lot of interest due to its increased performance over more classical variants of stochastic …
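For context on the optimizer the abstract studies, the SAM update is a two-step rule: first ascend to the first-order worst-case perturbation within a ball of radius `rho`, then take the descent step using the gradient at that perturbed point. A minimal sketch on a toy quadratic, assuming the standard SAM formulation (the function names are illustrative):

```python
import numpy as np

def sam_step(w, grad_fn, lr=0.05, rho=0.05):
    """One SAM update on parameters w:

    1. ascend to the first-order worst case within the rho-ball:
           e_hat = rho * g / ||g||,  with  g = grad f(w)
    2. descend using the gradient taken at the perturbed point:
           w <- w - lr * grad f(w + e_hat)
    """
    g = grad_fn(w)
    e_hat = rho * g / (np.linalg.norm(g) + 1e-12)
    return w - lr * grad_fn(w + e_hat)

# Toy quadratic f(w) = 0.5 * w^T A w.  SAM hovers within ~rho of the
# minimizer because the perturbation keeps a fixed norm even when the
# gradient vanishes -- one reason continuous-time (SDE) models of SAM
# behave differently from plain gradient flow near minima.
A = np.diag([1.0, 10.0])
w = np.array([1.0, 1.0])
for _ in range(300):
    w = sam_step(w, lambda v: A @ v)
```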

On the noisy gradient descent that generalizes as SGD

J Wu, W Hu, H …

R Maulen-Soto, J Fadili, H Attouch, P Ochs - arXiv preprint arXiv …, 2024 - arxiv.org
Our approach is part of the close link between continuous dissipative dynamical systems
and optimization algorithms. We aim to solve convex minimization problems by means of …
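One classical instance of the link between dissipative dynamical systems and optimization that the abstract invokes is the heavy-ball (inertial) ODE, whose explicit discretization recovers gradient descent with momentum. A sketch under that assumption (this is a generic illustration, not the paper's specific dynamics):

```python
import numpy as np

def heavy_ball_flow(grad_fn, w0, gamma=3.0, h=0.05, steps=2000):
    """Semi-implicit Euler discretization of the inertial (heavy-ball) ODE

        w''(t) + gamma * w'(t) + grad f(w(t)) = 0,

    a dissipative dynamical system: the damping term gamma * w' drains
    energy, so trajectories settle at minimizers of a convex f.  The
    discretization is gradient descent with momentum.
    """
    w = w0.astype(float).copy()
    v = np.zeros_like(w)                      # velocity v = w'
    for _ in range(steps):
        v += h * (-gamma * v - grad_fn(w))    # damping + driving force
        w += h * v                            # position follows velocity
    return w

A = np.diag([1.0, 5.0])                       # convex quadratic f = 0.5 w^T A w
w = heavy_ball_flow(lambda w: A @ w, np.array([2.0, -1.0]))
```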

Convergence rates and approximation results for SGD and its continuous-time counterpart

X Fontaine, V De Bortoli… - Conference on Learning …, 2021 - proceedings.mlr.press
This paper proposes a thorough theoretical analysis of Stochastic Gradient Descent (SGD)
with non-increasing step sizes. First, we show that the recursion defining SGD can be …
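The setting the abstract analyzes, SGD with non-increasing step sizes, can be sketched on a strongly convex toy problem. The schedule gamma_k = c / sqrt(k + 1) below is one common choice of non-increasing steps, assumed here for illustration; it is not necessarily the schedule the paper analyzes.

```python
import numpy as np

def sgd(grad_sample, w0, steps=5000, c=1.0, seed=0):
    """SGD recursion with the non-increasing step sizes gamma_k = c / sqrt(k + 1):

        w_{k+1} = w_k - gamma_k * g_k,   E[g_k] = grad f(w_k).
    """
    rng = np.random.default_rng(seed)
    w = w0.astype(float).copy()
    for k in range(steps):
        gamma = c / np.sqrt(k + 1)
        w -= gamma * grad_sample(w, rng)
    return w

# Noisy gradients of the strongly convex f(w) = 0.5 * ||w||^2:
# the true gradient w plus zero-mean Gaussian noise.
grad_sample = lambda w, rng: w + rng.normal(0.0, 0.1, size=w.shape)
w = sgd(grad_sample, np.array([5.0, -3.0]))
```

Because the steps shrink, the noise-driven fluctuations around the minimizer decay as well, which is the regime in which such recursions track their continuous-time counterpart.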

Shadowing properties of optimization algorithms

A Orvieto, A Lucchi - Advances in neural information …, 2019 - proceedings.neurips.cc
Ordinary differential equation (ODE) models of gradient-based optimization methods can
provide insights into the dynamics of learning and inspire the design of new algorithms …
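The ODE model the abstract refers to is, in the simplest case, the gradient flow w'(t) = -grad f(w(t)), which gradient descent discretizes with step h. On a quadratic the flow has a closed form, so one can measure directly how far the GD iterates stray from the continuous trajectory; the gap staying O(h) is a minimal instance of the shadowing-type behavior the paper studies (this toy comparison is an illustration, not the paper's construction):

```python
import numpy as np

# Gradient descent on f(w) = 0.5 * lam * w^2 versus its ODE model, the
# gradient flow w'(t) = -lam * w(t), whose exact solution is w0 * exp(-lam*t).
lam, h, steps, w0 = 1.0, 0.01, 500, 1.0

gd = np.empty(steps + 1)
gd[0] = w0
for k in range(steps):
    gd[k + 1] = gd[k] - h * lam * gd[k]   # GD iterate: (1 - h*lam)^k * w0

t = h * np.arange(steps + 1)              # GD step k corresponds to time k*h
flow = w0 * np.exp(-lam * t)              # continuous-time trajectory

gap = np.max(np.abs(gd - flow))           # discretization gap, O(h) on [0, T]
```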