Small data challenges for intelligent prognostics and health management: a review
Prognostics and health management (PHM) is critical for enhancing equipment reliability
and reducing maintenance costs, and research on intelligent PHM has made significant …
Convergence analysis of sequential federated learning on heterogeneous data
Y Li, X Lyu - Advances in Neural Information Processing …, 2024 - proceedings.neurips.cc
There are two categories of methods in Federated Learning (FL) for joint training across
multiple clients: i) parallel FL (PFL), where clients train models in a parallel manner; and ii) …
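The snippet distinguishes two orchestration patterns: parallel FL, where every client starts each round from the same global model and the server averages the results, and sequential FL, where the model is handed from one client to the next. Below is a minimal NumPy sketch of the two patterns on toy quadratic client objectives; the objectives, step size, and function names are illustrative assumptions, not the paper's setting.

```python
import numpy as np

# Toy heterogeneous clients: each minimizes f_i(x) = 0.5 * ||x - b_i||^2,
# so the local gradient is simply (x - b_i). Purely illustrative.
rng = np.random.default_rng(0)
targets = [rng.normal(size=5) for _ in range(4)]   # one optimum b_i per client
grad = lambda x, b: x - b
lr, local_steps = 0.1, 5

def local_train(x, b):
    """Run a few local gradient steps from the received model x."""
    x = x.copy()
    for _ in range(local_steps):
        x -= lr * grad(x, b)
    return x

def parallel_round(x):
    """PFL / FedAvg-style: every client starts from the same global model,
    trains locally in parallel, and the server averages the results."""
    return np.mean([local_train(x, b) for b in targets], axis=0)

def sequential_round(x):
    """SFL-style: the model is passed from client to client in sequence,
    each client continuing from the previous client's output."""
    for b in targets:
        x = local_train(x, b)
    return x

x_pfl = x_sfl = np.zeros(5)
for _ in range(20):
    x_pfl = parallel_round(x_pfl)
    x_sfl = sequential_round(x_sfl)
print("parallel FL model  :", np.round(x_pfl, 3))
print("sequential FL model:", np.round(x_sfl, 3))
```

Even on this toy problem the two schedules settle at different points: the parallel average converges to the mean of the client optima, while the sequential pass is pulled toward the clients visited last, which is one reason their convergence under heterogeneous data is analyzed separately.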
Federated learning with client subsampling, data heterogeneity, and unbounded smoothness: A new algorithm and lower bounds
We study the problem of Federated Learning (FL) under client subsampling and data
heterogeneity with an objective function that has potentially unbounded smoothness. This …
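Client subsampling means only a random subset of clients takes part in each round. Below is a minimal sketch of partial participation on top of FedAvg-style averaging, again on toy quadratic objectives; the cohort size, step sizes, and data are assumptions for illustration and say nothing about the unbounded-smoothness setting the paper analyzes.

```python
import numpy as np

rng = np.random.default_rng(1)
num_clients, dim = 10, 5
targets = rng.normal(size=(num_clients, dim))   # toy client optima b_i
grad = lambda x, b: x - b                        # gradient of 0.5 * ||x - b||^2
lr, local_steps, sample_size = 0.1, 3, 4         # only 4 of 10 clients per round

def subsampled_round(x):
    """Sample a subset of clients uniformly without replacement,
    let them train locally, and average only their models."""
    chosen = rng.choice(num_clients, size=sample_size, replace=False)
    updates = []
    for i in chosen:
        xi = x.copy()
        for _ in range(local_steps):
            xi -= lr * grad(xi, targets[i])
        updates.append(xi)
    return np.mean(updates, axis=0)

x = np.zeros(dim)
for _ in range(200):
    x = subsampled_round(x)
print("server model after 200 rounds:", np.round(x, 2))
print("average of client optima     :", np.round(targets.mean(axis=0), 2))
```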
FedNL: Making Newton-type methods applicable to federated learning
Inspired by recent work of Islamov et al. (2021), we propose a family of Federated Newton
Learn (FedNL) methods, which we believe is a marked step in the direction of making …
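FedNL is about making Newton-type updates practical by compressing the Hessian information clients send; the sketch below shows only the naive baseline such methods improve upon, in which every client ships its full local gradient and Hessian and the server takes a global Newton step. The data generation, ridge term, and all names are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
num_clients, n_local, dim = 5, 40, 3

def make_client(shift):
    """Toy logistic-regression data for one client (heterogeneous via a shift)."""
    A = rng.normal(size=(n_local, dim)) + shift
    w_true = rng.normal(size=dim)
    y = (A @ w_true + 0.1 * rng.normal(size=n_local) > 0).astype(float)
    return A, y

clients = [make_client(0.3 * i) for i in range(num_clients)]
lam = 0.1  # ridge term keeps local Hessians well conditioned

def local_grad_hess(w, A, y):
    """Gradient and Hessian of the regularized logistic loss on one client."""
    p = 1.0 / (1.0 + np.exp(-A @ w))
    g = A.T @ (p - y) / len(y) + lam * w
    H = A.T @ (A * (p * (1 - p))[:, None]) / len(y) + lam * np.eye(dim)
    return g, H

w = np.zeros(dim)
for it in range(10):
    stats = [local_grad_hess(w, A, y) for A, y in clients]
    g = np.mean([s[0] for s in stats], axis=0)   # server aggregates gradients
    H = np.mean([s[1] for s in stats], axis=0)   # ... and full Hessians
    w -= np.linalg.solve(H, g)                   # global Newton step
    print(f"iter {it}: ||grad|| = {np.linalg.norm(g):.2e}")
```

The per-round payload here is a full d x d Hessian from every client, which is exactly the communication cost that motivates compressed Newton-type schemes such as FedNL.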
FedNew: A communication-efficient and privacy-preserving Newton-type method for federated learning
Newton-type methods are popular in federated learning due to their fast convergence. Still,
they suffer from two main issues, namely: low communication efficiency and low privacy due …
Minibatch vs local SGD with shuffling: Tight convergence bounds and beyond
In distributed learning, local SGD (also known as federated averaging) and its simple
baseline minibatch SGD are widely studied optimization methods. Most existing analyses of …
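Minibatch SGD has all clients compute gradients at the same iterate every step, while local SGD lets each client take several steps on its own data between synchronizations; "with shuffling" refers to sampling by epoch-wise random permutations rather than with replacement. Below is a minimal sketch of both on a toy least-squares problem; the step size, synchronization interval, and data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
num_clients, n_local, dim = 4, 32, 5
A = [rng.normal(size=(n_local, dim)) for _ in range(num_clients)]
b = [a @ rng.normal(size=dim) for a in A]      # toy least-squares data per client
grad = lambda x, a, y: a * (a @ x - y)         # single-sample gradient
lr, sync_every = 0.01, 8                       # local SGD syncs every 8 steps

def epoch_minibatch(x):
    """Minibatch SGD with reshuffling: every step, each client contributes
    one freshly shuffled sample and the averaged gradient is applied."""
    perms = [rng.permutation(n_local) for _ in range(num_clients)]
    for t in range(n_local):
        g = np.mean([grad(x, A[i][perms[i][t]], b[i][perms[i][t]])
                     for i in range(num_clients)], axis=0)
        x = x - lr * g
    return x

def epoch_local(x):
    """Local SGD with reshuffling: each client runs through its own shuffled
    samples, and models are averaged only every `sync_every` steps."""
    xs = [x.copy() for _ in range(num_clients)]
    perms = [rng.permutation(n_local) for _ in range(num_clients)]
    for t in range(n_local):
        for i in range(num_clients):
            xs[i] -= lr * grad(xs[i], A[i][perms[i][t]], b[i][perms[i][t]])
        if (t + 1) % sync_every == 0:
            avg = np.mean(xs, axis=0)
            xs = [avg.copy() for _ in range(num_clients)]
    return np.mean(xs, axis=0)

x_mb = x_loc = np.zeros(dim)
for _ in range(30):
    x_mb, x_loc = epoch_minibatch(x_mb), epoch_local(x_loc)
print("minibatch SGD model:", np.round(x_mb, 2))
print("local SGD model    :", np.round(x_loc, 2))
```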
Federated optimization algorithms with random reshuffling and gradient compression
Gradient compression is a popular technique for improving communication complexity of
stochastic first-order methods in distributed training of machine learning models. However …
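Gradient compression reduces the number of bits each client uploads per step. Below is a minimal sketch combining an unbiased rand-k sparsifier with epoch-wise random reshuffling on a toy least-squares problem; the compressor, step size, and data are assumptions and are not necessarily the schemes analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
num_clients, n_local, dim, k = 4, 32, 20, 5    # each client sends only 5 of 20 coordinates
w_true = rng.normal(size=dim)
A = [rng.normal(size=(n_local, dim)) for _ in range(num_clients)]
b = [a @ w_true for a in A]                    # consistent least-squares data
lr = 0.01

def rand_k(v):
    """Unbiased rand-k sparsifier: keep k random coordinates, rescale by dim/k
    so that the compressed vector equals v in expectation."""
    out = np.zeros_like(v)
    idx = rng.choice(v.size, size=k, replace=False)
    out[idx] = v[idx] * (v.size / k)
    return out

x = np.zeros(dim)
for epoch in range(1, 201):
    perms = [rng.permutation(n_local) for _ in range(num_clients)]   # random reshuffling
    for t in range(n_local):
        msgs = []
        for i in range(num_clients):
            a, y = A[i][perms[i][t]], b[i][perms[i][t]]
            msgs.append(rand_k(a * (a @ x - y)))   # compress the sample gradient before "sending"
        x = x - lr * np.mean(msgs, axis=0)         # server averages the compressed gradients
    if epoch % 50 == 0:
        print(f"epoch {epoch:3d}: ||x - w_true|| = {np.linalg.norm(x - w_true):.3f}")
```

The dim/k rescaling keeps the compressor unbiased, which is the property typically required when pairing compression with convergence guarantees.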
Communication acceleration of local gradient methods via an accelerated primal-dual algorithm with an inexact prox
Inspired by a recent breakthrough of Mishchenko et al. [2022], who for the first time showed
that local gradient steps can lead to provable communication acceleration, we propose an …
Fedshuffle: Recipes for better use of local work in federated learning
The practice of applying several local updates before aggregation across clients has been
empirically shown to be a successful approach to overcoming the communication bottleneck …
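When clients perform different amounts of local work, plain model averaging overweights those that take more steps. Below is a minimal sketch of that effect on toy quadratic objectives, together with a generic fix that rescales each client's update by the inverse of its local step count; this particular reweighting is only an illustration of the issue, not the specific FedShuffle recipe.

```python
import numpy as np

rng = np.random.default_rng(6)
num_clients, dim = 5, 4
targets = rng.normal(size=(num_clients, dim))   # toy client optima b_i
local_steps = [1, 2, 4, 8, 16]                  # heterogeneous amounts of local work
lr = 0.1

def client_update(x, b, steps):
    """Return the raw model change after `steps` local gradient steps."""
    xi = x.copy()
    for _ in range(steps):
        xi -= lr * (xi - b)                     # gradient of 0.5 * ||x - b||^2
    return xi - x

def round_naive(x):
    """Plain averaging of local models: clients with more steps dominate."""
    return x + np.mean([client_update(x, targets[i], local_steps[i])
                        for i in range(num_clients)], axis=0)

def round_normalized(x):
    """Scale each update by 1/steps before averaging, so every client
    contributes a comparable per-step direction."""
    return x + np.mean([client_update(x, targets[i], local_steps[i]) / local_steps[i]
                        for i in range(num_clients)], axis=0)

x_naive = x_norm = np.zeros(dim)
for _ in range(300):
    x_naive, x_norm = round_naive(x_naive), round_normalized(x_norm)
print("plain averaging      :", np.round(x_naive, 2))
print("normalized averaging :", np.round(x_norm, 2))
print("mean of client optima:", np.round(targets.mean(axis=0), 2))
```

On this toy the plain average drifts toward the clients that take the most local steps, while the rescaled variant stays much closer to the uniform average of the client optima; how to weight and shuffle heterogeneous local work is the kind of question the paper's recipes address.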
Federated learning with regularized client participation
Federated Learning (FL) is a distributed machine learning approach where multiple clients
work together to solve a machine learning task. One of the key challenges in FL is the issue …
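The truncated sentence points at client participation: in practice only a few clients join each round, and which clients they are matters. Below is a minimal sketch of a regularized, round-robin participation schedule in which every client takes part exactly once per period, in contrast to the uniform random subsampling sketched earlier; the schedule, cohort size, and toy objectives are assumptions and need not match the participation pattern studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(5)
num_clients, dim, cohort = 8, 5, 2              # 2 clients participate per round
targets = rng.normal(size=(num_clients, dim))   # toy client optima b_i
lr, local_steps = 0.1, 3

def local_train(x, b):
    for _ in range(local_steps):
        x = x - lr * (x - b)                    # gradient of 0.5 * ||x - b||^2
    return x

def regularized_schedule():
    """Every client appears exactly once per period of num_clients/cohort rounds:
    shuffle the client list once, then walk through it in cohorts."""
    order = rng.permutation(num_clients)
    return [order[i:i + cohort] for i in range(0, num_clients, cohort)]

x = np.zeros(dim)
for period in range(100):
    for group in regularized_schedule():        # each period covers all clients once
        x = np.mean([local_train(x, targets[i]) for i in group], axis=0)
print("model after cyclic training:", np.round(x, 2))
print("mean of client optima      :", np.round(targets.mean(axis=0), 2))
```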