Small data challenges for intelligent prognostics and health management: a review

C Li, S Li, Y Feng, K Gryllias, F Gu, M Pecht - Artificial Intelligence Review, 2024 - Springer
Prognostics and health management (PHM) is critical for enhancing equipment reliability
and reducing maintenance costs, and research on intelligent PHM has made significant …

Convergence analysis of sequential federated learning on heterogeneous data

Y Li, X Lyu - Advances in Neural Information Processing …, 2024 - proceedings.neurips.cc
There are two categories of methods in Federated Learning (FL) for joint training across
multiple clients: i) parallel FL (PFL), where clients train models in a parallel manner; and ii) …
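
The PFL/SFL split the snippet sets up can be made concrete with a toy sketch (a minimal illustration on a least-squares objective; the function names and step counts are illustrative, not taken from the paper):

```python
import numpy as np

def local_steps(w, data, lr=0.1, steps=5):
    """A few gradient steps on one client's data (toy least-squares objective)."""
    X, y = data
    w = w.copy()
    for _ in range(steps):
        w -= lr * X.T @ (X @ w - y) / len(y)
    return w

def parallel_fl_round(w_global, clients):
    """Parallel FL (PFL): every client starts from the same global model,
    trains independently, and the server averages the results."""
    return np.mean([local_steps(w_global, d) for d in clients], axis=0)

def sequential_fl_round(w_global, clients):
    """Sequential FL (SFL): the model is handed from client to client and
    trained in turn; the last client's model becomes the new global model."""
    w = w_global.copy()
    for d in clients:
        w = local_steps(w, d)
    return w
```

Under heterogeneous (non-IID) client data the two schedules can behave very differently, which is the regime the convergence analysis addresses.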

Federated learning with client subsampling, data heterogeneity, and unbounded smoothness: A new algorithm and lower bounds

M Crawshaw, Y Bao, M Liu - Advances in Neural …, 2023 - proceedings.neurips.cc
We study the problem of Federated Learning (FL) under client subsampling and data
heterogeneity with an objective function that has potentially unbounded smoothness. This …
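
The "potentially unbounded smoothness" refers to objectives whose curvature is not controlled by a single Lipschitz constant. As general background (not quoted from the paper), this is commonly formalized via relaxed (L_0, L_1)-smoothness:

```latex
\[
  \|\nabla^2 f(x)\| \;\le\; L_0 + L_1\,\|\nabla f(x)\|,
\]
```

so the effective smoothness can grow with the gradient norm, and analyses that assume a globally L-smooth objective no longer apply directly.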

FedNL: Making Newton-type methods applicable to federated learning

M Safaryan, R Islamov, X Qian, P Richtárik - arXiv preprint arXiv …, 2021 - arxiv.org
Inspired by recent work of Islamov et al. (2021), we propose a family of Federated Newton
Learn (FedNL) methods, which we believe is a marked step in the direction of making …
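
As background for the Newton-type entries here, the centralized step that the federated variants approximate is (standard notation, with f_i the local objective of client i; this is not the paper's final update rule):

```latex
\[
  x^{k+1} \;=\; x^{k} \;-\;
  \Bigl(\tfrac{1}{n}\textstyle\sum_{i=1}^{n}\nabla^{2} f_i(x^{k})\Bigr)^{-1}
  \tfrac{1}{n}\textstyle\sum_{i=1}^{n}\nabla f_i(x^{k}).
\]
```

Communicating the d-by-d local Hessians is the bottleneck; roughly speaking, FedNL-style methods let clients maintain and incrementally update compressed Hessian estimates so that full Hessians never need to be transmitted.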

FedNew: A communication-efficient and privacy-preserving Newton-type method for federated learning

A Elgabli, CB Issaid, AS Bedi… - International …, 2022 - proceedings.mlr.press
Newton-type methods are popular in federated learning due to their fast convergence. Still,
they suffer from two main issues, namely: low communication efficiency and low privacy due …

Minibatch vs local SGD with shuffling: Tight convergence bounds and beyond

C Yun, S Rajput, S Sra - arXiv preprint arXiv:2110.10342, 2021 - arxiv.org
In distributed learning, local SGD (also known as federated averaging) and its simple
baseline minibatch SGD are widely studied optimization methods. Most existing analyses of …
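
The contrast between the two baselines, under a shared budget of one communication round and k stochastic gradients per client, can be sketched as follows (toy least-squares model; names and sampling details are illustrative, not the paper's exact setting):

```python
import numpy as np

rng = np.random.default_rng(0)

def local_sgd_round(w, clients, lr, k):
    """Local SGD / federated averaging: each client takes k sequential SGD
    steps on its own shuffled data, then the server averages the models."""
    outs = []
    for X, y in clients:
        v = w.copy()
        for i in rng.permutation(len(y))[:k]:      # without-replacement ("shuffled") sampling
            v -= lr * (X[i] @ v - y[i]) * X[i]
        outs.append(v)
    return np.mean(outs, axis=0)

def minibatch_sgd_round(w, clients, lr, k):
    """Minibatch SGD with the same budget: each client evaluates k gradients
    at the current iterate; the server averages them and takes one step."""
    grads = []
    for X, y in clients:
        idx = rng.permutation(len(y))[:k]
        grads.append(np.mean([(X[i] @ w - y[i]) * X[i] for i in idx], axis=0))
    return w - lr * np.mean(grads, axis=0)
```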

Federated optimization algorithms with random reshuffling and gradient compression

A Sadiev, G Malinovsky, E Gorbunov, I Sokolov… - arXiv preprint arXiv …, 2022 - arxiv.org
Gradient compression is a popular technique for improving communication complexity of
stochastic first-order methods in distributed training of machine learning models. However …
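
A minimal sketch of the two ingredients in the title, using an unbiased random-k sparsifier as the compressor and one sample per client per step drawn from a local permutation (the compressor choice and function names are illustrative, not the paper's algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_k(g, k):
    """Unbiased random-k sparsification: keep k random coordinates and rescale
    by d/k, so the compressed vector equals g in expectation."""
    out = np.zeros_like(g)
    idx = rng.choice(g.size, size=k, replace=False)
    out[idx] = g[idx] * (g.size / k)
    return out

def distributed_rr_step(w, clients, perms, t, lr, k):
    """One distributed step: each client takes the t-th sample from its own
    random permutation (random reshuffling), compresses its stochastic
    gradient, and the server averages the compressed messages and updates."""
    msgs = []
    for (X, y), perm in zip(clients, perms):
        i = perm[t % len(perm)]
        msgs.append(rand_k((X[i] @ w - y[i]) * X[i], k))
    return w - lr * np.mean(msgs, axis=0)
```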

Communication acceleration of local gradient methods via an accelerated primal-dual algorithm with an inexact prox

A Sadiev, D Kovalev… - Advances in Neural …, 2022 - proceedings.neurips.cc
Inspired by a recent breakthrough of Mishchenko et al. [2022], who for the first time showed
that local gradient steps can lead to provable communication acceleration, we propose an …

FedShuffle: Recipes for better use of local work in federated learning

S Horváth, M Sanjabi, L Xiao, P Richtárik… - arXiv preprint arXiv …, 2022 - arxiv.org
The practice of applying several local updates before aggregation across clients has been
empirically shown to be a successful approach to overcoming the communication bottleneck …
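
One well-known complication with the several-local-updates recipe is that clients rarely perform the same amount of local work; a hedged sketch of a step-count-normalized aggregation (in the spirit of such corrections, not the paper's exact rule) looks like this:

```python
import numpy as np

def aggregate_unequal_work(w_global, client_models, client_steps):
    """Aggregate local updates when clients took different numbers of local steps.
    Naively averaging the returned models over-weights clients that did more
    work; normalizing each model delta by its step count is one common fix."""
    deltas = [(w_i - w_global) / s_i for w_i, s_i in zip(client_models, client_steps)]
    return w_global + np.mean(client_steps) * np.mean(deltas, axis=0)
```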

Federated learning with regularized client participation

G Malinovsky, S Horváth, K Burlachenko… - arXiv preprint arXiv …, 2023 - arxiv.org
Federated Learning (FL) is a distributed machine learning approach where multiple clients
work together to solve a machine learning task. One of the key challenges in FL is the issue …