FedVARP: Tackling the variance due to partial client participation in federated learning
Data-heterogeneous federated learning (FL) systems suffer from two significant sources of
convergence error: 1) client drift error caused by performing multiple local optimization steps …
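To make the "variance due to partial client participation" concrete, here is a minimal sketch (not FedVARP itself): when the server averages updates from only a sampled subset of clients, the round's update deviates from the full-participation average. All names, shapes, and counts below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
num_clients, dim, sampled = 100, 5, 10
client_updates = rng.normal(size=(num_clients, dim))  # hypothetical per-round client updates

full_avg = client_updates.mean(axis=0)                 # ideal full-participation update
picked = rng.choice(num_clients, size=sampled, replace=False)
partial_avg = client_updates[picked].mean(axis=0)      # what the server actually aggregates
print(np.linalg.norm(partial_avg - full_avg))          # the participation variance FedVARP corrects
```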
EF21 with bells & whistles: Practical algorithmic extensions of modern error feedback
I Fatkhullin, I Sokolov, E Gorbunov, Z Li… - arXiv preprint, 2021 - arxiv.org
Variance-reduced clipping for non-convex optimization
A Reisizadeh, H Li, S Das, A Jadbabaie - arXiv preprint, 2023 - arxiv.org
Gradient clipping is a standard training technique used in deep learning applications such
as large-scale language modeling to mitigate exploding gradients. Recent experimental …
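The snippet above describes plain gradient clipping; a minimal global-norm clipping sketch follows. This is the standard technique the abstract refers to, not the paper's variance-reduced variant, and the helper name is ours.

```python
import numpy as np

def clip_by_global_norm(grads, max_norm):
    """Rescale a list of gradient arrays so their joint L2 norm is at most max_norm."""
    total_norm = np.sqrt(sum(float(np.sum(g ** 2)) for g in grads))
    scale = min(1.0, max_norm / (total_norm + 1e-12))  # no-op when already within budget
    return [g * scale for g in grads]
```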
Decentralized stochastic gradient descent ascent for finite-sum minimax problems
H Gao - arXiv preprint arXiv:2212.02724, 2022 - arxiv.org
Minimax optimization problems have attracted significant attention in recent years due to
their widespread application in numerous machine learning models. To solve the minimax …
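For readers new to minimax problems: the simplest method in this family is (centralized) gradient descent ascent, sketched below on a toy saddle objective; decentralized, variance-reduced algorithms such as Gao's build on this primitive. The objective and step size are illustrative.

```python
# Toy saddle objective f(x, y) = 0.5*x**2 + x*y - 0.5*y**2 (illustrative).
def grad_x(x, y):
    return x + y

def grad_y(x, y):
    return x - y

x, y, lr = 1.0, 1.0, 0.1
for _ in range(200):
    x, y = x - lr * grad_x(x, y), y + lr * grad_y(x, y)  # descend in x, ascend in y
print(x, y)  # approaches the saddle point (0, 0)
```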
CANITA: Faster rates for distributed convex optimization with communication compression
Due to the high communication cost in distributed and federated learning, methods relying
on compressed communication are becoming increasingly popular. Besides, the best …
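As an illustration of "compressed communication": a common unbiased compressor in this literature is rand-k sparsification, sketched below. CANITA is analyzed for a general class of compressors; this particular choice and the function name are ours.

```python
import numpy as np

def rand_k(x, k, rng):
    """Unbiased rand-k compressor: transmit k random coordinates, rescaled by d/k."""
    d = x.size
    idx = rng.choice(d, size=k, replace=False)
    out = np.zeros_like(x)
    out[idx] = x[idx] * (d / k)
    return out  # E[rand_k(x)] = x, so the compressed message is unbiased
```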
DASHA: Distributed nonconvex optimization with communication compression, optimal oracle complexity, and no client synchronization
We develop and analyze DASHA: a new family of methods for nonconvex distributed
optimization problems. When the local functions at the nodes have a finite-sum or an …
FedPAGE: A fast local stochastic gradient method for communication-efficient federated learning
Federated Averaging (FedAvg, also known as Local-SGD) (McMahan et al., 2017) is a
classical federated learning algorithm in which clients run multiple local SGD steps before …
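Since the snippet defines FedAvg by clients running multiple local SGD steps before averaging, a minimal single-round sketch on a least-squares objective may help; the shapes, step counts, and unweighted server average are illustrative simplifications.

```python
import numpy as np

def fedavg_round(global_w, client_data, local_steps=5, lr=0.01):
    """One FedAvg round: every client runs local gradient steps, then the server averages."""
    local_models = []
    for X, y in client_data:                       # client_data: list of (features, targets)
        w = global_w.copy()
        for _ in range(local_steps):
            grad = 2 * X.T @ (X @ w - y) / len(y)  # full-batch least-squares gradient
            w = w - lr * grad
        local_models.append(w)
    return np.mean(local_models, axis=0)           # unweighted server average
```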
Simple and optimal stochastic gradient methods for nonsmooth nonconvex optimization
We propose and analyze several stochastic gradient algorithms for finding stationary points
or local minima in nonconvex, possibly with a nonsmooth regularizer, finite-sum and online …
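To ground "nonsmooth regularizer": methods in this setting typically handle the regularizer through a proximal step. Below is a generic proximal stochastic gradient update with the L1 prox, illustrative rather than the paper's specific variance-reduced algorithms.

```python
import numpy as np

def soft_threshold(x, tau):
    """Prox operator of tau * ||.||_1: handles the nonsmooth L1 regularizer."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def prox_sgd_step(w, stoch_grad, lr, lam):
    """One step on f(w) + lam * ||w||_1: gradient step on f, prox step on the regularizer."""
    return soft_threshold(w - lr * stoch_grad, lr * lam)
```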
Jointly improving the sample and communication complexities in decentralized stochastic minimax optimization
We propose a novel single-loop decentralized algorithm, DGDA-VR, for solving the
stochastic nonconvex strongly-concave minimax problems over a connected network of …
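For the "connected network" setting: decentralized algorithms like DGDA-VR replace the server average with gossip mixing among neighbors. A minimal gossip-averaging sketch on a 4-node ring follows; the mixing weights are illustrative.

```python
import numpy as np

# Doubly stochastic mixing matrix for a 4-node ring (illustrative weights).
W = np.array([[0.5,  0.25, 0.0,  0.25],
              [0.25, 0.5,  0.25, 0.0 ],
              [0.0,  0.25, 0.5,  0.25],
              [0.25, 0.0,  0.25, 0.5 ]])
x = np.array([1.0, 2.0, 3.0, 4.0])  # one scalar parameter per node
for _ in range(50):
    x = W @ x                        # each node mixes only with its neighbors
print(x)  # every node approaches the network average 2.5
```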
DESTRESS: Computation-optimal and communication-efficient decentralized nonconvex finite-sum optimization
Emerging applications in multiagent environments such as internet-of-things, networked
sensing, autonomous systems, and federated learning, call for decentralized algorithms for …