Tighter theory for local SGD on identical and heterogeneous data
A Khaled, K Mishchenko… - … conference on artificial …, 2020 - proceedings.mlr.press
We provide a new analysis of local SGD, removing unnecessary assumptions and
elaborating on the difference between two data regimes: identical and heterogeneous. In …
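The local SGD scheme this paper analyzes is easy to state: each worker takes several SGD steps on its own data, and only then are the models averaged. A minimal sketch of that loop, with toy quadratic losses standing in for the local objectives (all names hypothetical, not the paper's code):

```python
# Minimal local SGD sketch: M workers each take K local SGD steps on
# their own objective, then the server averages the models, so
# communication happens once per round instead of once per step.
import numpy as np

rng = np.random.default_rng(0)
M, K, rounds, lr = 4, 10, 50, 0.1

# Heterogeneous regime: worker m has its own quadratic objective
# f_m(x) = 0.5 * (x - b_m)^2 with a different optimum b_m.
b = rng.normal(size=M)

x = 0.0  # shared model
for _ in range(rounds):
    local = []
    for m in range(M):
        xm = x  # each round starts from the averaged model
        for _ in range(K):
            grad = xm - b[m]   # local (full-batch) gradient
            xm -= lr * grad
        local.append(xm)
    x = np.mean(local)         # server averaging step

print(f"final model {x:.4f}, average optimum {b.mean():.4f}")
```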
FedPD: A federated learning framework with adaptivity to non-IID data
Federated Learning (FL) is popular for communication-efficient learning from distributed
data. To utilize data at different clients without moving them to the cloud, algorithms such as …
MARINA: Faster non-convex distributed learning with compression
We develop and analyze MARINA: a new communication efficient method for non-convex
distributed learning over heterogeneous datasets. MARINA employs a novel communication …
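The truncated snippet alludes to MARINA's communication mechanism: nodes mostly send compressed *differences* of successive gradients, with occasional full synchronization. A hedged sketch of that idea, using Rand-k sparsification as an illustrative compressor and toy quadratic losses (details differ from the paper):

```python
# Hedged sketch of compressed gradient differences, the general idea
# behind methods like MARINA (NOT the paper's exact algorithm).
import numpy as np

rng = np.random.default_rng(1)
d, n, k = 20, 5, 4                 # dimension, nodes, coords kept

def rand_k(v, k):
    """Unbiased Rand-k compressor: keep k random coordinates, rescale."""
    idx = rng.choice(v.size, size=k, replace=False)
    out = np.zeros_like(v)
    out[idx] = v[idx] * (v.size / k)
    return out

c = rng.normal(size=(n, d))        # node i's toy loss: 0.5 * ||x - c_i||^2
x = np.zeros(d)
lr, p, steps = 0.1, 0.2, 200

grads = x - c                      # exact local gradients, shape (n, d)
g = grads.mean(axis=0)             # server's running gradient estimate

for _ in range(steps):
    x = x - lr * g
    new_grads = x - c
    if rng.random() < p:           # rare expensive step: full gradients
        g = new_grads.mean(axis=0)
    else:                          # cheap step: compressed differences
        diffs = [rand_k(new_grads[i] - grads[i], k) for i in range(n)]
        g = g + np.mean(diffs, axis=0)
    grads = new_grads

print(f"distance to optimum: {np.linalg.norm(x - c.mean(axis=0)):.4f}")
```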
Federated learning under arbitrary communication patterns
D Avdiukhin… - … Conference on Machine …, 2021 - proceedings.mlr.press
Federated Learning is a distributed learning setting where the goal is to train a centralized
model with training data distributed over a large number of heterogeneous clients, each with …
STEM: A stochastic two-sided momentum algorithm achieving near-optimal sample and communication complexities for federated learning
Federated Learning (FL) refers to the paradigm where multiple worker nodes (WNs) build a
joint model by using local data. Despite extensive research, for a generic non-convex FL …
FedCluster: Boosting the convergence of federated learning via cluster-cycling
We develop FedCluster, a novel federated learning framework with improved optimization
efficiency, and investigate its theoretical convergence properties. FedCluster groups the …
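The cut-off sentence refers to FedCluster grouping devices into clusters that learn in a cyclic fashion. A hedged toy sketch of such cluster-cycling, with toy scalar losses (assumed scheme, not the paper's exact protocol):

```python
# Hedged cluster-cycling sketch: clients are partitioned into clusters,
# and within each learning round the clusters perform federated
# averaging one after another in a cycle.
import numpy as np

rng = np.random.default_rng(2)
n_clients, n_clusters, local_steps, lr, rounds = 12, 3, 5, 0.1, 30

targets = rng.normal(size=n_clients)   # client optima (toy losses)
clusters = np.array_split(rng.permutation(n_clients), n_clusters)

x = 0.0
for _ in range(rounds):
    for cluster in clusters:           # cycle through the clusters
        local = []
        for i in cluster:
            xi = x
            for _ in range(local_steps):
                xi -= lr * (xi - targets[i])   # local SGD step
            local.append(xi)
        x = np.mean(local)             # average within the cluster

print(f"final model {x:.4f}, global mean target {targets.mean():.4f}")
```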
Bias-variance reduced local SGD for less heterogeneous federated learning
Recently, local SGD has received much attention and has been extensively studied in the
distributed learning community as a way to overcome the communication bottleneck. However, the …
DASHA: Distributed nonconvex optimization with communication compression, optimal oracle complexity, and no client synchronization
A Tyurin, P Richtárik - arXiv preprint arXiv:2202.01268, 2022 - arxiv.org
We develop and analyze DASHA: a new family of methods for nonconvex distributed
optimization problems. When the local functions at the nodes have a finite-sum or an …
Local methods with adaptivity via scaling
S Chezhegov, S Skorik, N Khachaturov… - arXiv preprint arXiv …, 2024 - arxiv.org
The rapid development of machine learning and deep learning has introduced increasingly
complex optimization challenges that must be addressed. Indeed, training modern …
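The title points to adding adaptive, scaling-based steps inside local methods. A hedged toy sketch in that spirit, using an RMSProp-style second-moment scaling inside the local loop (illustrative only; the paper's actual scaling rule may differ):

```python
# Hedged sketch of local steps with adaptive scaling: each client
# rescales its local gradients by a running second-moment estimate
# before the server averages the models.
import numpy as np

rng = np.random.default_rng(4)
n, local_steps, rounds = 4, 20, 30
lr, beta, eps = 0.1, 0.9, 1e-8

targets = rng.normal(size=n)           # client optima (toy losses)
x = 0.0
for _ in range(rounds):
    local = []
    for i in range(n):
        xi, v = x, 0.0
        for _ in range(local_steps):
            g = xi - targets[i]
            v = beta * v + (1 - beta) * g * g   # second-moment estimate
            xi -= lr * g / (np.sqrt(v) + eps)   # adaptively scaled step
        local.append(xi)
    x = np.mean(local)                 # server averaging step

print(f"final model {x:.4f}, mean target {targets.mean():.4f}")
```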
Quantized FedPD (QFedPD): Beyond Conventional Wisdom–The Energy Benefits of Frequent Communication
Federated averaging (FedAvg) is a well-recognized framework for distributed learning that
efficiently manages communication. Several algorithms have emerged to enhance the …
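A hedged sketch of the quantized-update pattern this line of work builds on: clients quantize their model deltas before the server averages them. This is an illustrative FedAvg-style loop with an unbiased stochastic quantizer; QFedPD itself builds on FedPD's primal-dual updates, which this toy omits:

```python
# Hedged sketch of quantized client updates in a FedAvg-style loop
# (illustrative only, not the QFedPD algorithm).
import numpy as np

rng = np.random.default_rng(3)
n, d, local_steps, lr, rounds, levels = 6, 8, 5, 0.1, 40, 8

def quantize(v, levels):
    """Unbiased stochastic quantizer onto a uniform grid."""
    scale = np.max(np.abs(v)) + 1e-12
    u = v / scale * levels                        # map to [-levels, levels]
    low = np.floor(u)
    q = low + (rng.random(v.shape) < (u - low))   # stochastic rounding
    return q / levels * scale

targets = rng.normal(size=(n, d))                 # client optima (toy losses)
x = np.zeros(d)
for _ in range(rounds):
    deltas = []
    for i in range(n):
        xi = x.copy()
        for _ in range(local_steps):
            xi -= lr * (xi - targets[i])          # local SGD step
        deltas.append(quantize(xi - x, levels))   # send quantized update
    x += np.mean(deltas, axis=0)                  # server applies the average

print(f"distance to mean target: {np.linalg.norm(x - targets.mean(0)):.4f}")
```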