The limits and potentials of local SGD for distributed heterogeneous learning with intermittent communication
Local SGD is a popular optimization method in distributed learning, often outperforming mini-
batch SGD. Despite this practical success, proving the efficiency of local SGD has been …
Federated learning under periodic client participation and heterogeneous data: A new communication-efficient algorithm and analysis
In federated learning, it is common to assume that clients are always available to participate
in training, which may not be feasible with user devices in practice. Recent works analyze …
Federated online and bandit convex optimization
We study the problems of distributed online and bandit convex optimization against an
adaptive adversary. We aim to minimize the average regret on $ M $ machines working in …
DELTA: Diverse client sampling for fasting federated learning
Partial client participation has been widely adopted in Federated Learning (FL) to reduce the
communication burden efficiently. However, an inadequate client sampling scheme can lead …
A lightweight method for tackling unknown participation statistics in federated averaging
SPAM: Stochastic proximal point method with momentum variance reduction for non-convex cross-device federated learning
Cross-device training is a crucial subfield of federated learning, where the number of clients
can reach into the billions. Standard approaches and local methods are prone to issues …
FedBCGD: Communication-efficient accelerated block coordinate gradient descent for federated learning
Although Federated Learning has been widely studied in recent years, there are still high
overhead expenses in each communication round for large-scale models such as Vision …
On the still unreasonable effectiveness of federated averaging for heterogeneous distributed learning
Federated Averaging/local SGD is the most common optimization method for federated
learning that has proven effective in many real-world applications, dominating simple …