Queuing dynamics of asynchronous Federated Learning
We study asynchronous federated learning mechanisms with nodes having potentially
different computational speeds. In such an environment, each node is allowed to work on …
Decentralized optimization over slowly time-varying graphs: Algorithms and lower bounds
We consider a decentralized convex unconstrained optimization problem, where the cost
function can be decomposed into a sum of strongly convex and smooth functions …
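The snippet above concerns decentralized optimization over a communication graph. As a hedged illustration only (not the paper's algorithm, which handles time-varying graphs and lower bounds), a minimal sketch of the basic gossip/consensus step underlying such methods, with an assumed doubly stochastic mixing matrix `W`, might look like:

```python
def gossip_round(xs, W):
    """One consensus (gossip) step: each node replaces its value with a
    weighted average of its neighbors' values, using mixing matrix W.
    If W is doubly stochastic, the network average is preserved and
    repeated rounds drive all nodes toward it."""
    n = len(xs)
    return [sum(W[i][j] * xs[j] for j in range(n)) for i in range(n)]

# Toy 3-node graph with symmetric, doubly stochastic weights (assumed values).
W = [[0.50, 0.25, 0.25],
     [0.25, 0.50, 0.25],
     [0.25, 0.25, 0.50]]
xs = [0.0, 3.0, 6.0]          # initial local values; network average is 3.0
for _ in range(50):
    xs = gossip_round(xs, W)  # all nodes converge toward 3.0
```

The second-largest eigenvalue of `W` (the spectral gap) controls how fast consensus is reached, which is exactly what degrades on poorly connected or slowly time-varying graphs.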
Order-optimal global convergence for average reward reinforcement learning via actor-critic approach
This work analyzes average-reward reinforcement learning with general parametrization.
Current state-of-the-art (SOTA) guarantees for this problem are either suboptimal or demand …
Non-asymptotic analysis of biased adaptive stochastic approximation
S Surendran, A Fermanian… - Advances in …, 2025 - proceedings.neurips.cc
Stochastic Gradient Descent (SGD) with adaptive steps is widely used to train deep
neural networks and generative models. Most theoretical results assume that it is possible to …
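The snippet describes SGD with adaptive steps. As a hedged sketch of one standard adaptive scheme (AdaGrad-style per-coordinate steps; not the paper's analysis, which concerns biased stochastic approximation), a minimal example on a toy quadratic might look like:

```python
import math

def adagrad(grad, x0, lr=0.5, eps=1e-8, steps=2000):
    """AdaGrad-style adaptive SGD: each coordinate's step size shrinks
    with the square root of its accumulated squared gradients."""
    x = list(x0)
    accum = [0.0] * len(x)
    for _ in range(steps):
        g = grad(x)
        for i in range(len(x)):
            accum[i] += g[i] ** 2
            x[i] -= lr * g[i] / (math.sqrt(accum[i]) + eps)
    return x

# Toy objective f(x) = x0^2 + 10*x1^2, gradient (2*x0, 20*x1).
sol = adagrad(lambda x: [2 * x[0], 20 * x[1]], [5.0, 5.0])
```

The per-coordinate normalization is why adaptive methods handle ill-conditioned problems (here, curvatures 2 vs. 20) without hand-tuning a separate step size per direction.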
Dynamic byzantine-robust learning: Adapting to switching byzantine workers
Byzantine-robust learning has emerged as a prominent fault-tolerant distributed machine
learning framework. However, most techniques focus on the static setting, wherein the …
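The snippet concerns Byzantine-robust distributed learning. As a hedged illustration of one standard robust aggregation rule (coordinate-wise median; not the paper's method, which targets switching Byzantine workers), a minimal sketch might look like:

```python
import statistics

def coordwise_median(updates):
    """Robust aggregation: take the per-coordinate median of worker
    updates, so a minority of Byzantine (arbitrarily corrupted) workers
    cannot drag the aggregate the way they would drag a plain mean."""
    dim = len(updates[0])
    return [statistics.median(u[i] for u in updates) for i in range(dim)]

# Three honest workers and one Byzantine worker sending a wild update.
honest = [[1.0, 2.0], [1.1, 1.9], [0.9, 2.1]]
byzantine = [[100.0, -100.0]]
agg = coordwise_median(honest + byzantine)  # stays near the honest values
```

A plain average of these four updates would be pulled to roughly (25.75, -23.5); the median stays close to the honest cluster.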
On some works of Boris Teodorovich Polyak on the convergence of gradient methods and their development
The paper presents a review of the current state of subgradient and accelerated convex
optimization methods, including the cases with the presence of noise and access to various …
Methods for Optimization Problems with Markovian Stochasticity and Non-Euclidean Geometry
This paper examines a variety of classical optimization problems, including well-known
minimization tasks and more general variational inequalities. We consider a stochastic …
Debiasing Federated Learning with Correlated Client Participation
In cross-device federated learning (FL) with millions of mobile clients, only a small subset of
clients participate in training in every communication round, and Federated Averaging …
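The snippet describes Federated Averaging with partial client participation. As a hedged sketch of the basic FedAvg round it refers to (not the paper's debiasing scheme for correlated participation), a minimal example with hypothetical per-client quadratic objectives might look like:

```python
import random

def fedavg_round(global_w, local_grad, clients, sample_size, lr=0.1):
    """One FedAvg-style round: sample a subset of clients, let each take a
    local gradient step from the global model, then average the results."""
    chosen = random.sample(clients, sample_size)
    local_models = []
    for c in chosen:
        g = local_grad(c, global_w)
        local_models.append([w - lr * gi for w, gi in zip(global_w, g)])
    # Uniform average over the sampled clients' local models.
    return [sum(ws) / len(local_models) for ws in zip(*local_models)]

random.seed(0)
clients = list(range(10))
# Hypothetical client objective f_c(w) = (w - c)^2 / 2, so grad = w - c.
new_w = fedavg_round([0.0], lambda c, w: [w[0] - c], clients, sample_size=10)
```

When the sampled subset is not uniform (e.g., participation is correlated across rounds), this uniform average becomes biased toward over-represented clients, which is the issue the paper's title points at.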
Effective Method with Compression for Distributed and Federated Cocoercive Variational Inequalities
Variational inequalities, as an effective tool for solving applied problems, including machine
learning tasks, have been attracting more and more attention from researchers in recent …
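The snippet mentions compression for distributed methods. As a hedged illustration of one standard compressor (top-k sparsification; the paper's actual compression operator is not specified in the snippet), a minimal sketch might look like:

```python
def top_k(v, k):
    """Top-k sparsifier: keep the k largest-magnitude coordinates and
    zero out the rest. A standard (biased) compressor that cuts the
    communication cost of sending a dense update."""
    idx = sorted(range(len(v)), key=lambda i: abs(v[i]), reverse=True)[:k]
    keep = set(idx)
    return [v[i] if i in keep else 0.0 for i in range(len(v))]

c = top_k([0.1, -4.0, 2.5, 0.3], 2)  # only the two largest entries survive
```

Because only `k` index-value pairs need to be transmitted, each worker sends O(k) instead of O(d) numbers per round, at the cost of a controlled compression error.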
Methods for Solving Variational Inequalities with Markovian Stochasticity
V Solodkin, M Ermoshin, R Gavrilenko… - arXiv preprint arXiv …, 2024 - arxiv.org
In this paper, we present a novel stochastic method for solving variational inequalities (VI) in
the context of Markovian noise. By leveraging the Extragradient technique, we can productively …
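The snippet names the Extragradient technique for variational inequalities. As a hedged sketch of the deterministic Extragradient step it builds on (not the paper's Markovian-noise method), a minimal example on a bilinear saddle point might look like:

```python
def extragradient(F, z0, lr=0.1, steps=1000):
    """Extragradient for a variational inequality with operator F:
    take a trial (extrapolation) step, then update from the original
    point using F evaluated at the trial point."""
    z = list(z0)
    for _ in range(steps):
        g = F(z)
        z_half = [zi - lr * gi for zi, gi in zip(z, g)]    # extrapolation
        g_half = F(z_half)
        z = [zi - lr * gi for zi, gi in zip(z, g_half)]    # corrected update
    return z

# Bilinear saddle point min_x max_y x*y; its VI operator is F(x, y) = (y, -x).
sol = extragradient(lambda z: [z[1], -z[0]], [1.0, 1.0])
```

On this problem plain simultaneous gradient descent-ascent spirals outward, while the extrapolation step makes Extragradient contract toward the solution (0, 0), which is why it is the workhorse for monotone VIs.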