Fine-grained analysis of stability and generalization for stochastic gradient descent
Recently there has been a considerable amount of work devoted to the study of algorithmic
stability and generalization for stochastic gradient descent (SGD). However, the existing …
Stochastic gradient descent for nonconvex learning without bounded gradient assumptions
Stochastic gradient descent (SGD) is a popular and efficient method with wide applications
in training deep neural nets and other nonconvex models. While the behavior of SGD is well …
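The plain SGD iteration these abstracts analyze is x_{t+1} = x_t − η g_t, where g_t is an unbiased stochastic gradient. A minimal illustrative sketch (toy objective and noise model are my own, not from any of the listed papers):

```python
import numpy as np

def sgd(grad_fn, x0, lr=0.1, n_steps=500, seed=0):
    """Run plain SGD: x <- x - lr * g, with g a stochastic gradient estimate."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(n_steps):
        g = grad_fn(x, rng)  # unbiased estimate of the true gradient
        x = x - lr * g
    return x

# Toy problem: minimize f(x) = 0.5 * ||x||^2 with noisy gradients g = x + noise.
noisy_grad = lambda x, rng: x + 0.1 * rng.standard_normal(x.shape)
x_final = sgd(noisy_grad, x0=[5.0, -3.0])
```

With a constant stepsize the iterates contract toward the minimizer but hover in a noise ball whose radius scales with the learning rate and the gradient-noise variance, which is exactly the tension the convergence analyses above quantify.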
Online composite optimization between stochastic and adversarial environments
We study online composite optimization under the Stochastically Extended Adversarial
(SEA) model. Specifically, each loss function consists of two parts: a fixed non-smooth and …
Stochastic mirror descent: Convergence analysis and adaptive variants via the mirror stochastic polyak stepsize
We investigate the convergence of stochastic mirror descent (SMD) under interpolation in
relatively smooth and smooth convex optimization. In relatively smooth convex optimization …
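The stochastic Polyak stepsize underlying this line of work sets η_t = (f_i(x_t) − f_i*) / (c‖∇f_i(x_t)‖²); the mirror variant in the paper replaces the Euclidean geometry with a Bregman divergence. A minimal Euclidean sketch (toy interpolation problem is my own illustration):

```python
import numpy as np

def sps_gd(loss_fn, grad_fn, x0, f_star=0.0, c=1.0, n_steps=200, seed=0):
    """Gradient steps with a stochastic Polyak stepsize:
    eta_t = (f(x_t) - f_star) / (c * ||grad f(x_t)||^2)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(n_steps):
        g = grad_fn(x, rng)
        gnorm2 = float(g @ g)
        if gnorm2 == 0.0:  # already at a stationary point
            break
        eta = (loss_fn(x) - f_star) / (c * gnorm2)
        x = x - eta * g
    return x

# Toy interpolation setting: f(x) = 0.5 * ||x||^2, f_star = 0, exact gradients.
loss = lambda x: 0.5 * float(x @ x)
grad = lambda x, rng: x
x_final = sps_gd(loss, grad, x0=[4.0, -2.0])
```

On this quadratic the Polyak rule yields η_t = 1/2 at every step, so the iterates halve each iteration; no stepsize tuning is needed, which is the adaptivity the abstract refers to.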
High probability guarantees for nonconvex stochastic gradient descent with heavy tails
S Li, Y Liu - International Conference on Machine Learning, 2022 - proceedings.mlr.press
Stochastic gradient descent (SGD) is the workhorse in modern machine learning and data-
driven optimization. Despite its popularity, existing theoretical guarantees for SGD are …
Learning rates for stochastic gradient descent with nonconvex objectives
Stochastic gradient descent (SGD) has become the method of choice for training highly
complex and nonconvex models since it can not only recover good solutions to minimize …
Generalization performance of multi-pass stochastic gradient descent with convex loss functions
Stochastic gradient descent (SGD) has become the method of choice to tackle large-scale
datasets due to its low computational cost and good practical performance. Learning rate …
Policy optimization with stochastic mirror descent
Improving sample efficiency has been a longstanding goal in reinforcement learning. This
paper proposes the VRMPO algorithm: a sample-efficient policy gradient method with stochastic …
Game-theoretic distributed empirical risk minimization with strategic network design
This article considers a game-theoretic framework for distributed empirical risk minimization
(ERM) problems over networks where the information acquisition at a node is modeled as a …
Understanding estimation and generalization error of generative adversarial networks
This article investigates the estimation and generalization errors of the generative
adversarial network (GAN) training. On the statistical side, we develop an upper bound as …