A survey of stochastic simulation and optimization methods in signal processing
Modern signal processing (SP) methods rely very heavily on probability and statistics to
solve challenging SP problems. SP methods are now expected to deal with ever more …
Optimization methods for large-scale machine learning
This paper provides a review and commentary on the past, present, and future of numerical
optimization algorithms in the context of machine learning applications. Through case …
Convergence of stochastic proximal gradient algorithm
We study the extension of the proximal gradient algorithm where only a stochastic gradient
estimate is available and a relaxation step is allowed. We establish convergence rates for …
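The abstract above describes a proximal gradient step driven by a stochastic gradient estimate, followed by a relaxation step. A minimal sketch of that structure, on an illustrative LASSO problem (all data, step sizes, and the relaxation value below are assumptions, not the paper's exact scheme):

```python
import numpy as np

# Toy problem: minimize (1/(2n))*||A x - b||^2 + lam*||x||_1,
# using only a single-sample (noisy) gradient estimate of the smooth part.
rng = np.random.default_rng(0)
n, d, lam = 200, 5, 0.01
A = rng.standard_normal((n, d))
x_true = np.array([1.0, -2.0, 0.0, 0.0, 3.0])
b = A @ x_true + 0.01 * rng.standard_normal(n)

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = np.zeros(d)
gamma, rho = 0.02, 1.0                   # step size and relaxation parameter (assumed values)
for _ in range(50_000):
    i = rng.integers(n)
    grad_est = (A[i] @ x - b[i]) * A[i]  # unbiased estimate of the smooth gradient
    y = soft_threshold(x - gamma * grad_est, gamma * lam)  # proximal (backward) step
    x = x + rho * (y - x)                # relaxed update; rho = 1 is the plain method
```

With rho = 1 this reduces to standard stochastic proximal gradient; the relaxation parameter is the extra degree of freedom whose admissible range the paper's convergence rates address.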
On acceleration with noise-corrupted gradients
M Cohen, J Diakonikolas… - … Conference on Machine …, 2018 - proceedings.mlr.press
Accelerated algorithms have broad applications in large-scale optimization, due to their
generality and fast convergence. However, their stability in the practical setting of noise …
Stability of over-relaxations for the forward-backward algorithm, application to FISTA
This paper is concerned with the convergence of over-relaxations of the forward-backward
algorithm (FB) (in particular the fast iterative soft thresholding algorithm (FISTA)) in the case …
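The over-relaxation discussed above can be sketched numerically; this is an illustrative deterministic LASSO instance (the data, step size, and relaxation value are assumptions), where the forward-backward step is relaxed by a factor rho > 1:

```python
import numpy as np

# Over-relaxed forward-backward on: minimize (1/2)*||A x - b||^2 + lam*||x||_1.
rng = np.random.default_rng(1)
A = rng.standard_normal((30, 4))
x_star = np.array([2.0, 0.0, -1.0, 0.0])
b = A @ x_star
lam = 0.05

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (the 'backward' step)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

L = np.linalg.norm(A, 2) ** 2    # Lipschitz constant of the smooth gradient
gamma = 1.0 / L                  # forward step size
rho = 1.4                        # over-relaxation; rho < 3/2 keeps FB stable at gamma = 1/L
x = np.zeros(4)
for _ in range(2000):
    y = soft_threshold(x - gamma * A.T @ (A @ x - b), gamma * lam)  # FB step
    x = x + rho * (y - x)        # over-relaxed update (rho = 1 is plain FB)
```

Setting rho = 1 recovers the unrelaxed forward-backward iteration; how large rho may be taken without losing stability is exactly the question the paper studies.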
Ergodic convergence of a stochastic proximal point algorithm
P Bianchi - SIAM Journal on Optimization, 2016 - SIAM
The purpose of this paper is to establish the almost sure weak ergodic convergence of a
sequence of iterates $(x_n)$ given by $x_{n+1} = (I + \lambda_n A(n+1, \cdot))^{-1}(x_n)$, where …
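A concrete way to read the iteration above: when A(n, ·) is the gradient of a randomly sampled least-squares term, the resolvent (I + λ_n A(n+1, ·))^{-1} has a closed form, giving an "implicit" variant of SGD. A minimal sketch under that assumption (problem data are illustrative, not from the paper):

```python
import numpy as np

# Stochastic proximal point on least squares: each step applies the resolvent
# of the sampled operator A_i(x) = a_i * (a_i . x - b_i), which solves
#   y + lam * a_i * (a_i . y - b_i) = x
# in closed form via the Sherman-Morrison identity.
rng = np.random.default_rng(2)
n, d = 500, 3
M = rng.standard_normal((n, d))
x_star = np.array([1.0, -1.0, 2.0])
b = M @ x_star + 0.01 * rng.standard_normal(n)

x = np.zeros(d)
for k in range(1, 50_001):
    i = rng.integers(n)
    a, bi = M[i], b[i]
    lam = 10.0 / k                                      # decaying parameter lambda_n
    x = x - lam * (a @ x - bi) / (1.0 + lam * (a @ a)) * a  # resolvent step
```

Unlike an explicit gradient step, the resolvent step is stable even for large λ_n (for large λ the update approaches a Kaczmarz-style projection onto the sampled equation).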
Policy gradients for CVaR-constrained MDPs
LA Prashanth - International Conference on Algorithmic Learning …, 2014 - Springer
We study a risk-constrained version of the stochastic shortest path (SSP) problem, where the
risk measure considered is Conditional Value-at-Risk (CVaR). We propose two algorithms …
Consistent online Gaussian process regression without the sample complexity bottleneck
Gaussian processes provide a framework for nonlinear nonparametric Bayesian inference
widely applicable across science and engineering. Unfortunately, their computational …
A stochastic majorize-minimize subspace algorithm for online penalized least squares estimation
Stochastic approximation techniques play an important role in solving many problems
encountered in machine learning or adaptive signal processing. In these contexts, the …
Stochastic forward–backward splitting for monotone inclusions
We propose and analyze the convergence of a novel stochastic algorithm for monotone
inclusions that are the sum of a maximal monotone operator and a single-valued cocoercive …