Newton-type methods for non-convex optimization under inexact Hessian information
We consider variants of trust-region and adaptive cubic regularization methods for non-convex optimization, in which the Hessian matrix is approximated. Under certain condition …
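For orientation, here is a minimal trust-region iteration of the kind this line of work analyzes: the true Hessian is replaced by a caller-supplied approximation, and the subproblem is solved at the classical Cauchy point. This is a generic sketch, not the paper's algorithm; the function names and the acceptance thresholds (0.1, 0.25, 0.75) are assumptions.

```python
# Generic trust-region sketch with an inexact Hessian B_k (illustrative only;
# thresholds and names are assumptions, not taken from the paper).
import numpy as np

def cauchy_point(g, B, radius):
    """Minimize the model m(s) = g.s + 0.5 s.B.s along -g inside the ball."""
    gBg = g @ B @ g
    gnorm = np.linalg.norm(g)
    if gBg <= 0:          # negative curvature along -g: step to the boundary
        tau = 1.0
    else:
        tau = min(1.0, gnorm**3 / (radius * gBg))
    return -tau * (radius / gnorm) * g

def trust_region(f, grad, hess_approx, x, radius=1.0, tol=1e-6, max_iter=200):
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        B = hess_approx(x)                 # inexact Hessian information
        s = cauchy_point(g, B, radius)
        pred = -(g @ s + 0.5 * s @ B @ s)  # model decrease (positive here)
        rho = (f(x) - f(x + s)) / pred     # actual vs. predicted decrease
        if rho > 0.75:
            radius = min(2.0 * radius, 1e3)
        elif rho < 0.25:
            radius *= 0.5
        if rho > 0.1:                      # accept the step
            x = x + s
    return x
```

The point of the sketch is that the acceptance test and radius update only use function values, so the method tolerates an approximate B as long as the model decrease stays predictive.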
Sub-sampled Newton methods
For large-scale finite-sum minimization problems, we study non-asymptotic and high-probability global as well as local convergence properties of variants of Newton's method …
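The core idea is easy to sketch: keep the full gradient but estimate the Hessian from a random subsample of the n terms, so each Newton step solves against a much cheaper matrix. Below is a minimal illustration on l2-regularized logistic regression, assuming labels in {0, 1}; the batch size and function names are assumptions, not taken from these papers.

```python
# One sub-sampled Newton step for f(x) = (1/n) * sum_i f_i(x):
# exact gradient, Hessian estimated from a random subsample of the terms.
import numpy as np

def subsampled_newton_step(A, b, x, lam=1e-3, batch=256, rng=None):
    rng = np.random.default_rng(rng)
    n = A.shape[0]
    p = 1.0 / (1.0 + np.exp(-(A @ x)))          # sigmoid(A x)
    grad = A.T @ (p - b) / n + lam * x          # full gradient
    idx = rng.choice(n, size=min(batch, n), replace=False)
    As, ps = A[idx], p[idx]
    w = ps * (1.0 - ps)                         # logistic curvature weights
    H = As.T @ (As * w[:, None]) / len(idx) + lam * np.eye(A.shape[1])
    return x - np.linalg.solve(H, grad)         # sub-sampled Newton step
```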
Optimization methods for inverse problems
Optimization plays an important role in solving many inverse problems. Indeed, the task of inversion often either involves or is fully cast as a solution of an optimization problem. In this …
Estimation of discrete choice models with hybrid stochastic adaptive batch size algorithms
Abstract The emergence of Big Data has enabled new research perspectives in the discrete choice community. While the techniques to estimate Machine Learning models on a massive …
Fast Newton hard thresholding pursuit for sparsity constrained nonconvex optimization
We propose a fast Newton hard thresholding pursuit algorithm for sparsity constrained nonconvex optimization. Our proposed algorithm reduces the per-iteration time complexity to …
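For context, the hard-thresholding-pursuit template behind such methods alternates a gradient step, truncation to the k largest entries, and a Newton step restricted to that support (which, for least squares, solves the restricted problem exactly). A sketch under those assumptions, not the paper's fast algorithm; the step size and names are illustrative.

```python
# Hard-thresholding pursuit for sparsity-constrained least squares
# min ||Ax - b||^2 s.t. ||x||_0 <= k (generic template, illustrative only).
import numpy as np

def newton_htp(A, b, k, iters=50, step=None):
    n, d = A.shape
    step = step or 1.0 / np.linalg.norm(A, 2) ** 2   # safe gradient step size
    x = np.zeros(d)
    for _ in range(iters):
        g = A.T @ (A @ x - b)                        # gradient of 0.5||Ax-b||^2
        u = x - step * g                             # gradient step
        S = np.argsort(np.abs(u))[-k:]               # keep top-k support
        x = np.zeros(d)
        # Newton step on the support; exact for a quadratic objective.
        x[S] = np.linalg.lstsq(A[:, S], b, rcond=None)[0]
    return x
```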
STO-DARTS: Stochastic Bilevel Optimization for Differentiable Neural Architecture Search
Differentiable bilevel Neural Architecture Search (NAS) has emerged as a powerful approach in automated machine learning (AutoML) for efficiently searching for neural …
On the local convergence of a stochastic semismooth Newton method for nonsmooth nonconvex optimization
In this work, we present probabilistic local convergence results for a stochastic semismooth Newton method for a class of stochastic composite optimization problems involving the sum …
RFN: A random-feature based Newton method for empirical risk minimization in reproducing kernel Hilbert spaces
In supervised learning using kernel methods, we often encounter a large-scale finite-sum minimization over a reproducing kernel Hilbert space (RKHS). Large-scale finite-sum …
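The random-feature idea behind RFN can be sketched directly: approximate the kernel with an explicit random Fourier feature map, so the RKHS problem becomes a finite-dimensional ridge regression on which a Newton step is a single linear solve. Everything below (feature count m, bandwidth gamma, function names) is an illustrative assumption, not the paper's implementation.

```python
# Random Fourier features for the RBF kernel k(x,y) = exp(-gamma ||x-y||^2),
# followed by a Newton step for ridge regression in the feature space.
import numpy as np

def rff_features(X, m=200, gamma=1.0, rng=None):
    rng = np.random.default_rng(rng)
    d = X.shape[1]
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, m))  # spectral samples
    c = rng.uniform(0, 2 * np.pi, size=m)                  # random phases
    return np.sqrt(2.0 / m) * np.cos(X @ W + c)            # E[z(x).z(y)] ~ k(x,y)

def newton_ridge(Z, y, lam=1e-2):
    n, m = Z.shape
    # The ridge objective is quadratic, so one Newton step from w = 0 lands
    # on the exact minimizer: (Z'Z/n + lam I) w = Z'y/n.
    H = Z.T @ Z / n + lam * np.eye(m)
    return np.linalg.solve(H, Z.T @ y / n)
```

With m features the solve costs O(m^3) instead of the O(n^3) of an exact kernel solve, which is the trade-off this line of work exploits.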
[PDF] HAMABS: Estimation of Discrete Choice Models with Hybrid Stochastic Adaptive Batch Size Algorithms
Abstract The emergence of Big Data has opened new research perspectives for the discrete choice community. While the Machine Learning (ML) community has been thriving in finding …
[BOOK][B] Efficient Second-Order Methods for Machine Learning
P Xu - 2018 - search.proquest.com
Due to the large-scale nature of many modern machine learning applications, including but not limited to deep learning problems, researchers have been focusing on studying and …