Newton-type methods for non-convex optimization under inexact Hessian information

P Xu, F Roosta, MW Mahoney - Mathematical Programming, 2020 - Springer
We consider variants of trust-region and adaptive cubic regularization methods for non-convex optimization, in which the Hessian matrix is approximated. Under certain condition …
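
To make the setting concrete, here is a minimal sketch (not from the paper) of one adaptive cubic regularization step in which H may be an inexact, e.g. sub-sampled, Hessian: the step minimizes the cubic model m(s) = g^T s + 0.5 s^T H s + (sigma/3)*||s||^3 by bisection on the multiplier lam = sigma*||s||. The function name and tolerances are illustrative, and the so-called hard case is ignored.

```python
import numpy as np

def cubic_reg_step(g, H, sigma, tol=1e-10):
    """Minimize m(s) = g^T s + 0.5 s^T H s + (sigma/3)*||s||^3.

    Uses the optimality condition (H + lam*I) s = -g with lam = sigma*||s||
    and H + lam*I PSD, found by bisection on lam (the 'hard case' is ignored).
    H may be an inexact, e.g. sub-sampled, Hessian.
    """
    evals, Q = np.linalg.eigh(H)
    ghat = Q.T @ g
    norm_s = lambda lam: np.linalg.norm(ghat / (evals + lam))
    lo = max(0.0, -evals.min()) + 1e-12      # lam must make H + lam*I PSD
    hi = lo + 1.0
    while norm_s(hi) > hi / sigma:           # grow until ||s(hi)|| <= hi/sigma
        hi *= 2.0
    while hi - lo > tol * max(1.0, hi):      # bisect phi(lam) = ||s(lam)|| - lam/sigma
        mid = 0.5 * (lo + hi)
        if norm_s(mid) > mid / sigma:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    return -Q @ (ghat / (evals + lam))       # s = -(H + lam*I)^{-1} g
```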

Sub-sampled Newton methods

F Roosta-Khorasani, MW Mahoney - Mathematical Programming, 2019 - Springer
For large-scale finite-sum minimization problems, we study non-asymptotic and high-probability global as well as local convergence properties of variants of Newton's method …
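
A minimal sketch of the idea, assuming a regularized logistic regression objective (the paper covers general finite sums and several variants): the full gradient is kept, while the Hessian is estimated on a random sub-sample of the terms.

```python
import numpy as np

def logistic_grad(X, y, w, lam=1e-3):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    return X.T @ (p - y) / len(y) + lam * w

def logistic_hess(X, y, w, lam=1e-3):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    return (X.T * (p * (1 - p))) @ X / len(y) + lam * np.eye(X.shape[1])

def subsampled_newton(X, y, w, batch, steps, rng):
    """Full gradient; Hessian estimated on a random sub-sample of the terms."""
    for _ in range(steps):
        g = logistic_grad(X, y, w)
        idx = rng.choice(len(y), size=batch, replace=False)
        H = logistic_hess(X[idx], y[idx], w)
        w = w - np.linalg.solve(H, g)
    return w
```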

Optimization methods for inverse problems

N Ye, F Roosta-Khorasani, T Cui - 2017 MATRIX Annals, 2019 - Springer
Optimization plays an important role in solving many inverse problems. Indeed, the task of
inversion often either involves or is fully cast as a solution of an optimization problem. In this …
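
As a concrete instance of inversion cast as optimization (an illustration, not from the paper): a linear inverse problem d = Ax + noise recovered by minimizing a Tikhonov-regularized least-squares objective, with all dimensions and parameters assumed.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100))               # forward operator (under-determined)
x_true = np.zeros(100)
x_true[:5] = 1.0
d = A @ x_true + 0.01 * rng.standard_normal(50)  # noisy observations

# Inversion cast as optimization: minimize ||A x - d||^2 + alpha * ||x||^2.
# The regularized normal equations give the minimizer in closed form.
alpha = 1e-2
x_hat = np.linalg.solve(A.T @ A + alpha * np.eye(100), A.T @ d)
```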

Estimation of discrete choice models with hybrid stochastic adaptive batch size algorithms

G Lederrey, V Lurkin, T Hillel, M Bierlaire - Journal of choice modelling, 2021 - Elsevier
Abstract The emergence of Big Data has enabled new research perspectives in the discrete
choice community. While the techniques to estimate Machine Learning models on a massive …
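
The hybrid rule itself is not in the snippet; the sketch below only illustrates the generic adaptive-batch idea of growing the sample size as iterations proceed, with the geometric schedule and all parameter names assumed rather than taken from the paper.

```python
import numpy as np

def adaptive_batch_sgd(grad_fn, n, w, lr=0.1, batch0=32, growth=1.5, steps=100):
    """SGD whose mini-batch grows geometrically: cheap noisy steps early,
    low-variance steps later. grad_fn(w, idx) returns the mini-batch gradient."""
    rng = np.random.default_rng(0)
    batch = float(batch0)
    for _ in range(steps):
        idx = rng.choice(n, size=min(int(batch), n), replace=False)
        w = w - lr * grad_fn(w, idx)
        batch *= growth                          # assumed schedule, not the paper's rule
    return w
```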

Fast Newton hard thresholding pursuit for sparsity constrained nonconvex optimization

J Chen, Q Gu - Proceedings of the 23rd ACM SIGKDD international …, 2017 - dl.acm.org
We propose a fast Newton hard thresholding pursuit algorithm for sparsity constrained
nonconvex optimization. Our proposed algorithm reduces the per-iteration time complexity to …
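
A sketch of the hard-thresholding-plus-Newton idea for the least-squares case (an illustration, not the paper's full algorithm): gradient step, hard threshold to the top-s support, then a Newton step restricted to that support, which for a quadratic objective is an exact small least-squares solve. A[:, support] is assumed to have full column rank.

```python
import numpy as np

def hard_threshold(x, s):
    """Zero all but the s largest-magnitude entries of x."""
    z = np.zeros_like(x)
    keep = np.argsort(np.abs(x))[-s:]
    z[keep] = x[keep]
    return z

def newton_htp(A, b, s, iters=20, eta=1.0):
    """min 0.5*||Ax - b||^2 s.t. ||x||_0 <= s: gradient step, hard threshold,
    then a Newton step on the active support (exact least squares here,
    since the objective is quadratic)."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)
        support = np.flatnonzero(hard_threshold(x - eta * grad, s))
        x = np.zeros(A.shape[1])
        As = A[:, support]
        x[support] = np.linalg.solve(As.T @ As, As.T @ b)
    return x
```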

STO-DARTS: Stochastic Bilevel Optimization for Differentiable Neural Architecture Search

Z Cai, L Chen, T Ling, HL Liu - IEEE Transactions on Emerging …, 2024 - ieeexplore.ieee.org
Differentiable bilevel Neural Architecture Search (NAS) has emerged as a powerful
approach in automated machine learning (AutoML) for efficiently searching for neural …
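
The paper's stochastic bilevel solver is not reproduced here; the sketch below shows only the generic first-order alternation between inner weights and outer architecture parameters that DARTS-style methods build on, with all function and parameter names assumed.

```python
def bilevel_search(train_grad_w, val_grad_a, w, a, lr_w=0.05, lr_a=0.01, steps=100):
    """First-order alternation for min_a L_val(w*(a), a) with
    w*(a) = argmin_w L_train(w, a): each outer update of the architecture
    parameters a uses a single inner gradient step as a stand-in for w*(a)."""
    for _ in range(steps):
        w = w - lr_w * train_grad_w(w, a)   # inner problem: fit weights on training loss
        a = a - lr_a * val_grad_a(w, a)     # outer problem: update architecture on validation loss
    return w, a
```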

On the local convergence of a stochastic semismooth Newton method for nonsmooth nonconvex optimization

A Milzarek, X Xiao, Z Wen, M Ulbrich - Science China Mathematics, 2022 - Springer
In this work, we present probabilistic local convergence results for a stochastic semismooth
Newton method for a class of stochastic composite optimization problems involving the sum …
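
A deterministic sketch of a semismooth Newton step on the natural residual of an l1-regularized quadratic (the paper's setting is stochastic and more general): F(x) = x - prox(x - lam*grad f(x)) is piecewise smooth, and the step solves a linear system with an element of its generalized Jacobian. The choice of lam and the invertibility of that Jacobian are assumptions.

```python
import numpy as np

def soft(u, t):
    """Proximal operator of t*||.||_1 (soft thresholding)."""
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def semismooth_newton_l1(Q, c, mu, x, lam=0.5, iters=30):
    """Semismooth Newton on F(x) = x - soft(x - lam*(Qx - c), lam*mu) = 0,
    the natural residual of min 0.5 x^T Q x - c^T x + mu*||x||_1."""
    n = len(x)
    for _ in range(iters):
        u = x - lam * (Q @ x - c)
        F = x - soft(u, lam * mu)
        if np.linalg.norm(F) < 1e-10:
            break
        p = (np.abs(u) > lam * mu).astype(float)       # generalized Jacobian of prox
        M = np.eye(n) - p[:, None] * (np.eye(n) - lam * Q)
        x = x + np.linalg.solve(M, -F)                 # semismooth Newton step
    return x
```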

RFN: A random-feature based Newton method for empirical risk minimization in reproducing kernel Hilbert spaces

TJ Chang, S Shahrampour - IEEE Transactions on Signal …, 2022 - ieeexplore.ieee.org
In supervised learning using kernel methods, we often encounter a large-scale finite-sum
minimization over a reproducing kernel Hilbert space (RKHS). Large-scale finite-sum …
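
Not the paper's RFN algorithm itself, but a minimal random-feature Newton illustration: with random Fourier features and squared loss the objective is quadratic in the parameters, so a single Newton step from zero solves it exactly. Feature count, kernel scale, and regularization are illustrative.

```python
import numpy as np

def rfn_ridge(X, y, m=200, gamma=1.0, reg=1e-3, seed=0):
    """Random Fourier features for the Gaussian kernel exp(-gamma*||x-x'||^2),
    then one Newton step from theta = 0 on the quadratic ridge objective
    0.5/n * ||Z theta - y||^2 + 0.5*reg*||theta||^2."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(X.shape[1], m))
    b = rng.uniform(0.0, 2 * np.pi, size=m)
    Z = np.sqrt(2.0 / m) * np.cos(X @ W + b)   # n x m feature map
    n = len(y)
    H = Z.T @ Z / n + reg * np.eye(m)          # Hessian
    g = -Z.T @ y / n                           # gradient at theta = 0
    theta = -np.linalg.solve(H, g)             # Newton step = exact minimizer
    return theta, (W, b)
```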

[PDF] HAMABS: Estimation of Discrete Choice Models with Hybrid Stochastic Adaptive Batch Size Algorithms

G Lederrey, V Lurkin, T Hillel, M Bierlaire - TRANSP-OR, 2019 - transp-or.epfl.ch
Abstract The emergence of Big Data has opened new research perspectives for the discrete
choice community. While the Machine Learning (ML) community has been thriving in finding …

[BOOK] Efficient Second-Order Methods for Machine Learning

P Xu - 2018 - search.proquest.com
Due to the large-scale nature of many modern machine learning applications, including but
not limited to deep learning problems, research has increasingly focused on studying and …