Albert S. Berahas
Assistant Professor, University of Michigan
Verified email at umich.edu - Homepage
Title · Cited by · Year
A theoretical and empirical comparison of gradient approximations in derivative-free optimization
AS Berahas, L Cao, K Choromanski, K Scheinberg
Foundations of Computational Mathematics 22 (2), 507-560, 2022
201 · 2022
A multi-batch L-BFGS method for machine learning
AS Berahas, J Nocedal, M Takác
Advances in Neural Information Processing Systems 29, 2016
185 · 2016
Balancing communication and computation in distributed optimization
AS Berahas, R Bollapragada, NS Keskar, E Wei
IEEE Transactions on Automatic Control 64 (8), 3141-3155, 2018
134 · 2018
An investigation of Newton-sketch and subsampled Newton methods
AS Berahas, R Bollapragada, J Nocedal
Optimization Methods and Software 35 (4), 661-680, 2020
133 · 2020
Derivative-free optimization of noisy functions via quasi-Newton methods
AS Berahas, RH Byrd, J Nocedal
SIAM Journal on Optimization 29 (2), 965-993, 2019
119 · 2019
Global convergence rate analysis of a generic line search algorithm with noise
AS Berahas, L Cao, K Scheinberg
SIAM Journal on Optimization 31 (2), 1489-1518, 2021
90 · 2021
Sequential quadratic optimization for nonlinear equality constrained stochastic optimization
AS Berahas, FE Curtis, D Robinson, B Zhou
SIAM Journal on Optimization 31 (2), 1352-1379, 2021
73 · 2021
Quasi-Newton methods for deep learning: Forget the past, just sample
AS Berahas, M Jahani, P Richtárik, M Takáč
arXiv, 2020
58 · 2020
Quasi-Newton methods for machine learning: forget the past, just sample
AS Berahas, M Jahani, P Richtárik, M Takáč
Optimization Methods and Software 37 (5), 1668-1704, 2022
49 · 2022
adaQN: An adaptive quasi-Newton algorithm for training RNNs
NS Keskar, AS Berahas
Machine Learning and Knowledge Discovery in Databases: European Conference …, 2016
49 · 2016
A stochastic sequential quadratic optimization algorithm for nonlinear-equality-constrained optimization with rank-deficient Jacobians
AS Berahas, FE Curtis, MJ O’Neill, DP Robinson
Mathematics of Operations Research 49 (4), 2212-2248, 2024
39 · 2024
First-and second-order high probability complexity bounds for trust-region methods with noisy oracles
L Cao, AS Berahas, K Scheinberg
Mathematical Programming 207 (1), 55-106, 2024
32 · 2024
Accelerating stochastic sequential quadratic programming for equality constrained optimization using predictive variance reduction
AS Berahas, J Shi, Z Yi, B Zhou
Computational Optimization and Applications 86 (1), 79-116, 2023
28 · 2023
Linear interpolation gives better gradients than Gaussian smoothing in derivative-free optimization
AS Berahas, L Cao, K Choromanski, K Scheinberg
arXiv preprint arXiv:1905.13043, 2019
22 · 2019
Modeling and Predicting Heavy-Duty Vehicle Engine-Out and Tailpipe Nitrogen Oxide (NOx) Emissions Using Deep Learning
R Pillai, V Triantopoulos, AS Berahas, M Brusstar, R Sun, T Nevius, ...
Frontiers in Mechanical Engineering 8, 840310, 2022
21 · 2022
Scaling up quasi-Newton algorithms: Communication efficient distributed SR1
M Jahani, M Nazari, S Rusakov, AS Berahas, M Takáč
Machine Learning, Optimization, and Data Science: 6th International …, 2020
21 · 2020
On the convergence of nested decentralized gradient methods with multiple consensus and gradient steps
AS Berahas, R Bollapragada, E Wei
IEEE Transactions on Signal Processing 69, 4192-4203, 2021
19 · 2021
Gradient Descent in the Absence of Global Lipschitz Continuity of the Gradients
V Patel, AS Berahas
SIAM Journal on Mathematics of Data Science 6 (3), 602-626, 2024
18 · 2024
Nested distributed gradient methods with adaptive quantized communication
AS Berahas, C Iakovidou, E Wei
2019 IEEE 58th conference on decision and control (CDC), 1519-1525, 2019
16 · 2019
A Sequential Quadratic Programming Method With High-Probability Complexity Bounds for Nonlinear Equality-Constrained Stochastic Optimization
AS Berahas, M Xie, B Zhou
SIAM Journal on Optimization 35 (1), 240-269, 2025
15 · 2025
Articles 1–20