Aaron Mishkin
PhD Student, Stanford University
Verified email at cs.stanford.edu - Homepage
Title · Cited by · Year
Painless stochastic gradient: Interpolation, line-search, and convergence rates
S Vaswani, A Mishkin, I Laradji, M Schmidt, G Gidel, S Lacoste-Julien
Advances in neural information processing systems 32, 2019
Cited by 250 · 2019
Slang: Fast structured covariance approximations for bayesian deep learning with natural gradient
A Mishkin, F Kunstner, D Nielsen, M Schmidt, ME Khan
Advances in neural information processing systems 31, 2018
Cited by 74 · 2018
Fast convex optimization for two-layer relu networks: Equivalent model classes and cone decompositions
A Mishkin, A Sahiner, M Pilanci
International Conference on Machine Learning, 15770-15816, 2022
Cited by 33 · 2022
Interpolation, Growth Conditions, and Stochastic Gradient Descent
A Mishkin
University of British Columbia, 2020
Cited by 9 · 2020
Optimal sets and solution paths of relu networks
A Mishkin, M Pilanci
International Conference on Machine Learning, 24888-24924, 2023
Cited by 8 · 2023
To each optimizer a norm, to each norm its generalization
S Vaswani, R Babanezhad, J Gallego-Posada, A Mishkin, ...
arXiv preprint arXiv:2006.06821, 2020
Cited by 7 · 2020
Directional smoothness and gradient methods: Convergence and adaptivity
A Mishkin, A Khaled, Y Wang, A Defazio, R Gower
Advances in Neural Information Processing Systems 37, 14810-14848, 2025
Cited by 5 · 2025
A library of mirrors: Deep neural nets in low dimensions are convex lasso models with reflection features
E Zeger, Y Wang, A Mishkin, T Ergen, E Candès, M Pilanci
arXiv preprint arXiv:2403.01046, 2024
Cited by 3 · 2024
Web ValueCharts
AP Mishkin, EA Hindalong
Cited by 3 · 2018
Faster convergence of stochastic accelerated gradient descent under interpolation
A Mishkin, M Pilanci, M Schmidt
arXiv preprint arXiv:2404.02378, 2024
Cited by 2 · 2024
Exploring the loss landscape of regularized neural networks via convex duality
S Kim, A Mishkin, M Pilanci
arXiv preprint arXiv:2411.07729, 2024
Cited by 1 · 2024
Level Set Teleportation: An Optimization Perspective
A Mishkin, A Bietti, RM Gower
arXiv preprint arXiv:2403.03362, 2024
Cited by 1 · 2024
Analyzing and Improving Greedy 2-Coordinate Updates for Equality-Constrained Optimization via Steepest Descent in the 1-Norm
AV Ramesh, A Mishkin, M Schmidt, Y Zhou, JW Lavington, J She
arXiv preprint arXiv:2307.01169, 2023
Cited by 1 · 2023
The Solution Path of the Group Lasso
A Mishkin, M Pilanci
OPT 2022: Optimization for Machine Learning (NeurIPS 2022 Workshop), 2022
Cited by 1 · 2022
Web ValueCharts: Analyzing Individual and Group Preferences with Interactive, Web-based Visualizations
A Mishkin
2017
A novel analysis of gradient descent under directional smoothness
A Mishkin, A Khaled, A Defazio, RM Gower
OPT 2023: Optimization for Machine Learning (NeurIPS 2023 Workshop), 2023
Level Set Teleportation: the Good, the Bad, and the Ugly
A Mishkin, A Bietti, RM Gower
OPT 2023: Optimization for Machine Learning (NeurIPS 2023 Workshop), 2023
Strong Duality via Convex Conjugacy
A Mishkin
Solving Projection Problems using Lagrangian Duality
A Mishkin
Fast Convergence of Greedy 2-Coordinate Updates for Optimizing with an Equality Constraint
AV Ramesh, A Mishkin, M Schmidt
OPT 2022: Optimization for Machine Learning (NeurIPS 2022 Workshop), 2022
Articles 1–20