Learning to optimize: A primer and a benchmark

T Chen, X Chen, W Chen, H Heaton, J Liu… - Journal of Machine …, 2022 - jmlr.org
Learning to optimize (L2O) is an emerging approach that leverages machine learning to
develop optimization methods, aiming to reduce the laborious iterations of hand …
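
The core idea in this line of work is concrete enough to sketch: a hand-designed update rule such as gradient descent is replaced by a small trained model that maps gradients to parameter updates. Below is a minimal, illustrative NumPy sketch of that interface; the two-layer network, its shapes, and the toy quadratic are assumptions for exposition, not the survey's specific method.

```python
import numpy as np

# Minimal L2O sketch: a small network m_phi replaces the hand-designed rule
# x <- x - lr * grad. The architecture and weights here are illustrative
# placeholders; in practice phi is meta-trained over a problem distribution.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(scale=0.1, size=(8, 1)), np.zeros((8, 1))
W2, b2 = rng.normal(scale=0.1, size=(1, 8)), np.zeros((1, 1))

def learned_update(grad):
    """Apply the (untrained) optimizer network to each gradient coordinate."""
    g = grad.reshape(1, -1)             # coordinate-wise, as in common L2O
    h = np.tanh(W1 @ g + b1)            # hidden features of the gradient
    return (W2 @ h + b2).reshape(grad.shape)

x = rng.normal(size=3)                  # optimizee: f(x) = 0.5 * ||x||^2
for _ in range(10):
    grad = x                            # gradient of 0.5 * ||x||^2
    x = x + learned_update(grad)        # learned rule in place of -lr * grad
# Meta-training would tune (W1, b1, W2, b2) so trajectories minimize f fast.
```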

Velo: Training versatile learned optimizers by scaling up

L Metz, J Harrison, CD Freeman, A Merchant… - arXiv preprint arXiv …, 2022 - arxiv.org
While deep learning models have replaced hand-designed features across many domains,
these models are still trained with hand-designed optimizers. In this work, we leverage the …
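
The abstract's contrast can be made explicit: a hand-designed optimizer such as Adam is a fixed formula over (params, grads, state), while a learned optimizer like VeLO fills the same role with a trained network. The sketch below shows the hand-designed side of that contract in NumPy; the interface shape is an assumption for illustration, not VeLO's actual API.

```python
import numpy as np

# Hand-designed Adam as a (params, grads, state) -> (params, state) step.
# A learned optimizer fills this same interface with a trained network;
# the interface here is an illustrative assumption, not VeLO's actual API.
def adam_step(params, grads, state, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    m, v, t = state
    t += 1
    m = b1 * m + (1 - b1) * grads               # first-moment estimate
    v = b2 * v + (1 - b2) * grads ** 2          # second-moment estimate
    m_hat, v_hat = m / (1 - b1 ** t), v / (1 - b2 ** t)
    return params - lr * m_hat / (np.sqrt(v_hat) + eps), (m, v, t)

params, state = np.zeros(4), (np.zeros(4), np.zeros(4), 0)
for _ in range(5):
    grads = params - 1.0                        # grad of 0.5 * ||x - 1||^2
    params, state = adam_step(params, grads, state)
```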

Provably optimal memory capacity for modern Hopfield models: Transformer-compatible dense associative memories as spherical codes

JYC Hu, D Wu, H Liu - arXiv preprint arXiv:2410.23126, 2024 - arxiv.org
We study the optimal memorization capacity of modern Hopfield models and Kernelized
Hopfield Models (KHMs), a transformer-compatible class of Dense Associative Memories …
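
The retrieval dynamics of the modern Hopfield models analyzed here follow the well-known softmax update ξ ← X softmax(βXᵀξ), which is what makes them transformer-compatible. A small NumPy sketch, with β and the random unit-norm patterns (cf. the title's "spherical codes") chosen only for illustration:

```python
import numpy as np

# One retrieval pass of a modern Hopfield model: xi <- X softmax(beta X^T xi).
# beta and the random unit-norm patterns are illustrative choices,
# not the paper's constructions.
rng = np.random.default_rng(1)
d, M, beta = 16, 5, 4.0
X = rng.normal(size=(d, M))                 # M stored patterns as columns
X /= np.linalg.norm(X, axis=0)              # normalize to the unit sphere

def retrieve(xi, steps=3):
    for _ in range(steps):
        scores = beta * (X.T @ xi)          # similarity to each memory
        p = np.exp(scores - scores.max())   # stable softmax weights
        xi = X @ (p / p.sum())              # attention-style recombination
    return xi

noisy = X[:, 0] + 0.1 * rng.normal(size=d)  # corrupted copy of pattern 0
print(np.argmax(X.T @ retrieve(noisy)))     # expected to recover index 0
```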

A closer look at learned optimization: Stability, robustness, and inductive biases

J Harrison, L Metz… - Advances in Neural …, 2022 - proceedings.neurips.cc
Learned optimizers (neural networks that are trained to act as optimizers) have the
potential to dramatically accelerate training of machine learning models. However, even …
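
One family of inductive biases studied in this direction anchors the learned optimizer to a well-understood nominal rule and lets a learned term only modulate it, so a step can never stray arbitrarily far from the stable baseline. The sketch below is illustrative of that idea, not necessarily the paper's exact construction:

```python
import numpy as np

# Illustrative stabilization: keep a nominal gradient-descent step and let a
# small learned term only rescale it within a bounded range, so the update
# stays close to the stable baseline. The modulator is an untrained stand-in.
rng = np.random.default_rng(5)
w = rng.normal(scale=0.1, size=3)           # weights of the learned modulator

def stabilized_update(grad, lr=0.1):
    feats = np.array([1.0, np.log1p(np.linalg.norm(grad)), float(grad.size)])
    scale = np.exp(np.tanh(feats @ w))      # bounded in (1/e, e) by design
    return -lr * scale * grad               # nominal step times learned scale

x = rng.normal(size=4)
for _ in range(50):
    x = x + stabilized_update(x)            # gradient of 0.5 * ||x||^2 is x
```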

Learned robust PCA: A scalable deep unfolding approach for high-dimensional outlier detection

HQ Cai, J Liu, W Yin - Advances in Neural Information …, 2021 - proceedings.neurips.cc
Robust principal component analysis (RPCA) is a critical tool in modern machine learning,
which detects outliers in the task of low-rank matrix reconstruction. In this paper, we propose …
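
Deep unfolding, the technique named in the title, unrolls a classical iterative solver for M ≈ L + S into a fixed number of "layers" whose per-iteration parameters become learnable. A sketch of one plausible unrolling in NumPy, with hand-picked thresholds standing in for learned ones; the paper's exact LRPCA iteration differs in detail:

```python
import numpy as np

# Unrolled alternating scheme for M ~= L + S: K iterations become K "layers"
# whose thresholds (zetas) would be the learnable parameters.
def soft_threshold(X, tau):
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def rank_r_project(X, r):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

def unrolled_rpca(M, r, zetas):
    L = np.zeros_like(M)
    for zeta in zetas:                      # one unrolled layer per threshold
        S = soft_threshold(M - L, zeta)     # sparse outlier update
        L = rank_r_project(M - S, r)        # low-rank component update
    return L, S

rng = np.random.default_rng(2)
L_true = rng.normal(size=(30, 3)) @ rng.normal(size=(3, 30))
S_true = (rng.random((30, 30)) < 0.05) * rng.normal(scale=10, size=(30, 30))
L_hat, S_hat = unrolled_rpca(L_true + S_true, r=3, zetas=[3.0, 1.5, 0.8, 0.4])
```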

Neur2SP: Neural two-stage stochastic programming

RM Patel, J Dumouchelle, E Khalil… - Advances in neural …, 2022 - proceedings.neurips.cc
Stochastic Programming is a powerful modeling framework for decision-making under
uncertainty. In this work, we tackle two-stage stochastic programs (2SPs), the most widely …
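
The central move in Neur2SP is to replace the expensive second-stage value function Q(x, ξ) of a 2SP with a cheap learned surrogate and then minimize first-stage cost plus expected surrogate cost. The sketch below illustrates that pipeline with a toy recourse function and a least-squares surrogate standing in for the paper's neural network (which is embedded in a MILP); all names and functional forms here are assumptions for exposition.

```python
import numpy as np

# Toy 2SP pipeline: fit a surrogate for the second-stage value Q(x, xi),
# then minimize first-stage cost + E[Q_hat] over scenarios.
rng = np.random.default_rng(3)

def Q(x, xi):                               # pretend this requires a solver
    return 5.0 * np.maximum(xi - x, 0.0)    # recourse cost for a shortfall

def features(x, xi):
    x, xi = np.broadcast_arrays(x, xi)
    return np.stack([np.ones_like(x), x, xi, x * xi, x**2, xi**2], axis=-1)

xs, xis = rng.uniform(0, 2, 500), rng.uniform(0, 2, 500)
w, *_ = np.linalg.lstsq(features(xs, xis), Q(xs, xis), rcond=None)

def expected_total_cost(x, scenarios, c=1.0):
    return c * x + (features(x, scenarios) @ w).mean()

scen = rng.uniform(0, 2, 200)
grid = np.linspace(0, 2, 101)
best_x = grid[np.argmin([expected_total_cost(x, scen) for x in grid])]
```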

Learning to generalize provably in learning to optimize

J Yang, T Chen, M Zhu, F He, D Tao… - International …, 2023 - proceedings.mlr.press
Learning to optimize (L2O), which automates the design of optimizers through data-driven
approaches, has gained increasing popularity. However, current L2O methods often suffer from poor …

Scalable learning to optimize: A learned optimizer can train big models

X Chen, T Chen, Y Cheng, W Chen… - … on Computer Vision, 2022 - Springer
Learning to optimize (L2O) has gained increasing attention since it demonstrates a
promising path to automating and accelerating the optimization of complicated problems …

A Mathematics-Inspired Learning-to-Optimize Framework for Decentralized Optimization

Y He, Q Shang, X Huang, J Liu, K Yuan - arXiv preprint arXiv:2410.01700, 2024 - arxiv.org
Most decentralized optimization algorithms are handcrafted. While endowed with strong
theoretical guarantees, these algorithms generally target a broad class of problems, thereby …
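
A representative handcrafted baseline in this setting is decentralized gradient descent: each node averages its neighbors' iterates through a doubly stochastic mixing matrix W, then takes a local gradient step. A learning-to-optimize approach would parameterize and train pieces of such an update; the ring topology and quadratic losses below are illustrative.

```python
import numpy as np

# Decentralized gradient descent on a 4-node ring: mix neighbors' iterates
# through doubly stochastic W, then take local gradient steps. Node i holds
# the private loss 0.5 * ||x - t_i||^2.
n, d, lr = 4, 2, 0.1
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])
targets = np.arange(n * d, dtype=float).reshape(n, d)
X = np.zeros((n, d))                        # row i is node i's local iterate

for _ in range(200):
    grads = X - targets                     # local gradients, one per node
    X = W @ X - lr * grads                  # communicate, then descend

# Rows land near the average of the targets (the consensus optimum); plain
# DGD keeps an O(lr) bias that handcrafted fixes such as gradient tracking
# remove, which is exactly the design space learned alternatives target.
print(X)
```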

Learning from Offline and Online Experiences: A Hybrid Adaptive Operator Selection Framework

J Pei, J Liu, Y Mei - Proceedings of the Genetic and Evolutionary …, 2024 - dl.acm.org
In many practical applications, similar optimisation problems or scenarios appear
repeatedly. Learning from previous problem-solving experiences can help adjust …
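
Online adaptive operator selection, the setting this hybrid framework extends with offline experience, is commonly cast as a bandit problem over variation operators credited by the fitness improvements they produce. An illustrative UCB-style sketch follows; the two mutation operators and the toy objective are assumptions, and the paper's framework additionally warm-starts such statistics from prior runs:

```python
import numpy as np

# UCB-style online operator selection: pick a variation operator by an upper
# confidence bound on its average fitness improvement, apply it, and credit
# the observed gain back to that operator.
rng = np.random.default_rng(4)
ops = [lambda x: x + rng.normal(scale=0.5, size=x.shape),   # large mutation
       lambda x: x + rng.normal(scale=0.05, size=x.shape)]  # small mutation
counts, gains = np.ones(len(ops)), np.zeros(len(ops))

def fitness(x):
    return -np.sum(x ** 2)                  # maximize: toy sphere problem

x = rng.normal(size=5)
f = fitness(x)
for t in range(1, 301):
    ucb = gains / counts + np.sqrt(2.0 * np.log(t) / counts)
    k = int(np.argmax(ucb))                 # operator chosen by UCB score
    y = ops[k](x)
    fy = fitness(y)
    counts[k] += 1
    gains[k] += max(fy - f, 0.0)            # credit assignment to operator k
    if fy > f:                              # keep the candidate if it improves
        x, f = y, fy
# A hybrid offline/online scheme would initialize counts/gains from past runs.
```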