Learning to optimize: A primer and a benchmark
Learning to optimize (L2O) is an emerging approach that leverages machine learning to
develop optimization methods, aiming at reducing the laborious iterations of hand …
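For orientation across the L2O entries below, here is a minimal sketch of the generic idea: a small neural network proposes per-parameter updates and is itself meta-trained by backpropagating through an unrolled inner optimization. This is an illustrative PyTorch toy under assumed choices (gradient/sign input features, a random quadratic inner problem, a 20-step unroll), not the method of any paper listed here.

```python
import torch
import torch.nn as nn

class LearnedOptimizer(nn.Module):
    """Coordinate-wise learned update rule: maps (grad, sign(grad)) -> step."""
    def __init__(self, hidden=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, grad):
        feats = torch.stack([grad, grad.sign()], dim=-1)   # (n, 2) per-coordinate features
        return self.net(feats).squeeze(-1)                 # (n,) proposed update

def unrolled_meta_loss(opt_net, steps=20):
    # Inner task (an assumption for illustration): minimize ||A x - b||^2.
    A, b = torch.randn(10, 5), torch.randn(10)
    x = torch.zeros(5, requires_grad=True)
    total = 0.0
    for _ in range(steps):
        f = ((A @ x - b) ** 2).mean()
        (g,) = torch.autograd.grad(f, x, create_graph=True)
        x = x + opt_net(g)        # learned update replaces "-lr * g"
        total = total + f
    return total / steps          # meta-loss: average inner loss over the unroll

opt_net = LearnedOptimizer()
meta_opt = torch.optim.Adam(opt_net.parameters(), lr=1e-3)
for _ in range(100):              # meta-training: backprop through the unroll
    meta_opt.zero_grad()
    unrolled_meta_loss(opt_net).backward()
    meta_opt.step()
```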
Velo: Training versatile learned optimizers by scaling up
While deep learning models have replaced hand-designed features across many domains,
these models are still trained with hand-designed optimizers. In this work, we leverage the …
Provably optimal memory capacity for modern Hopfield models: Transformer-compatible dense associative memories as spherical codes
We study the optimal memorization capacity of modern Hopfield models and Kernelized
Hopfield Models (KHMs), a transformer-compatible class of Dense Associative Memories …
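For context on this entry, the standard retrieval update of a modern Hopfield model (dense associative memory), which is the well-known attention-like form behind the "transformer-compatible" label; the remark on kernelized models is paraphrased as an assumption, not quoted from the paper.

```latex
% Retrieval update of a modern Hopfield model with stored patterns
% X = [x_1, ..., x_M] (columns) and query/state \xi:
\[
  \xi^{\mathrm{new}} \;=\; X\,\mathrm{softmax}\!\big(\beta\, X^{\top}\xi\big),
\]
% i.e. a single softmax-attention step over the stored patterns.
% Kernelized Hopfield models (KHMs) replace the inner products X^{\top}\xi
% with similarities under a feature map \Phi(\cdot) (assumed notation here);
% the paper asks how many patterns such models can provably store.
```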
A closer look at learned optimization: Stability, robustness, and inductive biases
Learned optimizers---neural networks that are trained to act as optimizers---have the
potential to dramatically accelerate training of machine learning models. However, even …
Learned robust PCA: A scalable deep unfolding approach for high-dimensional outlier detection
Robust principal component analysis (RPCA) is a critical tool in modern machine learning
that detects outliers in the task of low-rank matrix reconstruction. In this paper, we propose …
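For reference, the textbook RPCA decomposition that learned/unfolded approaches like this one target; the convex program below is the classical formulation, not the paper's specific algorithm.

```latex
% Robust PCA: split an observed matrix M into a low-rank component L and a
% sparse outlier component S. The classical convex surrogate is
\[
  \min_{L,\;S}\; \|L\|_{*} \;+\; \lambda\,\|S\|_{1}
  \quad \text{s.t.} \quad L + S = M,
\]
% where \|\cdot\|_{*} is the nuclear norm and \lambda > 0 balances the terms.
% Deep unfolding turns the iterations of such a solver into a fixed number of
% network layers with learned step sizes/thresholds (general idea only;
% details differ per method).
```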
Neur2SP: Neural two-stage stochastic programming
Stochastic Programming is a powerful modeling framework for decision-making under
uncertainty. In this work, we tackle two-stage stochastic programs (2SPs), the most widely …
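As background for this entry, the generic two-stage stochastic program in its standard textbook form; nothing below is specific to Neur2SP beyond what the snippet states.

```latex
% Two-stage stochastic program: commit to x before the uncertainty \xi is
% revealed, then take recourse y afterwards.
\[
  \min_{x \in X}\; c^{\top}x \;+\; \mathbb{E}_{\xi}\!\left[Q(x,\xi)\right],
  \qquad
  Q(x,\xi) \;=\; \min_{y \ge 0}\;\big\{\, q(\xi)^{\top}y \;:\; W(\xi)\,y \;\ge\; h(\xi) - T(\xi)\,x \,\big\}.
\]
% Evaluating the expected recourse value Q is the usual computational
% bottleneck, which is what motivates learned approximations of it.
```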
Learning to generalize provably in learning to optimize
Learning to optimize (L2O), which automates the design of optimizers by data-driven
approaches, has gained increasing popularity. However, current L2O methods often suffer from poor …
Scalable learning to optimize: A learned optimizer can train big models
Learning to optimize (L2O) has gained increasing attention since it demonstrates a
promising path to automating and accelerating the optimization of complicated problems …
A Mathematics-Inspired Learning-to-Optimize Framework for Decentralized Optimization
Most decentralized optimization algorithms are handcrafted. While endowed with strong
theoretical guarantees, these algorithms generally target a broad class of problems, thereby …
Learning from Offline and Online Experiences: A Hybrid Adaptive Operator Selection Framework
In many practical applications, similar optimisation problems or scenarios
often appear repeatedly. Learning from previous problem-solving experiences can help adjust …