End-to-end constrained optimization learning: A survey
J Kotary, F Fioretto, P Van Hentenryck… - arXiv preprint, 2021 - arxiv.org
This paper surveys the recent attempts at leveraging machine learning to solve constrained
optimization problems. It focuses on surveying the work on integrating combinatorial solvers …
Only train once: A one-shot neural network training and pruning framework
Structured pruning is a commonly used technique in deploying deep neural networks
(DNNs) onto resource-constrained devices. However, the existing pruning methods are …
Fixed point strategies in data science
The goal of this article is to promote the use of fixed point strategies in data science by
showing that they provide a simplifying and unifying framework to model, analyze, and solve …
Safe screening rules for l0-regression from perspective relaxations
We give safe screening rules to eliminate variables from regression with $\ell_0$
regularization or cardinality constraint. These rules are based on guarantees that a feature …
Block coordinate regularization by denoising
We consider the problem of estimating a vector from its noisy measurements using a prior
specified only through a denoising function. Recent work on plug-and-play priors (PnP) and …
Celer: a fast solver for the lasso with dual extrapolation
Convex sparsity-inducing regularizations are ubiquitous in high-dimensional machine
learning, but solving the resulting optimization problems can be slow. To accelerate solvers …
Learning step sizes for unfolded sparse coding
Sparse coding is typically solved by iterative optimization techniques, such as the Iterative
Shrinkage-Thresholding Algorithm (ISTA). Unfolding and learning weights of ISTA using …
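The snippet above names ISTA as the baseline that unfolded networks learn step sizes for. As a hedged illustration (a minimal sketch, not code from the cited paper), plain ISTA for the lasso looks like the following; the fixed step size $1/L$ below is exactly the kind of quantity that unfolded variants replace with learned, per-layer parameters:

```python
import numpy as np

def soft_threshold(x, tau):
    # Proximal operator of the l1 norm: shrink each entry toward zero by tau.
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def ista(A, b, lam, n_iter=500):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 with ISTA.

    Uses the fixed step size 1/L, where L = ||A||_2^2 is the Lipschitz
    constant of the gradient of the smooth term.
    """
    L = np.linalg.norm(A, 2) ** 2      # spectral norm squared
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)       # gradient of the quadratic data term
        x = soft_threshold(x - grad / L, lam / L)
    return x
```

Each iteration is one gradient step on the smooth loss followed by a soft-thresholding step; unfolding interprets a fixed number of such iterations as the layers of a feed-forward network.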
Doubly Sparse Asynchronous Learning
Parallel optimization has become popular for large-scale learning in the past decades.
However, existing methods suffer from huge computational cost, memory usage, and …
An accelerated doubly stochastic gradient method with faster explicit model identification
Sparsity regularized loss minimization problems play an important role in various fields
including machine learning, data mining, and modern statistics. Proximal gradient descent …
Hybrid ISTA: Unfolding ISTA with convergence guarantees using free-form deep neural networks
It is promising to solve linear inverse problems by unfolding iterative algorithms (e.g., the iterative
shrinkage thresholding algorithm (ISTA)) as deep neural networks (DNNs) with learnable …