A primer on zeroth-order optimization in signal processing and machine learning: Principles, recent advances, and applications
Zeroth-order (ZO) optimization is a subset of gradient-free optimization that emerges in many
signal processing and machine learning (ML) applications. It is used for solving optimization …
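Zeroth-order methods of the kind this primer surveys replace gradients with estimates built purely from function evaluations. A minimal sketch of a standard two-point random-direction gradient estimator is below; the function name and parameters are illustrative, not taken from the paper.

```python
import numpy as np

def zo_gradient(f, x, mu=1e-4, n_queries=20, rng=None):
    """Two-point zeroth-order gradient estimate of f at x.

    Averages (f(x + mu*u) - f(x - mu*u)) / (2*mu) * u over random
    Gaussian directions u; only function evaluations are required,
    no analytic gradient.
    """
    rng = np.random.default_rng(rng)
    g = np.zeros_like(x, dtype=float)
    for _ in range(n_queries):
        u = rng.standard_normal(x.shape)
        g += (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u
    return g / n_queries

# Sanity check on a quadratic, where the true gradient is 2*x.
f = lambda x: float(np.sum(x ** 2))
x = np.array([1.0, -2.0, 0.5])
g = zo_gradient(f, x, n_queries=2000, rng=0)
```

Since E[u uᵀ] = I for Gaussian directions, the estimate is unbiased for smooth f as mu → 0; more queries trade function evaluations for lower variance.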
A survey of stochastic simulation and optimization methods in signal processing
Modern signal processing (SP) methods rely very heavily on probability and statistics to
solve challenging SP problems. SP methods are now expected to deal with ever more …
AutoCompress: An automatic DNN structured pruning framework for ultra-high compression rates
Structured weight pruning is a representative model compression technique of DNNs to
reduce the storage and computation requirements and accelerate inference. An automatic …
ADMM-NN: An algorithm-hardware co-design framework of DNNs using alternating direction methods of multipliers
Model compression is an important technique to facilitate efficient embedded and hardware
implementations of deep neural networks (DNNs); a number of prior works are dedicated to …
Asynchronous distributed ADMM for consensus optimization
R Zhang, J Kwok - International conference on machine …, 2014 - proceedings.mlr.press
Distributed optimization algorithms are highly attractive for solving big data problems. In
particular, many machine learning problems can be formulated as the global consensus …
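The global consensus formulation this entry refers to splits a sum of local objectives across workers and enforces agreement through ADMM. A minimal sketch for quadratic local objectives follows; the function name and the toy objective are assumptions for illustration, not the paper's asynchronous algorithm.

```python
import numpy as np

def consensus_admm(a, rho=1.0, n_iters=50):
    """Consensus ADMM for min_x sum_i 0.5*||x - a_i||^2.

    Worker i holds a local copy x_i and a scaled dual u_i; the global
    variable z averages (x_i + u_i). For this quadratic, the optimum
    is the mean of the a_i, which the iterates approach.
    """
    n, d = a.shape
    x = np.zeros((n, d))
    u = np.zeros((n, d))
    z = np.zeros(d)
    for _ in range(n_iters):
        # Local proximal updates (closed form for the quadratic f_i).
        x = (a + rho * (z - u)) / (1.0 + rho)
        # Global averaging (consensus) step.
        z = (x + u).mean(axis=0)
        # Scaled dual ascent.
        u = u + x - z
    return z

a = np.array([[0.0, 0.0], [2.0, 0.0], [1.0, 3.0]])
z = consensus_admm(a)
```

This synchronous loop is the baseline the asynchronous variant relaxes: there, workers update x_i and u_i without waiting for every peer before the averaging step.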
Structured adversarial attack: Towards general implementation and better interpretability
When generating adversarial examples to attack deep neural networks (DNNs), the Lp norm of
the added perturbation is usually used to measure the similarity between the original image and …
Stochastic primal-dual coordinate method for regularized empirical risk minimization
We consider a generic convex optimization problem associated with regularized empirical
risk minimization of linear predictors. The problem structure allows us to reformulate it as a …
Stochastic alternating direction method of multipliers
The Alternating Direction Method of Multipliers (ADMM) has received considerable
attention recently due to the tremendous demand from large-scale and data-distributed …
Non-structured DNN weight pruning—Is it beneficial in any platform?
Large deep neural network (DNN) models pose the key challenge to energy efficiency due
to the significantly higher energy consumption of off-chip DRAM accesses than arithmetic or …
Scalable plug-and-play ADMM with convergence guarantees
Plug-and-play priors (PnP) is a broadly applicable methodology for solving inverse
problems by exploiting statistical priors specified as denoisers. Recent work has reported …