A primer on zeroth-order optimization in signal processing and machine learning: Principals, recent advances, and applications

S Liu, PY Chen, B Kailkhura, G Zhang… - IEEE Signal …, 2020 - ieeexplore.ieee.org
Zeroth-order (ZO) optimization is a subset of gradient-free optimization that emerges in many
signal processing and machine learning (ML) applications. It is used for solving optimization …
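
ZO methods of this kind typically replace the gradient with a finite-difference estimate built from function evaluations only. Below is a minimal sketch of a two-point random-direction estimator used inside a plain descent loop; the toy objective, smoothing radius mu, number of probe directions, and step size are illustrative assumptions, not details taken from the primer.

import numpy as np

def zo_gradient(f, x, mu=1e-3, num_dirs=20, rng=np.random.default_rng(0)):
    """Estimate grad f(x) from function values only, using random probe directions."""
    g = np.zeros_like(x)
    for _ in range(num_dirs):
        u = rng.standard_normal(x.size)              # random probe direction
        g += (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u
    return g / num_dirs

# Usage: zeroth-order gradient descent on a toy quadratic.
f = lambda x: np.sum((x - 1.0) ** 2)
x = np.zeros(5)
for _ in range(300):
    x -= 0.05 * zo_gradient(f, x)
print(np.round(x, 2))                                # close to the minimizer at 1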

A survey of stochastic simulation and optimization methods in signal processing

M Pereyra, P Schniter, E Chouzenoux… - IEEE Journal of …, 2015 - ieeexplore.ieee.org
Modern signal processing (SP) methods rely very heavily on probability and statistics to
solve challenging SP problems. SP methods are now expected to deal with ever more …

AutoCompress: An automatic DNN structured pruning framework for ultra-high compression rates

N Liu, X Ma, Z Xu, Y Wang, J Tang, J Ye - Proceedings of the AAAI …, 2020 - ojs.aaai.org
Structured weight pruning is a representative DNN model compression technique that reduces
storage and computation requirements and accelerates inference. An automatic …
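
Structured pruning removes whole structures (for example, entire convolutional filters) rather than individual weights, so the remaining tensor stays dense and hardware-friendly. The sketch below prunes filters by L2 norm; it illustrates the general idea only and is not the AutoCompress procedure, and the tensor shape and keep ratio are illustrative assumptions.

import numpy as np

def prune_filters(weight, keep_ratio=0.5):
    """weight: (out_channels, in_channels, kH, kW). Zero out whole output filters."""
    norms = np.linalg.norm(weight.reshape(weight.shape[0], -1), axis=1)
    k = max(1, int(round(keep_ratio * weight.shape[0])))
    keep = np.argsort(norms)[-k:]            # indices of the strongest filters
    mask = np.zeros(weight.shape[0], dtype=bool)
    mask[keep] = True
    pruned = weight.copy()
    pruned[~mask] = 0.0                      # entire filters removed -> regular sparsity
    return pruned, mask

rng = np.random.default_rng(0)
w = rng.standard_normal((8, 3, 3, 3))
pw, kept = prune_filters(w, keep_ratio=0.25)
print(kept.sum(), "of", w.shape[0], "filters kept")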

ADMM-NN: An algorithm-hardware co-design framework of DNNs using alternating direction methods of multipliers

A Ren, T Zhang, S Ye, J Li, W Xu, X Qian, X Lin… - Proceedings of the …, 2019 - dl.acm.org
Model compression is an important technique to facilitate efficient embedded and hardware
implementations of deep neural networks (DNNs); a number of prior works are dedicated to …
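
ADMM-based compression of this kind treats sparsity as a constraint and alternates a training-style update of the weights, a projection of an auxiliary copy onto the sparse set, and a dual update. The sketch below shows that loop on a toy quadratic loss; the loss, rho, learning rate, and sparsity level are illustrative assumptions rather than details from the paper.

import numpy as np

def project_k_sparse(v, k):
    """Keep the k largest-magnitude entries, zero the rest."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

rng = np.random.default_rng(0)
w_star = rng.standard_normal(20)             # target weights of a toy quadratic loss
loss_grad = lambda w: w - w_star             # gradient of 0.5*||w - w_star||^2

w = np.zeros(20); z = np.zeros(20); u = np.zeros(20)
rho, lr, k = 1.0, 0.1, 5
for _ in range(300):
    # W-step: gradient step on loss + (rho/2)*||w - z + u||^2
    w -= lr * (loss_grad(w) + rho * (w - z + u))
    z = project_k_sparse(w + u, k)           # Z-step: Euclidean projection onto k-sparse set
    u += w - z                               # dual update
print(np.count_nonzero(np.round(z, 6)), "nonzeros in the pruned solution")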

Asynchronous distributed ADMM for consensus optimization

R Zhang, J Kwok - International conference on machine …, 2014 - proceedings.mlr.press
Distributed optimization algorithms are highly attractive for solving big data problems. In
particular, many machine learning problems can be formulated as the global consensus …
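
In the global consensus formulation, each worker keeps a local copy of the variable and ADMM alternates local updates, an averaging step for the global variable, and dual updates. Below is a minimal synchronous sketch with quadratic local costs (the paper's contribution is the asynchronous variant); the costs, rho, and worker count are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal((4, 3))              # worker i holds f_i(x) = 0.5*||x - a_i||^2
rho = 1.0
x = np.zeros_like(a)                         # local copies
u = np.zeros_like(a)                         # scaled dual variables
z = np.zeros(3)                              # global consensus variable

for _ in range(50):
    x = (a + rho * (z - u)) / (1.0 + rho)    # local x_i-updates (closed form for quadratics)
    z = (x + u).mean(axis=0)                 # global z-update: averaging
    u += x - z                               # dual updates
print(np.allclose(z, a.mean(axis=0), atol=1e-3))   # consensus reaches the average of the a_i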

Structured adversarial attack: Towards general implementation and better interpretability

K Xu, S Liu, P Zhao, PY Chen, H Zhang, Q Fan… - arXiv preprint arXiv …, 2018 - arxiv.org
When generating adversarial examples to attack deep neural networks (DNNs), the Lp norm of
the added perturbation is usually used to measure the similarity between the original image and …
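
For context, the most common baseline bounds the perturbation in the Linf norm and takes a signed-gradient step. The sketch below shows that baseline on a toy differentiable "model"; it is not the structured (group-sparse) attack proposed in the paper, and the model, loss, and epsilon are illustrative assumptions.

import numpy as np

def linf_attack(x, grad_loss_wrt_x, epsilon=0.03):
    """One signed-gradient step, kept inside the Linf ball of radius epsilon."""
    delta = epsilon * np.sign(grad_loss_wrt_x)
    return np.clip(x + delta, 0.0, 1.0)       # keep a valid image range

# Toy "model": loss = -w.x for the true class, so the gradient w.r.t. x is -w.
rng = np.random.default_rng(0)
x = rng.uniform(size=(8, 8))                  # a fake 8x8 image
w = rng.standard_normal((8, 8))
x_adv = linf_attack(x, grad_loss_wrt_x=-w)
print(np.max(np.abs(x_adv - x)) <= 0.03 + 1e-9)   # perturbation stays in the Linf ball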

Stochastic primal-dual coordinate method for regularized empirical risk minimization

Y Zhang, L Xiao - Journal of Machine Learning Research, 2017 - jmlr.org
We consider a generic convex optimization problem associated with regularized empirical
risk minimization of linear predictors. The problem structure allows us to reformulate it as a …
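
The saddle-point reformulation mentioned above rests on Fenchel duality, phi(u) = max_alpha [alpha*u - phi*(alpha)], applied to each loss term. The small numerical check below illustrates that identity for the squared loss; the concrete loss and the grid search are illustrative choices, not part of the paper's algorithm.

import numpy as np

b, u = 0.7, 2.0
phi = lambda t: 0.5 * (t - b) ** 2            # squared loss
phi_conj = lambda a: 0.5 * a ** 2 + a * b     # its convex conjugate

alphas = np.linspace(-10, 10, 200001)
lhs = np.max(alphas * u - phi_conj(alphas))   # max over alpha of alpha*u - phi*(alpha)
print(np.isclose(lhs, phi(u), atol=1e-6))     # True: the maximization recovers phi(u)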

Stochastic alternating direction method of multipliers

H Ouyang, N He, L Tran, A Gray - … conference on machine …, 2013 - proceedings.mlr.press
The Alternating Direction Method of Multipliers (ADMM) has received significant
attention recently due to the tremendous demand from large-scale and data-distributed …
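
Stochastic variants of ADMM replace the exact minimization over the smooth term with a step based on a sampled (sub)gradient. Below is a minimal sketch for a lasso-type problem with the split x = z: a stochastic-gradient x-step, an exact soft-thresholding z-step, and a dual update. The data, rho, lam, and step-size schedule are illustrative assumptions, and the x-step is a simplified linearized variant rather than the exact method analyzed in the paper.

import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 10
x_true = np.zeros(d); x_true[:3] = [3.0, -2.0, 1.5]
A = rng.standard_normal((n, d))
b = A @ x_true + 0.01 * rng.standard_normal(n)

lam, rho = 0.1, 1.0
soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = np.zeros(d); z = np.zeros(d); u = np.zeros(d)
for t in range(1, 3001):
    i = rng.integers(n)
    g = (A[i] @ x - b[i]) * A[i]              # stochastic gradient of the smooth loss
    eta = 1.0 / (rho * np.sqrt(t) + 10.0)     # decaying step size
    x = x - eta * (g + rho * (x - z + u))     # linearized stochastic x-step
    z = soft(x + u, lam / rho)                # exact z-step: soft-thresholding
    u += x - z                                # dual update
print(np.round(z, 2))                         # approximately sparse, close to x_true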

Non-structured DNN weight pruning—Is it beneficial in any platform?

X Ma, S Lin, S Ye, Z He, L Zhang… - IEEE transactions on …, 2021 - ieeexplore.ieee.org
Large deep neural network (DNN) models pose a key challenge to energy efficiency because
off-chip DRAM accesses consume significantly more energy than arithmetic or …
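
Non-structured pruning zeroes individual weights by magnitude, which yields high sparsity but an irregular pattern whose hardware payoff the paper questions. A minimal sketch follows; the layer shape and sparsity level are illustrative assumptions.

import numpy as np

def magnitude_prune(weight, sparsity=0.9):
    """Zero the smallest-magnitude entries so a `sparsity` fraction is removed."""
    flat = np.abs(weight).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weight.copy()
    threshold = np.partition(flat, k - 1)[k - 1]
    return np.where(np.abs(weight) <= threshold, 0.0, weight)

rng = np.random.default_rng(0)
w = rng.standard_normal((64, 64))
pw = magnitude_prune(w, sparsity=0.9)
print(f"{np.mean(pw == 0.0):.2f} of the weights are zero")   # ~0.90, scattered irregularly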

Scalable plug-and-play ADMM with convergence guarantees

Y Sun, Z Wu, X Xu, B Wohlberg… - IEEE Transactions on …, 2021 - ieeexplore.ieee.org
Plug-and-play priors (PnP) is a broadly applicable methodology for solving inverse
problems by exploiting statistical priors specified as denoisers. Recent work has reported …
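
In PnP ADMM, the proximal step associated with the prior is simply replaced by a call to a denoiser. The sketch below runs that loop on a 1-D denoising problem, with a moving-average filter standing in for the learned denoisers and general forward operators considered in the paper; the forward model (identity), rho, and filter width are illustrative assumptions.

import numpy as np

def denoise(v, width=5):
    """Toy denoiser: 1-D moving average (stands in for a learned denoiser)."""
    kernel = np.ones(width) / width
    return np.convolve(v, kernel, mode="same")

rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 4 * np.pi, 200))
y = clean + 0.3 * rng.standard_normal(200)    # noisy measurements, forward model = identity

rho = 1.0
x = y.copy(); z = y.copy(); u = np.zeros_like(y)
for _ in range(30):
    x = (y + rho * (z - u)) / (1.0 + rho)     # data-fidelity prox for 0.5*||x - y||^2
    z = denoise(x + u)                        # prior step replaced by the plug-in denoiser
    u += x - z                                # dual update
print(np.linalg.norm(x - clean) < np.linalg.norm(y - clean))   # reconstruction improves on y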