Accelerated zeroth-order and first-order momentum methods from mini to minimax optimization

F Huang, S Gao, J Pei, H Huang - Journal of Machine Learning Research, 2022 - jmlr.org
In this paper, we propose a class of accelerated zeroth-order and first-order momentum
methods for both nonconvex mini-optimization and minimax-optimization. Specifically, we …
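
For context, a minimal sketch of the two building blocks such methods combine: a two-point random-direction zeroth-order gradient estimate and a momentum update. The toy objective, step sizes, and smoothing radius below are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def zo_gradient(f, x, mu=1e-4, rng=np.random.default_rng(0)):
    """Two-point estimator: g ~ d * (f(x + mu*u) - f(x - mu*u)) / (2*mu) * u."""
    u = rng.standard_normal(x.shape)
    u /= np.linalg.norm(u)
    return x.size * (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u

def zo_momentum_minimize(f, x0, lr=0.05, beta=0.9, iters=500):
    x, v = x0.copy(), np.zeros_like(x0)
    for _ in range(iters):
        g = zo_gradient(f, x)
        v = beta * v + (1 - beta) * g   # momentum: exponentially averaged ZO estimate
        x -= lr * v
    return x

# Toy usage: minimize a smooth nonconvex quartic using only function evaluations.
f = lambda x: np.sum(x**4 - 2 * x**2 + 1)
print(zo_momentum_minimize(f, x0=np.array([1.5, -0.7])))
```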

Zeroth-order algorithms for stochastic distributed nonconvex optimization

X Yi, S Zhang, T Yang, KH Johansson - Automatica, 2022 - Elsevier
In this paper, we consider a stochastic distributed nonconvex optimization problem with the
cost function being distributed over n agents having access only to zeroth-order (ZO) …
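
A minimal sketch of the distributed pattern underlying this setting: each agent mixes its iterate with neighbours through a doubly stochastic matrix W, then takes a local zeroth-order step on its private cost. The mixing matrix, toy costs, and step sizes are assumptions for illustration, not the paper's scheme.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 4, 3                                   # 4 agents, 3 decision variables
W = np.array([[.5, .5, 0., 0.],               # doubly stochastic mixing matrix
              [.5, .25, .25, 0.],
              [0., .25, .5, .25],
              [0., 0., .25, .75]])
targets = rng.standard_normal((n, d))         # agent i only evaluates f_i(x) = ||x - t_i||^2
f = [lambda x, t=t: np.sum((x - t)**2) for t in targets]

X = np.zeros((n, d))                          # one row per agent
mu, lr = 1e-4, 0.1
for _ in range(300):
    X = W @ X                                 # consensus (communication) step
    for i in range(n):                        # local two-point ZO step on the private cost
        u = rng.standard_normal(d); u /= np.linalg.norm(u)
        g = d * (f[i](X[i] + mu * u) - f[i](X[i] - mu * u)) / (2 * mu) * u
        X[i] = X[i] - lr * g
print(X.mean(axis=0), targets.mean(axis=0))   # agents should approach the average target
```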

Curvilinear distance metric learning

S Chen, L Luo, J Yang, C Gong, J Li… - Advances in Neural …, 2019 - proceedings.neurips.cc
Distance Metric Learning aims to learn an appropriate metric that faithfully
measures the distance between two data points. Traditional metric learning methods usually …
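
For reference, the traditional linear baseline such work generalizes is the Mahalanobis distance d_M(x, y) = sqrt((x - y)^T M (x - y)) with M positive semidefinite; a minimal sketch, with a hypothetical random M for illustration:

```python
import numpy as np

def mahalanobis(x, y, M):
    diff = x - y
    return float(np.sqrt(diff @ M @ diff))

rng = np.random.default_rng(2)
L = rng.standard_normal((3, 3))
M = L.T @ L                      # parameterizing M = L^T L keeps it positive semidefinite
x, y = rng.standard_normal(3), rng.standard_normal(3)
print(mahalanobis(x, y, M), np.linalg.norm(L @ (x - y)))  # equal by construction
```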

Accelerated variance reduction stochastic ADMM for large-scale machine learning

Y Liu, F Shang, H Liu, L Kong, L Jiao… - IEEE Transactions on …, 2020 - ieeexplore.ieee.org
Recently, many stochastic variance reduced alternating direction methods of multipliers
(ADMMs) (e.g., SAG-ADMM and SVRG-ADMM) have made exciting progress such as linear …
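
The "variance reduction" ingredient in methods like SVRG-ADMM is a control-variate gradient estimator built around a periodic snapshot. A sketch of that estimator alone, on a toy least-squares objective (the ADMM splitting itself is omitted; all data and step sizes are assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
A, b = rng.standard_normal((50, 5)), rng.standard_normal(50)
grad_i = lambda x, i: A[i] * (A[i] @ x - b[i])        # gradient of (1/2)(a_i^T x - b_i)^2
full_grad = lambda x: A.T @ (A @ x - b) / len(b)

x = np.zeros(5)
for epoch in range(20):
    x_snap, mu = x.copy(), full_grad(x)               # snapshot point and its full gradient
    for _ in range(len(b)):
        i = rng.integers(len(b))
        g = grad_i(x, i) - grad_i(x_snap, i) + mu     # unbiased, variance-reduced estimate
        x -= 0.01 * g
print(np.linalg.norm(full_grad(x)))                   # should be small after a few epochs
```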

Accelerated stochastic gradient-free and projection-free methods

F Huang, L Tao, S Chen - International conference on …, 2020 - proceedings.mlr.press
In this paper, we propose a class of accelerated stochastic gradient-free and projection-free
(a.k.a. zeroth-order Frank-Wolfe) methods to solve the constrained stochastic and finite-sum …
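
A sketch of the projection-free (Frank-Wolfe) step these methods build on: the projection is replaced by a linear minimization oracle, which over an l1 ball is a single signed vertex, and the gradient is a two-point zeroth-order estimate. The objective, radius, and step-size rule are illustrative assumptions:

```python
import numpy as np

def lmo_l1(g, radius=1.0):
    s = np.zeros_like(g)
    i = np.argmax(np.abs(g))
    s[i] = -radius * np.sign(g[i])          # l1-ball vertex minimizing <g, s>
    return s

rng = np.random.default_rng(4)
f = lambda x: np.sum((x - np.array([0.3, -0.2, 0.4]))**2)   # black-box objective
x, mu = np.zeros(3), 1e-4
for t in range(200):
    u = rng.standard_normal(3); u /= np.linalg.norm(u)
    g = 3 * (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u  # ZO gradient estimate
    gamma = 2.0 / (t + 2)                                    # classic Frank-Wolfe step size
    x = (1 - gamma) * x + gamma * lmo_l1(g)                  # convex combination stays feasible
print(x)
```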

Subspace selection based prompt tuning with nonconvex nonsmooth black-box optimization

H Zhang, H Zhang, B Gu, Y Chang - Proceedings of the 30th ACM …, 2024 - dl.acm.org
In this paper, we introduce a novel framework for black-box prompt tuning with a subspace
learning and selection strategy, leveraging derivative-free optimization algorithms. This …
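
A sketch of the subspace idea common to black-box prompt tuning: optimize a low-dimensional vector z, map it into the prompt-embedding space through a fixed random matrix, and query the model only for loss values. The black_box_loss below is a synthetic stand-in for a model API, and all names and dimensions are assumptions, not this paper's framework:

```python
import numpy as np

rng = np.random.default_rng(5)
D, d = 1024, 10                                  # prompt-embedding dim vs. subspace dim
A = rng.standard_normal((D, d)) / np.sqrt(d)     # fixed random subspace basis
target = rng.standard_normal(D)                  # pretend "good prompt" direction

def black_box_loss(prompt_embedding):            # stand-in for an API call returning a score
    return float(np.sum((prompt_embedding - target)**2))

z, best = np.zeros(d), np.inf
for _ in range(2000):                            # simple (1+1) random search in the subspace
    cand = z + 0.1 * rng.standard_normal(d)
    loss = black_box_loss(A @ cand)
    if loss < best:
        z, best = cand, loss
print(best)
```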

Distributed proximal gradient algorithm for nonconvex optimization over time-varying networks

X Jiang, X Zeng, J Sun, J Chen - IEEE Transactions on Control …, 2022 - ieeexplore.ieee.org
This article studies the distributed nonconvex optimization problem with nonsmooth
regularization, which has wide applications in decentralized learning, estimation, and …
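
The nonsmooth regularizer in such formulations is typically handled through its proximal operator; for an l1 penalty this is soft-thresholding. A centralized, single-machine sketch of the proximal gradient iteration (the article's contribution, running this over a time-varying network, is not reproduced here; data and step sizes are assumptions):

```python
import numpy as np

def soft_threshold(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def prox_grad_step(x, grad, lr, lam):
    return soft_threshold(x - lr * grad, lr * lam)   # prox of lam*||.||_1 after a gradient step

A = np.array([[1.0, 2.0], [3.0, 4.0]]); b = np.array([1.0, -1.0])
x, lr, lam = np.zeros(2), 0.02, 0.1
for _ in range(500):
    x = prox_grad_step(x, A.T @ (A @ x - b), lr, lam)  # smooth part: (1/2)||Ax - b||^2
print(x)
```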

Efficient zeroth-order proximal stochastic method for nonconvex nonsmooth black-box problems

E Kazemi, L Wang - Machine Learning, 2024 - Springer
The proximal gradient method has a major role in solving nonsmooth composite optimization
problems. However, in some machine learning problems related to black-box optimization …
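
Besides the randomized two-point estimator sketched earlier, zeroth-order proximal methods also commonly use a coordinate-wise finite-difference estimate of the full gradient (about d + 1 function queries per estimate). A sketch, not this paper's exact estimator:

```python
import numpy as np

def zo_coordinate_gradient(f, x, mu=1e-5):
    fx, g = f(x), np.zeros_like(x)
    for j in range(x.size):
        e = np.zeros_like(x); e[j] = mu
        g[j] = (f(x + e) - fx) / mu          # forward difference along coordinate j
    return g

f = lambda x: np.sum(x**2) + np.sin(x[0])
print(zo_coordinate_gradient(f, np.array([0.5, -1.0])))   # ~ [1 + cos(0.5), -2.0]
```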

A stochastic alternating direction method of multipliers for non-smooth and non-convex optimization

F Bian, J Liang, X Zhang - Inverse Problems, 2021 - iopscience.iop.org
The alternating direction method of multipliers (ADMM) is a popular first-order method owing to
its simplicity and efficiency. However, similar to other proximal splitting methods, the …
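
For orientation, the ADMM iteration alternates an x-update, a z-update (usually a proximal step), and a dual ascent step on the augmented Lagrangian. A sketch on a convex lasso-type split for clarity; the paper itself targets nonsmooth nonconvex problems, and the data and parameters below are assumptions:

```python
import numpy as np

# min (1/2)||Ax - b||^2 + lam*||z||_1  s.t.  x = z
rng = np.random.default_rng(6)
A, b = rng.standard_normal((30, 5)), rng.standard_normal(30)
lam, rho = 0.1, 1.0
x = z = y = np.zeros(5)
AtA, Atb = A.T @ A, A.T @ b
for _ in range(100):
    x = np.linalg.solve(AtA + rho * np.eye(5), Atb + rho * z - y)                # x-update
    z = np.sign(x + y / rho) * np.maximum(np.abs(x + y / rho) - lam / rho, 0.0)  # z-update (prox)
    y = y + rho * (x - z)                                                        # dual ascent
print(np.max(np.abs(x - z)))   # primal residual should be near zero
```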

Nonconvex zeroth-order stochastic ADMM methods with lower function query complexity

F Huang, S Gao, J Pei, H Huang - IEEE Transactions on Pattern …, 2024 - ieeexplore.ieee.org
Zeroth-order (a.k.a. derivative-free) methods are a class of effective optimization methods for
solving complex machine learning problems, where gradients of the objective functions are …
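
Lower function-query complexity in this line of work typically comes from recursive (SPIDER/SARAH-style) variance reduction applied to zeroth-order difference estimates. A hedged sketch of that recursion on toy quadratic components, not this paper's exact algorithm:

```python
import numpy as np

rng = np.random.default_rng(7)
targets = rng.standard_normal((20, 4))                       # component f_i(x) = ||x - t_i||^2
f = lambda x, i: np.sum((x - targets[i])**2)

def zo_grad(fun, x, u, mu=1e-4):                             # two-point estimate along direction u
    return x.size * (fun(x + mu * u) - fun(x - mu * u)) / (2 * mu) * u

x, x_prev, v, lr = np.zeros(4), np.zeros(4), np.zeros(4), 0.05
for t in range(400):
    u = rng.standard_normal(4); u /= np.linalg.norm(u)
    if t % 20 == 0:                                          # periodic refresh over all components
        v = np.mean([zo_grad(lambda z: f(z, i), x, u) for i in range(20)], axis=0)
    else:                                                    # recursive update from one component,
        i = rng.integers(20)                                 # sharing direction u at x and x_prev
        v = zo_grad(lambda z: f(z, i), x, u) - zo_grad(lambda z: f(z, i), x_prev, u) + v
    x_prev, x = x.copy(), x - lr * v
print(x, targets.mean(axis=0))                               # x should approach the mean target
```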