Auto-train-once: Controller network guided automatic network pruning from scratch
Current techniques for deep neural network (DNN) pruning often involve intricate multi-step
processes that require domain-specific expertise, making their widespread adoption …
Solving a class of non-convex minimax optimization in federated learning
Minimax problems arise throughout machine learning applications, ranging from
adversarial training and policy evaluation in reinforcement learning to AUROC …
A faster decentralized algorithm for nonconvex minimax problems
In this paper, we study the nonconvex-strongly-concave minimax optimization problem in a
decentralized setting. Minimax problems are attracting increasing attention because of …
Sapd+: An accelerated stochastic method for nonconvex-concave minimax problems
We propose a new stochastic method SAPD+ for solving nonconvex-concave minimax
problems of the form $\min_x \max_y \mathcal{L}(x, y) = f(x) + \Phi(x, y) - g(y)$, where $f, g$ are …
Nest your adaptive algorithm for parameter-agnostic nonconvex minimax optimization
Adaptive algorithms like AdaGrad and AMSGrad are successful in nonconvex optimization
owing to their parameter-agnostic ability, requiring no a priori knowledge about problem …
Decentralized riemannian algorithm for nonconvex minimax problems
Minimax optimization over Riemannian manifolds (possibly with nonconvex constraints) has
been actively applied to solve many problems, such as robust dimensionality reduction and …
Two-timescale gradient descent ascent algorithms for nonconvex minimax optimization
We provide a unified analysis of two-timescale gradient descent ascent (TTGDA) for solving
structured nonconvex minimax optimization problems of the form $\min_x \max_{y \in Y} f$ …
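The two-timescale idea (a slow descent step on the min variable, a faster ascent step on the max variable) can be sketched on a toy convex-concave quadratic; this is an illustrative example, not the paper's general nonconvex setting, and all names and step sizes are assumptions.

```python
# Minimal two-timescale gradient descent ascent (TTGDA) sketch on
# f(x, y) = x**2/2 + x*y - y**2/2, whose saddle point is (0, 0).

def grad_x(x, y):
    return x + y          # df/dx

def grad_y(x, y):
    return x - y          # df/dy

x, y = 1.0, 1.0
eta_x, eta_y = 0.01, 0.1  # two timescales: eta_x << eta_y
for _ in range(5000):
    x -= eta_x * grad_x(x, y)   # descent step on the min variable
    y += eta_y * grad_y(x, y)   # ascent step on the max variable

print(x, y)  # both iterates approach the saddle point (0, 0)
```

With both step sizes inside the stable range, the coupled iteration contracts toward the unique saddle point of this quadratic game.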
An augmented Lagrangian deep learning method for variational problems with essential boundary conditions
J Huang, H Wang, T Zhou - arXiv preprint arXiv:2106.14348, 2021 - arxiv.org
This paper is concerned with a novel deep learning method for variational problems with
essential boundary conditions. To this end, we first reformulate the original problem into a …
Fast Objective & Duality Gap Convergence for Non-Convex Strongly-Concave Min-Max Problems with PL Condition
This paper focuses on stochastic methods for solving smooth non-convex strongly-concave
min-max problems, which have received increasing attention due to their potential …
Gradient descent ascent for minimax problems on Riemannian manifolds
In this paper, we study a class of useful minimax problems on Riemannian manifolds and
propose a class of effective Riemannian gradient-based methods to solve these minimax …
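A Riemannian gradient step differs from the Euclidean one in that the gradient is first projected onto the tangent space of the manifold and the iterate is then retracted back onto it. The sketch below applies this to a small minimax problem on the unit sphere; it is an illustrative construction under assumed step sizes, not the paper's exact method.

```python
import numpy as np

# Riemannian gradient descent ascent sketch on the unit sphere:
#   min_{x in S^1} max_{y in R^2}  y^T A x - ||y||^2 / 2.
# The inner maximum equals ||A x||^2 / 2, which is minimized on the sphere
# by the right singular vector of A with the smallest singular value.

A = np.diag([2.0, 1.0])
x = np.array([1.0, 1.0]) / np.sqrt(2.0)   # starting point on the sphere
y = np.zeros(2)
eta_x, eta_y = 0.05, 0.5                  # slow descent on x, fast ascent on y

for _ in range(3000):
    egrad = A.T @ y                        # Euclidean gradient in x
    rgrad = egrad - (x @ egrad) * x        # project onto tangent space at x
    x = x - eta_x * rgrad
    x = x / np.linalg.norm(x)              # retraction: renormalize to the sphere
    y = y + eta_y * (A @ x - y)            # ascent step (strongly concave in y)

print(x)  # converges toward the smallest right singular vector [0, 1]
```

The projection-then-retraction pattern is the core of Riemannian gradient-based methods; only the ascent variable here lives in flat Euclidean space.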