A survey of distributed optimization
In distributed optimization of multi-agent systems, agents cooperate to minimize a global
function which is a sum of local objective functions. Motivated by applications including …
Distributed optimization for control
Advances in wired and wireless technology have necessitated the development of theory,
models, and tools to cope with the new challenges posed by large-scale control and …
Can decentralized algorithms outperform centralized algorithms? A case study for decentralized parallel stochastic gradient descent
Most distributed machine learning systems nowadays, including TensorFlow and CNTK, are
built in a centralized fashion. One bottleneck of centralized algorithms lies in high …
Network topology and communication-computation tradeoffs in decentralized optimization
In decentralized optimization, nodes cooperate to minimize an overall objective function that
is the sum (or average) of per-node private objective functions. Algorithms interleave local …
Asynchronous decentralized parallel stochastic gradient descent
Most commonly used distributed machine learning systems are either synchronous or
centralized asynchronous. Synchronous algorithms like AllReduce-SGD perform poorly in a …
Distributed stochastic gradient tracking methods
In this paper, we study the problem of distributed multi-agent optimization over a network,
where each agent possesses a local cost function that is smooth and strongly convex. The …
[BOOK][B] First-order and stochastic optimization methods for machine learning
G Lan - 2020 - Springer
Since its beginning, optimization has played a vital role in data science. The analysis and
solution methods for many statistical and machine learning models rely on optimization. The …
Robust low-rank tensor recovery with rectification and alignment
Low-rank tensor recovery in the presence of sparse but arbitrary errors is an important
problem with many practical applications. In this work, we propose a general framework that …
Communication-efficient algorithms for decentralized and stochastic optimization
We present a new class of decentralized first-order methods for nonsmooth and stochastic
optimization problems defined over multiagent networks. Considering that communication is …
Communication-efficient distributed deep learning: A comprehensive survey
Distributed deep learning (DL) has become prevalent in recent years to reduce training time
by leveraging multiple computing devices (e.g., GPUs/TPUs) due to larger models and …