The evolution of distributed systems for graph neural networks and their origin in graph processing and deep learning: A survey

J Vatter, R Mayer, HA Jacobsen - ACM Computing Surveys, 2023 - dl.acm.org
Graph neural networks (GNNs) are an emerging research field. This specialized deep
neural network architecture is capable of processing graph-structured data and bridges the …

Variance-reduced methods for machine learning

RM Gower, M Schmidt, F Bach… - Proceedings of the …, 2020 - ieeexplore.ieee.org
Stochastic optimization lies at the heart of machine learning, and its cornerstone is
stochastic gradient descent (SGD), a method introduced over 60 years ago. The last eight …

A survey of optimization methods from a machine learning perspective

S Sun, Z Cao, H Zhu, J Zhao - IEEE Transactions on Cybernetics, 2019 - ieeexplore.ieee.org
Machine learning is developing rapidly, has produced many theoretical breakthroughs, and is
widely applied in various fields. Optimization, as an important part of machine learning, has …

DeltaGrad: Rapid retraining of machine learning models

Y Wu, E Dobriban, S Davidson - International Conference on …, 2020 - proceedings.mlr.press
Machine learning models are not static and may need to be retrained on slightly
changed datasets, for instance, with the addition or deletion of a set of data points. This has …

Comparison of naive Bayes, random forest, decision tree, support vector machines, and logistic regression classifiers for text reviews classification

T Pranckevičius, V Marcinkevičius - Baltic Journal of Modern Computing, 2017 - bjmc.lu.lv
Today, highly scalable computing environments make it possible to carry out
various data-intensive natural language processing and machine-learning tasks. One of …

Incremental learning algorithms and applications

A Gepperth, B Hammer - European Symposium on Artificial Neural …, 2016 - hal.science
Incremental learning refers to learning from streaming data, which arrive over time, with
limited memory resources and, ideally, without sacrificing model accuracy. This setting fits …

A stochastic quasi-Newton method for large-scale optimization

RH Byrd, SL Hansen, J Nocedal, Y Singer - SIAM Journal on Optimization, 2016 - SIAM
The question of how to incorporate curvature information into stochastic approximation
methods is challenging. The direct application of classical quasi-Newton updating …

A linearly-convergent stochastic L-BFGS algorithm

P Moritz, R Nishihara, M Jordan - Artificial Intelligence and …, 2016 - proceedings.mlr.press
We propose a new stochastic L-BFGS algorithm and prove a linear convergence rate for
strongly convex and smooth functions. Our algorithm draws heavily from a recent stochastic …

A progressive batching L-BFGS method for machine learning

R Bollapragada, J Nocedal… - International …, 2018 - proceedings.mlr.press
The standard L-BFGS method relies on gradient approximations that are not dominated by
noise, so that search directions are descent directions, the line search is reliable, and quasi …

Stochastic quasi-Newton methods for nonconvex stochastic optimization

X Wang, S Ma, D Goldfarb, W Liu - SIAM Journal on Optimization, 2017 - SIAM
In this paper we study stochastic quasi-Newton methods for nonconvex stochastic
optimization, where we assume that noisy information about the gradients of the objective …