The evolution of distributed systems for graph neural networks and their origin in graph processing and deep learning: A survey
Graph neural networks (GNNs) are an emerging research field. This specialized deep
neural network architecture is capable of processing graph-structured data and bridges the …
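To make the architecture concrete, here is a minimal sketch of the mean-aggregation message-passing layer that GNNs of this kind share. The dense NumPy adjacency matrix and the name gnn_layer are illustrative assumptions, not details from the survey; the distributed systems it covers store the graph in sparse, partitioned form.

    # Minimal message-passing GNN layer sketch (illustrative, not from the survey).
    # Assumes a dense adjacency matrix; real systems use sparse, distributed storage.
    import numpy as np

    def gnn_layer(adj, features, weight):
        """One mean-aggregation GNN layer: average neighbor features, transform, ReLU."""
        # Add self-loops so each node keeps its own features.
        a_hat = adj + np.eye(adj.shape[0])
        # Row-normalize: each node averages over itself and its neighbors.
        a_norm = a_hat / a_hat.sum(axis=1, keepdims=True)
        return np.maximum(a_norm @ features @ weight, 0.0)

    rng = np.random.default_rng(0)
    adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # 3-node path graph
    x = rng.normal(size=(3, 4))          # node features
    w = rng.normal(size=(4, 2))          # learned weights
    print(gnn_layer(adj, x, w).shape)    # (3, 2)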
Variance-reduced methods for machine learning
Stochastic optimization lies at the heart of machine learning, and its cornerstone is
stochastic gradient descent (SGD), a method introduced over 60 years ago. The last eight …
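As one concrete member of the variance-reduced family, here is a sketch of the SVRG estimator, which corrects each SGD step with a periodically recomputed full gradient. The least-squares objective, step size, and snapshot frequency below are illustrative assumptions, not values from the paper.

    # SVRG sketch on least squares (illustrative; hyperparameters are arbitrary).
    import numpy as np

    rng = np.random.default_rng(0)
    n, d = 200, 5
    A = rng.normal(size=(n, d))
    b = A @ rng.normal(size=d) + 0.01 * rng.normal(size=n)

    def grad_i(w, i):                      # gradient of the i-th squared residual
        return (A[i] @ w - b[i]) * A[i]

    def full_grad(w):
        return A.T @ (A @ w - b) / n

    w, lr = np.zeros(d), 0.01
    for epoch in range(30):
        w_snap = w.copy()                  # snapshot point
        g_snap = full_grad(w_snap)         # full gradient at the snapshot
        for _ in range(n):
            i = rng.integers(n)
            # Variance-reduced estimator: unbiased, variance shrinks near the snapshot.
            v = grad_i(w, i) - grad_i(w_snap, i) + g_snap
            w -= lr * v
    print("final loss:", 0.5 * np.mean((A @ w - b) ** 2))

The estimator v stays unbiased while its variance vanishes as the iterate approaches the optimum, which is what yields the faster convergence rates this line of work studies.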
A survey of optimization methods from a machine learning perspective
Machine learning has developed rapidly, achieving many theoretical breakthroughs, and is widely applied in various fields. Optimization, as an important part of machine learning, has …
DeltaGrad: Rapid retraining of machine learning models
Machine learning models are not static and may need to be retrained on slightly
changed datasets, for instance, with the addition or deletion of a set of data points. This has …
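For context, a sketch of the problem setting only: the naive baseline retrains from scratch after deleting points, which is exactly the cost DeltaGrad is designed to avoid by caching and correcting information from the original training run. The least-squares model and hyperparameters here are illustrative assumptions.

    # Illustrative sketch of the retraining-after-deletion setting (NOT DeltaGrad
    # itself, which corrects cached trajectories instead of retraining from scratch).
    import numpy as np

    rng = np.random.default_rng(0)
    n, d = 100, 3
    A = rng.normal(size=(n, d))
    b = A @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=n)

    def fit(A, b, steps=500, lr=0.05):
        w = np.zeros(A.shape[1])
        for _ in range(steps):
            w -= lr * A.T @ (A @ w - b) / len(b)
        return w

    w_full = fit(A, b)
    keep = np.ones(n, dtype=bool)
    keep[:5] = False                      # delete the first five training points
    w_deleted = fit(A[keep], b[keep])     # naive baseline: full retrain from scratch
    print("parameter shift after deletion:", np.linalg.norm(w_full - w_deleted))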
Comparison of naive Bayes, random forest, decision tree, support vector machines, and logistic regression classifiers for text reviews classification
T. Pranckevičius and V. Marcinkevičius, Baltic Journal of Modern Computing, 2017
Today, highly scalable computing environments make it possible to carry out various
data-intensive natural language processing and machine learning tasks. One of …
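A generic sketch of such a five-classifier comparison using scikit-learn and a toy corpus; the paper's own toolkit, corpus, and parameters may differ, so treat every name and value below as an assumption.

    # Generic five-classifier text-classification comparison (toy data, scikit-learn).
    from sklearn.pipeline import make_pipeline
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.svm import LinearSVC
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    texts = ["great product", "terrible quality", "loved it", "waste of money",
             "works perfectly", "broke after a day", "highly recommend", "do not buy"] * 5
    labels = [1, 0, 1, 0, 1, 0, 1, 0] * 5

    models = {
        "naive_bayes": MultinomialNB(),
        "random_forest": RandomForestClassifier(n_estimators=50, random_state=0),
        "decision_tree": DecisionTreeClassifier(random_state=0),
        "linear_svm": LinearSVC(),
        "logistic_regression": LogisticRegression(max_iter=1000),
    }
    for name, clf in models.items():
        pipe = make_pipeline(TfidfVectorizer(), clf)   # TF-IDF features, then classifier
        scores = cross_val_score(pipe, texts, labels, cv=5)
        print(f"{name}: mean accuracy {scores.mean():.2f}")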
Incremental learning algorithms and applications
Incremental learning refers to learning from streaming data, which arrive over time, with
limited memory resources and, ideally, without sacrificing model accuracy. This setting fits …
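A minimal sketch of this setting using scikit-learn's partial_fit API: each mini-batch is consumed once and then discarded, so memory stays bounded. The toy stream and model choice are illustrative assumptions.

    # Incremental (streaming) learning sketch: update a model batch by batch.
    import numpy as np
    from sklearn.linear_model import SGDClassifier

    rng = np.random.default_rng(0)
    clf = SGDClassifier(loss="log_loss", random_state=0)
    classes = np.array([0, 1])                 # must be declared on the first call

    for t in range(100):                       # 100 batches arriving over time
        X = rng.normal(size=(32, 10))          # new mini-batch, then discarded
        y = (X[:, 0] + 0.1 * rng.normal(size=32) > 0).astype(int)
        clf.partial_fit(X, y, classes=classes) # update without storing past batches

    X_test = rng.normal(size=(200, 10))
    y_test = (X_test[:, 0] > 0).astype(int)
    print("held-out accuracy:", clf.score(X_test, y_test))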
A stochastic quasi-Newton method for large-scale optimization
The question of how to incorporate curvature information into stochastic approximation
methods is challenging. The direct application of classical quasi-Newton updating …
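One way to see the difficulty: differencing two noisy stochastic gradients can produce a meaningless curvature pair. A common remedy, sketched below on an assumed least-squares model, forms y via a subsampled Hessian-vector product so that y approximates the true curvature along the displacement s.

    # Curvature-pair construction via a subsampled Hessian-vector product
    # (illustrative model; subsample size and inputs are assumptions).
    import numpy as np

    rng = np.random.default_rng(0)
    n, d = 500, 8
    A = rng.normal(size=(n, d))
    b = A @ rng.normal(size=d)

    def hessian_vec(s, idx):
        """Hessian-vector product of the least-squares loss on subsample idx."""
        return A[idx].T @ (A[idx] @ s) / len(idx)

    w_prev, w_curr = rng.normal(size=d), rng.normal(size=d)
    s = w_curr - w_prev                       # displacement between iterates
    idx = rng.choice(n, size=64, replace=False)
    y = hessian_vec(s, idx)                   # curvature estimate: y ≈ H s
    print("curvature condition s·y > 0:", float(s @ y) > 0.0)  # holds for convex loss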
A linearly-convergent stochastic L-BFGS algorithm
We propose a new stochastic L-BFGS algorithm and prove a linear convergence rate for
strongly convex and smooth functions. Our algorithm draws heavily from a recent stochastic …
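For reference, a sketch of the standard L-BFGS two-loop recursion that such algorithms use to turn stored (s, y) pairs into a search direction. The random inputs are illustrative; in stochastic variants the gradient fed in would be a variance-reduced estimate rather than an exact one.

    # Standard L-BFGS two-loop recursion: apply the inverse-Hessian approximation
    # built from (s, y) pairs to a gradient (textbook form; illustrative inputs).
    import numpy as np

    def two_loop(grad, pairs):
        q = grad.copy()
        alphas = []
        for s, y in reversed(pairs):           # newest pair last in the list
            rho = 1.0 / (y @ s)
            a = rho * (s @ q)
            alphas.append((a, rho, s, y))
            q -= a * y
        if pairs:
            s, y = pairs[-1]
            q *= (s @ y) / (y @ y)             # initial Hessian scaling
        for a, rho, s, y in reversed(alphas):  # second loop: oldest to newest
            beta = rho * (y @ q)
            q += (a - beta) * s
        return -q                              # descent direction

    rng = np.random.default_rng(0)
    pairs = [(rng.normal(size=5), rng.normal(size=5)) for _ in range(3)]
    pairs = [(s, y if y @ s > 0 else -y) for s, y in pairs]  # enforce s·y > 0 on test data
    print(two_loop(rng.normal(size=5), pairs))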
A progressive batching L-BFGS method for machine learning
The standard L-BFGS method relies on gradient approximations that are not dominated by
noise, so that search directions are descent directions, the line search is reliable, and quasi …
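A rough sketch of the progressive-batching idea: enlarge the sample whenever a noise test suggests the gradient estimate is variance-dominated. The simple variance/norm test below stands in for the paper's more refined test, and the model and constants are assumptions.

    # Progressive batching sketch: grow the batch when the gradient looks noisy.
    import numpy as np

    rng = np.random.default_rng(0)
    n, d = 1000, 6
    A = rng.normal(size=(n, d))
    b = A @ rng.normal(size=d) + 0.5 * rng.normal(size=n)

    def per_sample_grads(w, idx):
        return A[idx] * (A[idx] @ w - b[idx])[:, None]   # one gradient per row

    w, batch, theta = np.zeros(d), 16, 0.9
    for step in range(200):
        idx = rng.choice(n, size=min(batch, n), replace=False)
        g_i = per_sample_grads(w, idx)
        g = g_i.mean(axis=0)
        # Noise test: if sample variance dominates the signal, enlarge the batch.
        var = g_i.var(axis=0).sum() / len(idx)
        if var > theta**2 * (g @ g):
            batch = min(2 * batch, n)
        w -= 0.01 * g
    print("final batch size:", batch)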
Stochastic quasi-Newton methods for nonconvex stochastic optimization
In this paper we study stochastic quasi-Newton methods for nonconvex stochastic
optimization, where we assume that noisy information about the gradients of the objective …
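In the nonconvex case the curvature condition s·y > 0 can fail, so quasi-Newton updates must be safeguarded. Below is a sketch of Powell-style damping, one standard safeguard; the specific constant and inputs are illustrative assumptions, not the paper's exact procedure.

    # Powell-style damping: blend y with Bs so the damped pair keeps s·y > 0,
    # allowing quasi-Newton updates in nonconvex regions (illustrative constants).
    import numpy as np

    def damp_pair(s, y, Bs, sBs, c=0.2):
        """Return a damped y satisfying s·y_damped >= c * s·Bs."""
        sy = s @ y
        if sy >= c * sBs:
            return y                       # pair already safe to use
        theta = (1.0 - c) * sBs / (sBs - sy)
        return theta * y + (1.0 - theta) * Bs

    rng = np.random.default_rng(0)
    d = 5
    s = rng.normal(size=d)
    y = -s                                 # pathological pair: s·y < 0 (nonconvex region)
    Bs = s.copy()                          # with B = I, Bs = s and s·Bs = s·s
    y_damped = damp_pair(s, y, Bs, s @ s)
    print("raw s·y:", float(s @ y), " damped s·y:", float(s @ y_damped))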