Gradient boosting machines, a tutorial
A Natekin, A Knoll - Frontiers in neurorobotics, 2013 - frontiersin.org
Gradient boosting machines are a family of powerful machine-learning techniques that have
shown considerable success in a wide range of practical applications. They are highly …
Recent advances in recurrent neural networks
Recurrent neural networks (RNNs) are capable of learning features and long-term
dependencies from sequential and time-series data. RNNs have a stack of non-linear …
Advances and open problems in federated learning
Federated learning (FL) is a machine learning setting where many clients (e.g., mobile
devices or whole organizations) collaboratively train a model under the orchestration of a …
On the importance of initialization and momentum in deep learning
Deep and recurrent neural networks (DNNs and RNNs respectively) are powerful models
that were considered to be almost impossible to train using stochastic gradient descent with …
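For context on the momentum mentioned above, the classical heavy-ball update can be sketched as follows. This is a generic illustration on a toy quadratic, not the paper's exact scheme; the names `grad`, `lr`, and `mu` are illustrative.

```python
# Generic sketch of SGD with classical (heavy-ball) momentum.
def sgd_momentum(grad, w, steps, lr=0.1, mu=0.9):
    """Minimize a function given its gradient `grad`, starting from `w`."""
    v = 0.0
    for _ in range(steps):
        v = mu * v - lr * grad(w)   # accumulate a decaying velocity
        w = w + v                   # move along the velocity
    return w

# Toy usage: minimize f(w) = w**2, whose gradient is 2*w.
w_final = sgd_momentum(lambda w: 2 * w, w=5.0, steps=200)
```

The velocity term `v` smooths successive gradients, which is what lets momentum accelerate along consistent descent directions.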
A deep convolutional neural network using directional wavelets for low‐dose X‐ray CT reconstruction
Purpose: Due to the potential risk of inducing cancer, radiation exposure by X‐ray CT
devices should be reduced for routine patient scanning. However, in low‐dose X‐ray CT …
Efficient mini-batch training for stochastic optimization
Stochastic gradient descent (SGD) is a popular technique for large-scale optimization
problems in machine learning. In order to parallelize SGD, minibatch training needs to be …
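The minibatch idea referenced above can be sketched on a toy least-squares problem: each step averages the gradient over a small random subset of the data rather than a single example. This is an illustrative sketch, not the paper's algorithm; `batch`, `lr`, and the data are made up.

```python
# Sketch of minibatch SGD for fitting y ≈ w * x by least squares.
import random

def minibatch_sgd(xs, ys, steps=500, batch=4, lr=0.01, seed=0):
    rng = random.Random(seed)
    w = 0.0
    for _ in range(steps):
        idx = rng.sample(range(len(xs)), batch)  # draw a random minibatch
        # average the per-example gradients of (w*x - y)**2 over the batch
        g = sum(2 * (w * xs[i] - ys[i]) * xs[i] for i in idx) / batch
        w -= lr * g
    return w

xs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
ys = [3.0 * x for x in xs]   # data lies exactly on the line y = 3x
w_hat = minibatch_sgd(xs, ys)
```

Averaging over a batch reduces gradient variance per step and is what makes the per-step work parallelizable.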
SARAH: A novel method for machine learning problems using stochastic recursive gradient
In this paper, we propose a StochAstic Recursive grAdient algoritHm (SARAH), as well as its
practical variant SARAH+, as a novel approach to the finite-sum minimization problems …
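The recursive gradient estimator that gives SARAH its name can be sketched on a toy finite sum. This follows the published update rule (full gradient at a snapshot, then the recursion v_t = ∇f_i(w_t) − ∇f_i(w_{t−1}) + v_{t−1}), but the instance and parameter names are illustrative, not the paper's code.

```python
# Sketch of SARAH for a finite sum f(w) = (1/n) * sum_i f_i(w).
import random

def sarah(grads, w0, eta=0.05, outer=20, inner=10, seed=0):
    """`grads` is a list of per-component gradient functions grad_i(w)."""
    rng = random.Random(seed)
    n, w = len(grads), w0
    for _ in range(outer):
        v = sum(g(w) for g in grads) / n        # full gradient at the snapshot
        w_prev, w = w, w - eta * v
        for _ in range(inner):
            i = rng.randrange(n)
            # recursive estimator: update v by the change in one component gradient
            v = grads[i](w) - grads[i](w_prev) + v
            w_prev, w = w, w - eta * v
    return w

# Toy components f_i(w) = (w - i)**2; the minimizer of the average is mean(i) = 2.
grads = [(lambda i: (lambda w: 2 * (w - i)))(i) for i in range(5)]
w_star = sarah(grads, w0=10.0)
```

Unlike SVRG, the estimator `v` is updated recursively from the previous iterate instead of being re-anchored to the snapshot at every inner step.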
Direction-of-arrival estimation based on deep neural networks with robustness to array imperfections
A lack of adaptation to various array imperfections is an open problem for most high-precision
direction-of-arrival (DOA) estimation methods. Machine learning-based methods …
The state-of-the-art review on applications of intrusive sensing, image processing techniques, and machine learning methods in pavement monitoring and …
In modern transportation, pavement is one of the most important civil infrastructures for the
movement of vehicles and pedestrians. Pavement service quality and service life are of …
Is local SGD better than minibatch SGD?
We study local SGD (also known as parallel SGD and federated SGD), a natural and
frequently used distributed optimization method. Its theoretical foundations are currently …
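The local SGD scheme studied above can be sketched as follows: each worker runs several SGD steps on its own objective without communication, then all models are averaged. The objectives, step counts, and names here are illustrative assumptions, not the paper's setup.

```python
# Sketch of local SGD: K workers take H local steps, then average their models.
def local_sgd(worker_grads, w0=0.0, rounds=50, local_steps=5, lr=0.1):
    K = len(worker_grads)
    ws = [w0] * K                            # one model copy per worker
    for _ in range(rounds):
        for k in range(K):
            for _ in range(local_steps):     # H local steps, no communication
                ws[k] -= lr * worker_grads[k](ws[k])
        avg = sum(ws) / K                    # synchronize by model averaging
        ws = [avg] * K
    return ws[0]

# Worker k holds f_k(w) = (w - k)**2; the consensus minimizer is mean(k) = 1.5.
worker_grads = [(lambda k: (lambda w: 2 * (w - k)))(k) for k in range(4)]
w_avg = local_sgd(worker_grads)
```

Minibatch SGD would instead average gradients at every step; local SGD trades communication frequency for extra local computation, which is the comparison the title asks about.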