Gradient boosting machines, a tutorial

A Natekin, A Knoll - Frontiers in neurorobotics, 2013 - frontiersin.org
Gradient boosting machines are a family of powerful machine-learning techniques that have
shown considerable success in a wide range of practical applications. They are highly …
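
As a quick orientation to the technique the tutorial covers, here is a minimal sketch of least-squares gradient boosting with shallow regression trees; the base learner, toy data, and hyperparameters below are illustrative choices, not code from the tutorial itself.

```python
# Minimal least-squares gradient boosting sketch (illustrative, not the tutorial's code).
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)

n_rounds, learning_rate = 50, 0.1
prediction = np.full_like(y, y.mean())    # F_0: constant initial model
trees = []

for _ in range(n_rounds):
    residuals = y - prediction            # negative gradient of the squared loss
    tree = DecisionTreeRegressor(max_depth=2).fit(X, residuals)
    prediction += learning_rate * tree.predict(X)
    trees.append(tree)

print("training MSE:", np.mean((y - prediction) ** 2))
```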

Recent advances in recurrent neural networks

H Salehinejad, S Sankar, J Barfett, E Colak… - arXiv preprint arXiv …, 2017 - arxiv.org
Recurrent neural networks (RNNs) are capable of learning features and long-term
dependencies from sequential and time-series data. RNNs have a stack of non-linear …
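
For reference, the recurrence behind such models can be sketched as a vanilla (Elman-style) RNN forward pass in NumPy; the dimensions and weight names below are illustrative assumptions, not taken from the survey.

```python
# Vanilla RNN forward pass in NumPy (illustrative sketch, not the survey's code).
import numpy as np

rng = np.random.default_rng(0)
input_dim, hidden_dim, seq_len = 8, 16, 10

W_xh = rng.normal(scale=0.1, size=(hidden_dim, input_dim))   # input-to-hidden weights
W_hh = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))  # hidden-to-hidden (recurrent) weights
b_h = np.zeros(hidden_dim)

x_seq = rng.normal(size=(seq_len, input_dim))
h = np.zeros(hidden_dim)                                     # initial hidden state

for x_t in x_seq:
    # Non-linear recurrence: the hidden state carries information across time steps.
    h = np.tanh(W_xh @ x_t + W_hh @ h + b_h)

print("final hidden state shape:", h.shape)
```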

Advances and open problems in federated learning

P Kairouz, HB McMahan, B Avent… - … and trends® in …, 2021 - nowpublishers.com
Federated learning (FL) is a machine learning setting where many clients (e.g., mobile
devices or whole organizations) collaboratively train a model under the orchestration of a …
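
To make the client–server orchestration concrete, a toy federated-averaging round is sketched below; the least-squares clients, local-update routine, and size-weighted averaging are illustrative assumptions rather than the survey's setup.

```python
# Toy federated-averaging round (illustrative sketch; clients and data are assumptions).
import numpy as np

rng = np.random.default_rng(0)
dim, n_clients = 5, 4
global_model = np.zeros(dim)

# Each client holds its own data for a least-squares problem A w ≈ b.
clients = [(rng.normal(size=(20, dim)), rng.normal(size=20)) for _ in range(n_clients)]

def local_update(w, A, b, lr=0.01, steps=10):
    """Run a few local gradient steps on one client's data."""
    w = w.copy()
    for _ in range(steps):
        grad = A.T @ (A @ w - b) / len(b)
        w -= lr * grad
    return w

# One round: the server broadcasts the model, clients train locally, and the server
# averages the returned models weighted by local dataset size.
local_models = [local_update(global_model, A, b) for A, b in clients]
sizes = np.array([len(b) for _, b in clients], dtype=float)
global_model = np.average(local_models, axis=0, weights=sizes)
print("updated global model:", np.round(global_model, 3))
```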

On the importance of initialization and momentum in deep learning

I Sutskever, J Martens, G Dahl… - … conference on machine …, 2013 - proceedings.mlr.press
Deep and recurrent neural networks (DNNs and RNNs respectively) are powerful models
that were considered to be almost impossible to train using stochastic gradient descent with …
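
For reference, the classical momentum update studied in this line of work looks as follows on a toy quadratic; the objective and hyperparameters are illustrative assumptions, not the paper's experiments.

```python
# SGD with classical momentum on a toy ill-conditioned quadratic (illustrative sketch).
import numpy as np

A = np.diag([1.0, 10.0])          # quadratic objective 0.5 * w^T A w
w = np.array([5.0, 5.0])
v = np.zeros_like(w)              # velocity
lr, momentum = 0.02, 0.9

for step in range(200):
    grad = A @ w                  # gradient of the quadratic
    v = momentum * v - lr * grad  # classical momentum: accumulate a decaying velocity
    w = w + v

print("final iterate:", np.round(w, 4))
```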

A deep convolutional neural network using directional wavelets for low‐dose X‐ray CT reconstruction

E Kang, J Min, JC Ye - Medical physics, 2017 - Wiley Online Library
Purpose: Due to the potential risk of inducing cancer, radiation exposure by X‐ray CT
devices should be reduced for routine patient scanning. However, in low‐dose X‐ray CT …

Efficient mini-batch training for stochastic optimization

M Li, T Zhang, Y Chen, AJ Smola - Proceedings of the 20th ACM …, 2014 - dl.acm.org
Stochastic gradient descent (SGD) is a popular technique for large-scale optimization
problems in machine learning. In order to parallelize SGD, minibatch training needs to be …
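
A plain mini-batch SGD loop, as a baseline for the parallelization question the paper addresses; the least-squares problem, batch size, and step size below are illustrative assumptions.

```python
# Plain mini-batch SGD for least squares (illustrative sketch).
import numpy as np

rng = np.random.default_rng(0)
n, dim, batch_size, lr = 1000, 10, 32, 0.1
A = rng.normal(size=(n, dim))
w_true = rng.normal(size=dim)
b = A @ w_true + 0.01 * rng.normal(size=n)

w = np.zeros(dim)
for epoch in range(20):
    perm = rng.permutation(n)                 # reshuffle the data each epoch
    for start in range(0, n, batch_size):
        idx = perm[start:start + batch_size]
        # Gradient of the mini-batch objective 0.5 * ||A_idx w - b_idx||^2 / |idx|
        grad = A[idx].T @ (A[idx] @ w - b[idx]) / len(idx)
        w -= lr * grad

print("distance to w_true:", np.linalg.norm(w - w_true))
```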

SARAH: A novel method for machine learning problems using stochastic recursive gradient

LM Nguyen, J Liu, K Scheinberg… - … conference on machine …, 2017 - proceedings.mlr.press
In this paper, we propose a StochAstic Recursive grAdient algoritHm (SARAH), as well as its
practical variant SARAH+, as a novel approach to the finite-sum minimization problems …
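
The recursive gradient estimator named in the abstract can be sketched on a finite-sum least-squares objective as follows; the step size, inner-loop length, and restart rule below are illustrative assumptions, not the paper's recommended settings.

```python
# SARAH-style recursive gradient estimator on a least-squares finite sum (illustrative).
import numpy as np

rng = np.random.default_rng(0)
n, dim = 500, 10
A = rng.normal(size=(n, dim))
b = A @ rng.normal(size=dim) + 0.01 * rng.normal(size=n)

def grad_i(w, i):
    """Gradient of one component f_i(w) = 0.5 * (a_i^T w - b_i)^2."""
    return A[i] * (A[i] @ w - b[i])

def full_grad(w):
    return A.T @ (A @ w - b) / n

eta, inner_steps = 0.02, 200
w_prev = np.zeros(dim)

for _ in range(10):                        # outer loops
    v = full_grad(w_prev)                  # v_0: one full gradient per outer loop
    w = w_prev - eta * v                   # first inner step
    for _ in range(inner_steps - 1):
        i = rng.integers(n)
        # Recursive estimator: v_t = grad_i(w_t) - grad_i(w_{t-1}) + v_{t-1}
        v = grad_i(w, i) - grad_i(w_prev, i) + v
        w_prev, w = w, w - eta * v
    w_prev = w                             # restart the next outer loop from the last iterate

print("gradient norm at the final iterate:", np.linalg.norm(full_grad(w)))
```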

Direction-of-arrival estimation based on deep neural networks with robustness to array imperfections

ZM Liu, C Zhang, SY Philip - IEEE Transactions on Antennas …, 2018 - ieeexplore.ieee.org
Lack of adaptation to various array imperfections is an open problem for most high-
precision direction-of-arrival (DOA) estimation methods. Machine learning-based methods …

The state-of-the-art review on applications of intrusive sensing, image processing techniques, and machine learning methods in pavement monitoring and …

Y Hou, Q Li, C Zhang, G Lu, Z Ye, Y Chen, L Wang… - Engineering, 2021 - Elsevier
In modern transportation, pavement is one of the most important civil infrastructures for the
movement of vehicles and pedestrians. Pavement service quality and service life are of …

Is local SGD better than minibatch SGD?

B Woodworth, KK Patel, S Stich, Z Dai… - International …, 2020 - proceedings.mlr.press
We study local SGD (also known as parallel SGD and federated SGD), a natural and
frequently used distributed optimization method. Its theoretical foundations are currently …
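
To illustrate the comparison posed in the title, a toy round-for-round run of local SGD next to minibatch SGD with the same per-round gradient budget; the problem, worker layout, and step counts are illustrative assumptions.

```python
# Local SGD vs. minibatch SGD on a shared least-squares problem (illustrative sketch).
import numpy as np

rng = np.random.default_rng(0)
n, dim, n_workers, local_steps, local_batch, lr = 400, 5, 4, 10, 5, 0.05
A = rng.normal(size=(n, dim))
b = A @ rng.normal(size=dim)
shards = np.array_split(rng.permutation(n), n_workers)    # one data shard per worker

def grad(w, idx):
    return A[idx].T @ (A[idx] @ w - b[idx]) / len(idx)

# Local SGD: each worker takes several local steps, then the models are averaged.
w_local = np.zeros(dim)
for _ in range(20):                                       # communication rounds
    updates = []
    for shard in shards:
        w = w_local.copy()
        for _ in range(local_steps):
            batch = rng.choice(shard, size=local_batch, replace=False)
            w -= lr * grad(w, batch)
        updates.append(w)
    w_local = np.mean(updates, axis=0)                    # communicate: average the models

# Minibatch SGD: one step per round on a pooled batch of the same total size.
mb_size = n_workers * local_steps * local_batch
w_mb = np.zeros(dim)
for _ in range(20):
    batch = rng.choice(n, size=mb_size, replace=False)
    w_mb -= lr * grad(w_mb, batch)

print("local SGD loss:    ", 0.5 * np.mean((A @ w_local - b) ** 2))
print("minibatch SGD loss:", 0.5 * np.mean((A @ w_mb - b) ** 2))
```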