Applications and techniques for fast machine learning in science
In this community review report, we discuss applications and techniques for fast machine learning (ML) in science—the concept of integrating powerful ML methods into the real-time …
Survey of optimization algorithms in modern neural networks
The main goal of machine learning is to create self-learning algorithms for many areas of human activity. This allows artificial intelligence to replace a person in seeking to …
AdaBelief optimizer: Adapting stepsizes by the belief in observed gradients
Most popular optimizers for deep learning can be broadly categorized as adaptive methods (e.g., Adam) and accelerated schemes (e.g., stochastic gradient descent (SGD) with …
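The core change AdaBelief makes to Adam is to track the variance of the gradient around its running mean rather than the raw second moment. A minimal NumPy sketch of that update rule, assuming Adam-style defaults (the hyperparameter values here are illustrative, not the paper's tuned settings):

```python
import numpy as np

def adabelief_step(param, grad, m, s, t, lr=1e-3,
                   beta1=0.9, beta2=0.999, eps=1e-8):
    # First moment: same exponential moving average as Adam.
    m = beta1 * m + (1 - beta1) * grad
    # Second moment: deviation of the gradient from the "belief" m,
    # instead of Adam's raw grad**2.
    s = beta2 * s + (1 - beta2) * (grad - m) ** 2 + eps
    # Bias-corrected estimates (t is the 1-indexed step count).
    m_hat = m / (1 - beta1 ** t)
    s_hat = s / (1 - beta2 ** t)
    param = param - lr * m_hat / (np.sqrt(s_hat) + eps)
    return param, m, s
```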
A modified Adam algorithm for deep neural network optimization
Deep Neural Networks (DNNs) are widely regarded as the most effective learning tool for dealing with large datasets, and they have been successfully used in thousands of …
Sophia: A scalable stochastic second-order optimizer for language model pre-training
Given the massive cost of language model pre-training, a non-trivial improvement of the optimization algorithm would lead to a material reduction in the time and cost of training …
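Sophia's update, as described in the paper, preconditions the momentum by an infrequently refreshed estimate of the Hessian diagonal and clips the result per coordinate. A hedged sketch of that rule (the Hessian-diagonal estimator is elided, and the hyperparameter values are placeholders):

```python
import numpy as np

def sophia_step(param, grad, m, h, lr=1e-4,
                beta1=0.965, gamma=0.01, eps=1e-12):
    # Momentum, as in Adam's first moment.
    m = beta1 * m + (1 - beta1) * grad
    # Precondition by the (separately maintained) Hessian-diagonal
    # estimate h, then clip each coordinate to [-1, 1] so the step
    # stays bounded where curvature is tiny or misestimated.
    update = np.clip(m / np.maximum(gamma * h, eps), -1.0, 1.0)
    return param - lr * update, m
```

In the paper, h is refreshed only every handful of steps (e.g., via a Hutchinson-style estimator), which is what keeps the per-step overhead close to that of a first-order method.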
On the effectiveness of parameter-efficient fine-tuning
Fine-tuning pre-trained models has been ubiquitously proven to be effective in a wide range of NLP tasks. However, fine-tuning the whole model is parameter-inefficient as it always …
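One widely used instance of parameter-efficient fine-tuning (not necessarily the variant this paper studies) is a LoRA-style low-rank adapter: the pre-trained weight is frozen and only a small low-rank correction is trained. A minimal PyTorch sketch:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen nn.Linear with a trainable low-rank update
    B @ A, so only r * (d_in + d_out) parameters are fine-tuned
    instead of d_in * d_out."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pre-trained weight
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: no change at start
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)
```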
The effect of choosing optimizer algorithms to improve computer vision tasks: a comparative study
Optimization algorithms are used to improve model accuracy. The optimization process undergoes multiple cycles until convergence. A variety of optimization strategies have been …
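Such comparative studies amount to running the same model under different update rules. As a toy illustration of why the choice matters, plain SGD and Adam behave quite differently on an ill-conditioned quadratic (all values here are illustrative):

```python
import numpy as np

H = np.diag([1.0, 100.0])               # ill-conditioned curvature
grad = lambda w: H @ w                  # gradient of 0.5 * w.T @ H @ w

w_sgd = np.array([1.0, 1.0])
w_adam = np.array([1.0, 1.0])
m, v = np.zeros(2), np.zeros(2)
for t in range(1, 301):
    w_sgd = w_sgd - 1e-2 * grad(w_sgd)  # SGD: step size limited by the largest eigenvalue
    g = grad(w_adam)                    # Adam: per-coordinate rescaling
    m = 0.9 * m + 0.1 * g
    v = 0.999 * v + 0.001 * g ** 2
    w_adam = w_adam - 1e-2 * (m / (1 - 0.9 ** t)) / (np.sqrt(v / (1 - 0.999 ** t)) + 1e-8)

print("SGD: ", w_sgd)
print("Adam:", w_adam)
```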
The right to be forgotten in federated learning: An efficient realization with rapid retraining
In Machine Learning, the emergence of the right to be forgotten gave birth to a paradigm named machine unlearning, which enables data holders to proactively erase their data from …
PyHessian: Neural networks through the lens of the Hessian
We present PyHessian, a new scalable framework that enables fast computation of Hessian (i.e., second-order derivative) information for deep neural networks. PyHessian …
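The primitive underlying tools like PyHessian is the Hessian-vector product, computable by double backpropagation without ever forming the Hessian. A sketch in PyTorch (this is the generic autograd recipe, not PyHessian's own API):

```python
import torch

def hessian_vector_product(loss, params, vecs):
    # First backward pass: gradients built with a graph, so they
    # can be differentiated again.
    grads = torch.autograd.grad(loss, params, create_graph=True)
    # Dot the gradients with the probe vectors, then differentiate
    # that scalar: d/dtheta (g . v) = H v.
    dot = sum((g * v).sum() for g, v in zip(grads, vecs))
    return torch.autograd.grad(dot, params)
```

Repeated with random probe vectors, this yields Hutchinson-style estimates of quantities such as the Hessian trace, which is the kind of spectral information such frameworks expose.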
How important are activation functions in regression and classification? A survey, performance comparison, and future directions
Inspired by biological neurons, the activation functions play an essential part in the learning process of any artificial neural network (ANN) commonly used in many real-world problems …
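For concreteness, a few of the activation functions such surveys typically compare, evaluated side by side (the GELU here is the common tanh approximation):

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def gelu(x):  # tanh approximation of the Gaussian error linear unit
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x ** 3)))

def silu(x):  # a.k.a. swish with beta = 1
    return x / (1.0 + np.exp(-x))

x = np.linspace(-3.0, 3.0, 7)
for f in (relu, gelu, silu):
    print(f"{f.__name__:>5}:", np.round(f(x), 3))
```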