Challenges and opportunities of deep learning models for machinery fault detection and diagnosis: A review

SR Saufi, ZAB Ahmad, MS Leong, MH Lim - IEEE Access, 2019 - ieeexplore.ieee.org
In the age of industry 4.0, deep learning has attracted increasing interest for various
research applications. In recent years, deep learning models have been extensively …

Deep learning in electron microscopy

JM Ede - Machine Learning: Science and Technology, 2021 - iopscience.iop.org
Deep learning is transforming most areas of science and technology, including electron
microscopy. This review paper offers a practical perspective aimed at developers with …

On the variance of the adaptive learning rate and beyond

L Liu, H Jiang, P He, W Chen, X Liu, J Gao… - arXiv preprint arXiv …, 2019 - arxiv.org
The learning rate warmup heuristic achieves remarkable success in stabilizing training,
accelerating convergence and improving generalization for adaptive stochastic optimization …
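The warmup heuristic this entry analyzes can be sketched in a few lines; the schedule below is a minimal linear-warmup illustration, with the function name, `base_lr`, and `warmup_steps` chosen for the example rather than taken from the paper:

```python
def warmup_lr(step, base_lr=1e-3, warmup_steps=1000):
    """Linearly ramp the learning rate from ~0 to base_lr, then hold it.

    During the first `warmup_steps` updates the adaptive optimizer sees a
    small learning rate, which damps the large variance of its adaptive
    term while second-moment estimates are still unreliable.
    """
    if step < warmup_steps:
        return base_lr * (step + 1) / warmup_steps
    return base_lr
```

In practice this schedule is composed with a decay schedule after warmup; the paper's contribution (RAdam) is to rectify the adaptive term's variance directly so that this manual warmup stage becomes unnecessary.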

Quantum optimization of maximum independent set using Rydberg atom arrays

S Ebadi, A Keesling, M Cain, TT Wang, H Levine… - Science, 2022 - science.org
Realizing quantum speedup for practically relevant, computationally hard problems is a
central challenge in quantum information science. Using Rydberg atom arrays with up to …

Adaptive federated optimization

S Reddi, Z Charles, M Zaheer, Z Garrett, K Rush… - arXiv preprint arXiv …, 2020 - arxiv.org
Federated learning is a distributed machine learning paradigm in which a large number of
clients coordinate with a central server to learn a model without sharing their own training …
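The server-side coordination this entry describes can be sketched as follows. This is a hedged illustration of the FedOpt pattern from this line of work: the server averages client model deltas, treats the negative average as a pseudo-gradient, and applies an adaptive (Adam-style) server step. All function names and hyperparameter values here are illustrative, not the paper's reference implementation:

```python
def server_round(global_w, client_deltas, state,
                 lr=0.1, b1=0.9, b2=0.99, eps=1e-3):
    """One federated round: aggregate client deltas, take an adaptive step.

    global_w      -- current global model (list of floats, for simplicity)
    client_deltas -- per-client (local_model - global_w) updates
    state         -- (m, v, t): server optimizer moments and step count
    """
    m, v, t = state
    t += 1
    # Pseudo-gradient: negative of the average client update.
    g = [-sum(d[i] for d in client_deltas) / len(client_deltas)
         for i in range(len(global_w))]
    # Adam-style server moments on the pseudo-gradient.
    m = [b1 * mi + (1 - b1) * gi for mi, gi in zip(m, g)]
    v = [b2 * vi + (1 - b2) * gi * gi for vi, gi in zip(v, g)]
    new_w = [w - lr * mi / (vi ** 0.5 + eps)
             for w, mi, vi in zip(global_w, m, v)]
    return new_w, (m, v, t)
```

Note that no raw training data crosses the network: clients only send model deltas, which is the privacy property the abstract refers to.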

AdaBelief optimizer: Adapting stepsizes by the belief in observed gradients

J Zhuang, T Tang, Y Ding… - Advances in neural …, 2020 - proceedings.neurips.cc
Most popular optimizers for deep learning can be broadly categorized as adaptive methods
(e.g., Adam) and accelerated schemes (e.g., stochastic gradient descent (SGD) with …
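The "belief" idea in this entry's title is concrete enough to sketch: where Adam scales steps by an EMA of the squared gradient, AdaBelief scales them by an EMA of the squared *deviation* of the gradient from its first moment. A small deviation means the gradient matched the optimizer's "belief", so a larger step is taken. The single-scalar step below follows that published update rule; the function name and default hyperparameters are chosen for the example:

```python
import math

def adabelief_step(theta, g, m, s, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One AdaBelief update on a scalar parameter theta at step t (1-based)."""
    m = b1 * m + (1 - b1) * g                       # first moment, as in Adam
    s = b2 * s + (1 - b2) * (g - m) ** 2 + eps      # EMA of squared deviation
    m_hat = m / (1 - b1 ** t)                       # bias correction
    s_hat = s / (1 - b2 ** t)
    theta -= lr * m_hat / (math.sqrt(s_hat) + eps)
    return theta, m, s
```

Swapping `(g - m) ** 2` for `g ** 2` in the `s` update recovers Adam, which makes the two methods easy to compare in ablations.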

A metric learning reality check

K Musgrave, S Belongie, SN Lim - … , Glasgow, UK, August 23–28, 2020 …, 2020 - Springer
Deep metric learning papers from the past four years have consistently claimed great
advances in accuracy, often more than doubling the performance of decade-old methods. In …

Adan: Adaptive Nesterov momentum algorithm for faster optimizing deep models

X **e, P Zhou, H Li, Z Lin, S Yan - IEEE Transactions on Pattern …, 2024 - ieeexplore.ieee.org
In deep learning, different kinds of deep networks typically need different optimizers, which
have to be chosen after multiple trials, making the training process inefficient. To relieve this …

Towards theoretically understanding why SGD generalizes better than Adam in deep learning

P Zhou, J Feng, C Ma, C Xiong… - Advances in Neural …, 2020 - proceedings.neurips.cc
It is not clear yet why ADAM-alike adaptive gradient algorithms suffer from worse
generalization performance than SGD despite their faster training speed. This work aims to …

A modified Adam algorithm for deep neural network optimization

M Reyad, AM Sarhan, M Arafa - Neural Computing and Applications, 2023 - Springer
Deep Neural Networks (DNNs) are widely regarded as the most effective learning
tool for dealing with large datasets, and they have been successfully used in thousands of …