Challenges and opportunities of deep learning models for machinery fault detection and diagnosis: A review

SR Saufi, ZAB Ahmad, MS Leong, MH Lim - IEEE Access, 2019 - ieeexplore.ieee.org
In the age of industry 4.0, deep learning has attracted increasing interest for various
research applications. In recent years, deep learning models have been extensively …

Deep learning in electron microscopy

JM Ede - Machine Learning: Science and Technology, 2021 - iopscience.iop.org
Deep learning is transforming most areas of science and technology, including electron
microscopy. This review paper offers a practical perspective aimed at developers with …

FedALA: Adaptive local aggregation for personalized federated learning

J Zhang, Y Hua, H Wang, T Song, Z Xue… - Proceedings of the …, 2023 - ojs.aaai.org
A key challenge in federated learning (FL) is the statistical heterogeneity that impairs the
generalization of the global model on each client. To address this, we propose a method …
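
A minimal PyTorch sketch of the element-wise adaptive local aggregation step this entry describes: each client interpolates its local model toward the downloaded global model with learnable per-element weights. The weights are assumed here to be already trained on the client's local data (the paper learns them there); the function name and the omitted training loop are illustrative only.

```python
import torch

def adaptive_local_aggregation(local_params, global_params, weights):
    """FedALA-style merge: theta_i <- theta_i + W * (theta_g - theta_i),
    with W in [0, 1] learned element-wise on the client's own data.
    `weights` is assumed pre-trained; learning W is omitted in this sketch."""
    merged = {}
    for name in local_params:
        w = torch.clamp(weights[name], 0.0, 1.0)   # keep interpolation weights in [0, 1]
        merged[name] = local_params[name] + w * (global_params[name] - local_params[name])
    return merged
```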

A modified Adam algorithm for deep neural network optimization

M Reyad, AM Sarhan, M Arafa - Neural Computing and Applications, 2023 - Springer
Deep Neural Networks (DNNs) are widely regarded as the most effective learning
tool for dealing with large datasets, and they have been successfully used in thousands of …
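
For context, the baseline Adam rule that such modified variants start from, as a minimal NumPy sketch; the paper's specific modification is not reproduced here, and the hyperparameters are the standard defaults from Kingma and Ba.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One baseline Adam update; modified variants typically adjust pieces
    of this rule, e.g. how the step size adapts. t starts at 1."""
    m = beta1 * m + (1 - beta1) * grad           # first-moment (mean) estimate
    v = beta2 * v + (1 - beta2) * grad ** 2      # second-moment (uncentered variance)
    m_hat = m / (1 - beta1 ** t)                 # bias correction
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v
```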

Quantum optimization of maximum independent set using Rydberg atom arrays

S Ebadi, A Keesling, M Cain, TT Wang, H Levine… - Science, 2022 - science.org
Realizing quantum speedup for practically relevant, computationally hard problems is a
central challenge in quantum information science. Using Rydberg atom arrays with up to …

Why Transformers need Adam: A Hessian perspective

Y Zhang, C Chen, T Ding, Z Li… - Advances in Neural …, 2025 - proceedings.neurips.cc
SGD performs worse than Adam by a significant margin on Transformers, but the reason
remains unclear. In this work, we provide an explanation through the lens of the Hessian: (i) …
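
Block-wise Hessian spectra behind analyses like this are typically probed with Hessian-vector products rather than by forming the Hessian explicitly; a minimal PyTorch sketch of that building block (not the authors' exact tooling):

```python
import torch

def hessian_vector_product(loss, params, vec):
    """Compute H v via double backprop: differentiate the dot product of the
    gradient with v. `params` is a list of tensors with requires_grad=True;
    `vec` is a list of tensors with matching shapes."""
    grads = torch.autograd.grad(loss, params, create_graph=True)
    dot = sum((g * v).sum() for g, v in zip(grads, vec))
    return torch.autograd.grad(dot, params)      # tuple of H v blocks
```

Fed random vectors, this is the basis of Hutchinson trace estimates and Lanczos/power iteration for extremal eigenvalues.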

Adan: Adaptive nesterov momentum algorithm for faster optimizing deep models

X **e, P Zhou, H Li, Z Lin, S Yan - IEEE Transactions on Pattern …, 2024 - ieeexplore.ieee.org
In deep learning, different kinds of deep networks typically need different optimizers, which
have to be chosen after multiple trials, making the training process inefficient. To relieve this …
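
A simplified sketch of the Adan-style update as stated in the paper, omitting bias correction and the first-step edge case; the beta values and the epsilon placement here are illustrative, not the paper's tuned defaults.

```python
import numpy as np

def adan_step(theta, g, g_prev, m, v, n, lr=1e-3,
              beta1=0.02, beta2=0.08, beta3=0.01, eps=1e-8, wd=0.0):
    """One Adan update: Nesterov-style momentum estimated via the gradient
    difference g_k - g_{k-1}, with decoupled weight decay wd."""
    diff = g - g_prev
    m = (1 - beta1) * m + beta1 * g                               # gradient EMA
    v = (1 - beta2) * v + beta2 * diff                            # gradient-difference EMA
    n = (1 - beta3) * n + beta3 * (g + (1 - beta2) * diff) ** 2   # second moment
    eta = lr / (np.sqrt(n) + eps)
    theta = (theta - eta * (m + (1 - beta2) * v)) / (1 + lr * wd)
    return theta, m, v, n
```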

AdaBelief optimizer: Adapting stepsizes by the belief in observed gradients

J Zhuang, T Tang, Y Ding… - Advances in neural …, 2020 - proceedings.neurips.cc
Most popular optimizers for deep learning can be broadly categorized as adaptive methods
(e.g., Adam) and accelerated schemes (e.g., stochastic gradient descent (SGD) with …
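
The update itself is compact enough to sketch; a minimal NumPy version, assuming standard Adam-style hyperparameters. The only change from Adam is that the second moment tracks the squared deviation of the gradient from its own EMA, the "belief", rather than the raw squared gradient.

```python
import numpy as np

def adabelief_step(theta, grad, m, s, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One AdaBelief update: s tracks (g_t - m_t)^2, so steps grow when the
    gradient matches its EMA prediction and shrink when it deviates."""
    m = beta1 * m + (1 - beta1) * grad
    s = beta2 * s + (1 - beta2) * (grad - m) ** 2 + eps   # belief in the gradient
    m_hat = m / (1 - beta1 ** t)                          # bias correction
    s_hat = s / (1 - beta2 ** t)
    return theta - lr * m_hat / (np.sqrt(s_hat) + eps), m, s
```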

Adaptive federated optimization

S Reddi, Z Charles, M Zaheer, Z Garrett, K Rush… - arXiv preprint arXiv …, 2020 - arxiv.org
Federated learning is a distributed machine learning paradigm in which a large number of
clients coordinate with a central server to learn a model without sharing their own training …
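
A minimal sketch of one server round under this framework (the FedAdam instance), assuming clients have already run local SGD and returned their model deltas; variable names follow the paper's FedOpt notation loosely.

```python
import numpy as np

def server_fedadam_round(x, client_deltas, m, v, lr=1e-2,
                         beta1=0.9, beta2=0.99, tau=1e-3):
    """The server treats the average client delta as a pseudo-gradient and
    applies an Adam-style adaptive update to the global model x."""
    delta = np.mean(client_deltas, axis=0)       # average of local-model minus global
    m = beta1 * m + (1 - beta1) * delta
    v = beta2 * v + (1 - beta2) * delta ** 2
    x = x + lr * m / (np.sqrt(v) + tau)          # tau controls adaptivity degree
    return x, m, v
```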

The class imbalance problem in deep learning

K Ghosh, C Bellinger, R Corizzo, P Branco… - Machine Learning, 2024 - Springer
Deep learning has recently unleashed the ability for machine learning (ML) to make
unparalleled strides. It did so by confronting and successfully addressing, at least to a …
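
One standard mitigation surveyed in this line of work is cost-sensitive learning; a minimal PyTorch sketch using inverse-frequency class weights in the loss (the class counts are hypothetical):

```python
import torch
import torch.nn as nn

# Hypothetical long-tailed label distribution over three classes.
class_counts = torch.tensor([9500.0, 400.0, 100.0])
# Inverse-frequency weights: rarer classes contribute more to the loss.
weights = class_counts.sum() / (len(class_counts) * class_counts)
criterion = nn.CrossEntropyLoss(weight=weights)   # use as the training loss
```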