Machine learning–accelerated computational fluid dynamics

D Kochkov, JA Smith, A Alieva, Q Wang… - Proceedings of the …, 2021 - pnas.org
Numerical simulation of fluids plays an essential role in modeling many physical
phenomena, such as weather, climate, aerodynamics, and plasma physics. Fluids are well …

Lipschitz recurrent neural networks

NB Erichson, O Azencot, A Queiruga… - arXiv preprint arXiv …, 2020 - arxiv.org
Viewing recurrent neural networks (RNNs) as continuous-time dynamical systems, we
propose a recurrent unit that describes the hidden state's evolution with two parts: a well …
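
The snippet is cut off mid-sentence, but the stated split of the hidden state's dynamics into two parts suggests a simple illustration. The NumPy sketch below integrates dh/dt = A h + tanh(W h + U x + b) with forward Euler; the matrix names, sizes, step size, and the use of plain random matrices (rather than the paper's constrained parameterization of A and W) are assumptions made for illustration.

```python
# Sketch of a continuous-time recurrent unit in the spirit of the abstract:
# the hidden state evolves as  dh/dt = A h + tanh(W h + U x + b),
# i.e. a linear part plus a 1-Lipschitz nonlinearity.
# Matrices and the forward-Euler step are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
d_hidden, d_input, dt = 16, 8, 0.1

A = rng.normal(scale=0.1, size=(d_hidden, d_hidden))  # linear part of the dynamics
W = rng.normal(scale=0.1, size=(d_hidden, d_hidden))  # recurrent weights inside tanh
U = rng.normal(scale=0.1, size=(d_hidden, d_input))   # input weights
b = np.zeros(d_hidden)

def euler_step(h, x):
    """One forward-Euler step of dh/dt = A h + tanh(W h + U x + b)."""
    return h + dt * (A @ h + np.tanh(W @ h + U @ x + b))

h = np.zeros(d_hidden)
for x_t in rng.normal(size=(5, d_input)):  # toy input sequence
    h = euler_step(h, x_t)
```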

Heavy ball neural ordinary differential equations

H Xia, V Suliafu, H Ji, T Nguyen… - Advances in …, 2021 - proceedings.neurips.cc
We propose heavy ball neural ordinary differential equations (HBNODEs), leveraging the
continuous limit of classical momentum-accelerated gradient descent, to improve neural …
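
For context on the heavy-ball idea, the classical heavy-ball ODE h''(t) + γ h'(t) = f(h(t), t) can be rewritten as a first-order system and integrated numerically. The sketch below does this with forward Euler; the vector field f, the damping value γ, and the integrator are illustrative assumptions rather than the paper's learned components.

```python
# Sketch of a heavy-ball ODE,  h''(t) + gamma * h'(t) = f(h(t), t),
# rewritten as the first-order system  h' = m,  m' = -gamma * m + f(h, t),
# integrated with forward Euler. f, gamma, and the step size are assumptions.
import numpy as np

gamma, dt = 0.5, 0.01

def f(h, t):
    # Stand-in for a learned vector field f_theta(h, t); here a simple linear decay.
    return -h

h = np.array([1.0, -1.0])
m = np.zeros_like(h)
for k in range(1000):
    t = k * dt
    h, m = h + dt * m, m + dt * (-gamma * m + f(h, t))
```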

Pyramid convolutional RNN for MRI image reconstruction

EZ Chen, P Wang, X Chen, T Chen… - IEEE Transactions on …, 2022 - ieeexplore.ieee.org
Fast and accurate MRI image reconstruction from undersampled data is crucial in clinical
practice. Deep learning-based reconstruction methods have shown promising advances in …

Decentralized concurrent learning with coordinated momentum and restart

DE Ochoa, MU Javed, X Chen, JI Poveda - Systems & Control Letters, 2024 - Elsevier
This paper studies the stability and convergence properties of a class of multi-agent
concurrent learning (CL) algorithms with momentum and restart. Such algorithms can be …

Implicit graph neural networks: A monotone operator viewpoint

J Baker, Q Wang, CD Hauck… - … Conference on Machine …, 2023 - proceedings.mlr.press
Implicit graph neural networks (IGNNs), which solve a fixed-point equilibrium equation using
Picard iteration for representation learning, have shown remarkable performance in learning …
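
The Picard iteration mentioned in the abstract can be illustrated with a tiny fixed-point solve. The sketch below iterates an equilibrium map of the form Z = relu(W Z Â + B) until convergence; the specific map, matrices, and tolerance are assumptions for illustration, and the paper's contribution is a monotone-operator analysis of such layers rather than this basic iteration.

```python
# Illustrative Picard iteration for a fixed-point equilibrium layer of the
# form  Z = relu(W @ Z @ A_hat + B),  iterated until successive iterates agree.
# The map, graph, and tolerance are assumptions made for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n_nodes, d = 6, 4

A_hat = np.eye(n_nodes)              # stand-in for a normalized adjacency matrix
W = 0.1 * rng.normal(size=(d, d))    # small norm keeps the map contractive
B = rng.normal(size=(d, n_nodes))    # stand-in for an input-dependent bias term

def picard_fixed_point(tol=1e-6, max_iter=200):
    """Iterate Z <- relu(W Z A_hat + B) until the update falls below tol."""
    Z = np.zeros((d, n_nodes))
    for _ in range(max_iter):
        Z_new = np.maximum(W @ Z @ A_hat + B, 0.0)
        if np.linalg.norm(Z_new - Z) < tol:
            return Z_new
        Z = Z_new
    return Z

Z_star = picard_fixed_point()
```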

Improving neural ordinary differential equations with Nesterov's accelerated gradient method

HHN Nguyen, T Nguyen, H Vo… - Advances in Neural …, 2022 - proceedings.neurips.cc
We propose the Nesterov neural ordinary differential equations (NesterovNODEs), whose
layers solve the second-order ordinary differential equation (ODE) limit of Nesterov's …
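
The snippet cuts off at the ODE itself; for context, the second-order ODE limit of Nesterov's accelerated gradient method derived by Su, Boyd, and Candès is commonly written as follows. This display is only the standard limit; how NesterovNODE layers parameterize and solve it is described in the paper.

```latex
% Second-order ODE limit of Nesterov's accelerated gradient method
% (Su, Boyd, and Candes), shown for context only.
\[
  \ddot{x}(t) + \frac{3}{t}\,\dot{x}(t) + \nabla f\bigl(x(t)\bigr) = 0,
  \qquad x(0) = x_0,\quad \dot{x}(0) = 0.
\]
```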

An automatic learning rate decay strategy for stochastic gradient descent optimization methods in neural networks

K Wang, Y Dou, T Sun, P Qiao… - International Journal of …, 2022 - Wiley Online Library
Stochastic Gradient Descent (SGD)-family optimization methods play a vital role
in training neural networks, attracting growing attention in science and engineering fields of …

Attention network forecasts time‐to‐failure in laboratory shear experiments

H Jasperson, DC Bolton, P Johnson… - Journal of …, 2021 - Wiley Online Library
Rocks under stress deform by creep mechanisms that include formation and slip on small‐
scale internal cracks. Intragranular cracks and slip along grain contacts release energy as …

AdamR-GRUs: Adaptive momentum-based Regularized GRU for HMER problems

A Pal, KP Singh - Applied Soft Computing, 2023 - Elsevier
Handwritten Mathematical Expression Recognition (HMER) is essential to online
education and scientific research. However, discerning the length and characters of …