Convergence of the RMSProp deep learning method with penalty for nonconvex optimization. D. Xu, S. Zhang, H. Zhang, D. P. Mandic. Neural Networks 139, 17-23, 2021. Cited by 188.
Optimization in quaternion dynamic systems: gradient, Hessian, and learning algorithms. D. Xu, Y. Xia, D. P. Mandic. IEEE Transactions on Neural Networks and Learning Systems 27 (2), 249-261, 2015. Cited by 142.
Enabling quaternion derivatives: the generalized HR calculus. D. Xu, C. Jahanchahi, C. C. Took, D. P. Mandic. Royal Society Open Science 2 (8), 150255, 2015. Cited by 124.
The theory of quaternion matrix derivatives. D. Xu, D. P. Mandic. IEEE Transactions on Signal Processing 63 (6), 1543-1556, 2015. Cited by 114.
Convergence analysis of an augmented algorithm for fully complex-valued neural networks. D. Xu, H. Zhang, D. P. Mandic. Neural Networks 69, 44-50, 2015. Cited by 40.
Boundedness and convergence of split-complex back-propagation algorithm with momentum and penalty. H. Zhang, D. Xu, Y. Zhang. Neural Processing Letters 39, 297-307, 2014. Cited by 33.
Learning algorithms in quaternion neural networks using GHR calculus. D. Xu, L. Zhang, H. Zhang. Neural Network World 27 (3), 271, 2017. Cited by 32.
Convergence analysis of three classes of split-complex gradient algorithms for complex-valued recurrent neural networks. D. Xu, H. Zhang, L. Liu. Neural Computation 22 (10), 2655-2677, 2010. Cited by 31.
Relaxed conditions for convergence of batch BPAP for feedforward neural networks. H. Shao, J. Wang, L. Liu, D. Xu, W. Bao. Neurocomputing 153, 174-179, 2015. Cited by 25.
Convergence analysis of AdaBound with relaxed bound functions for non-convex optimization. J. Liu, J. Kong, D. Xu, M. Qi, Y. Lu. Neural Networks 145, 300-307, 2022. Cited by 24.
Convergence analysis of fully complex backpropagation algorithm based on Wirtinger calculus. H. Zhang, X. Liu, D. Xu, Y. Zhang. Cognitive Neurodynamics 8, 261-266, 2014. Cited by 24.
A new adaptive momentum algorithm for split-complex recurrent neural networks. D. Xu, H. Shao, H. Zhang. Neurocomputing 93, 133-136, 2012. Cited by 18.
Convergence of gradient method for Elman networks. W. Wu, D. Xu, Z. Li. Applied Mathematics and Mechanics 29 (9), 1231-1238, 2008. Cited by 17.
Deterministic convergence of complex mini-batch gradient learning algorithm for fully complex-valued neural networks. H. Zhang, Y. Zhang, S. Zhu, D. Xu. Neurocomputing 407, 185-193, 2020. Cited by 13.
Convergence of gradient descent algorithm for diagonal recurrent neural networks. D. Xu, Z. Li, W. Wu, X. Ding, D. Qu. 2007 Second International Conference on Bio-Inspired Computing: Theories and …, 2007. Cited by 13.
Convergence of gradient descent algorithm for a recurrent neuron. D. Xu, Z. Li, W. Wu, X. Ding, D. Qu. Advances in Neural Networks–ISNN 2007: 4th International Symposium on Neural …, 2007. Cited by 13*.
Convergence of gradient method for a fully recurrent neural network. D. Xu, Z. Li, W. Wu. Soft Computing 14, 245-250, 2010. Cited by 12.
On hyper-parameter selection for guaranteed convergence of RMSProp. J. Liu, D. Xu, H. Zhang, D. Mandic. Cognitive Neurodynamics 18 (6), 3227-3237, 2024. Cited by 11.
A decreasing scaling transition scheme from Adam to SGD. K. Zeng, J. Liu, Z. Jiang, D. Xu. Advanced Theory and Simulations 5 (7), 2100599, 2022. Cited by 11.
The augmented complex-valued extreme learning machine. H. Zhang, Y. Wang, D. Xu, J. Wang, L. Xu. Neurocomputing 311, 363-372, 2018. Cited by 11.