| Title | Authors | Venue | Cited by | Year |
| --- | --- | --- | --- | --- |
| Efficient approximation of high-dimensional functions with neural networks | P. Cheridito, A. Jentzen, F. Rossmannek | IEEE Transactions on Neural Networks and Learning Systems 33 (7), 3079–3093 | 58* | 2021 |
| Non-convergence of stochastic gradient descent in the training of deep neural networks | P. Cheridito, A. Jentzen, F. Rossmannek | Journal of Complexity 64, 101540 | 48 | 2021 |
| A proof of convergence for gradient descent in the training of artificial neural networks for constant target functions | P. Cheridito, A. Jentzen, A. Riekert, F. Rossmannek | Journal of Complexity 72, 101646 | 31 | 2022 |
| Landscape analysis for shallow neural networks: complete classification of critical points for affine target functions | P. Cheridito, A. Jentzen, F. Rossmannek | Journal of Nonlinear Science 32 (5), 64 | 21 | 2022 |
| Efficient Sobolev approximation of linear parabolic PDEs in high dimensions | P. Cheridito, F. Rossmannek | arXiv preprint arXiv:2306.16811 | 5 | 2023 |
| Gradient descent provably escapes saddle points in the training of shallow ReLU networks | P. Cheridito, A. Jentzen, F. Rossmannek | Journal of Optimization Theory and Applications, 1–32 | 4 | 2024 |
| State-space systems as dynamic generative models | J.-P. Ortega, F. Rossmannek | arXiv preprint arXiv:2404.08717 | 1 | 2024 |
| Fading memory and the convolution theorem | J.-P. Ortega, F. Rossmannek | arXiv preprint arXiv:2408.07386 | | 2024 |
| The curse of dimensionality and gradient-based training of neural networks: shrinking the gap between theory and applications | F. Rossmannek | ETH Zurich | | 2023 |