Runa Eschenhagen
Verified email at cam.ac.uk - Homepage
Title · Cited by · Year
Laplace Redux – Effortless Bayesian Deep Learning
E Daxberger*, A Kristiadi*, A Immer*, R Eschenhagen*, M Bauer, ...
NeurIPS 2021, 2021
Cited by 332
Practical deep learning with Bayesian principles
K Osawa, S Swaroop, A Jain, R Eschenhagen, RE Turner, R Yokota, ...
NeurIPS 2019, 2019
Cited by 294
Continual deep learning by functional regularisation of memorable past
P Pan, S Swaroop, A Immer, R Eschenhagen, RE Turner, ME Khan
NeurIPS 2020, 2020
Cited by 152
Benchmarking neural network training algorithms
GE Dahl, F Schneider, Z Nado, N Agarwal, CS Sastry, P Hennig, ...
arXiv preprint arXiv:2306.07179, 2023
Cited by 25
Mixtures of Laplace Approximations for Improved Post-Hoc Uncertainty in Deep Learning
R Eschenhagen, E Daxberger, P Hennig, A Kristiadi
Bayesian Deep Learning Workshop, NeurIPS 2021, 2021
Cited by 25
Kronecker-Factored Approximate Curvature for Modern Neural Network Architectures
R Eschenhagen, A Immer, RE Turner, F Schneider, P Hennig
NeurIPS 2023, 2023
Cited by 19
Posterior Refinement Improves Sample Efficiency in Bayesian Neural Networks
A Kristiadi, R Eschenhagen, P Hennig
NeurIPS 2022, 2022
Cited by 12
Promises and Pitfalls of the Linearized Laplace in Bayesian Optimization
A Kristiadi, A Immer, R Eschenhagen, V Fortuin
AABI 2023, 2023
Cited by 8
Approximate Bayesian neural operators: Uncertainty quantification for parametric PDEs
E Magnani, N Krämer, R Eschenhagen, L Rosasco, P Hennig
arXiv preprint arXiv:2208.01565, 2022
Cited by 8
Structured Inverse-Free Natural Gradient Descent: Memory-Efficient & Numerically-Stable KFAC
W Lin, F Dangel, R Eschenhagen, K Neklyudov, A Kristiadi, RE Turner, ...
ICML 2024, 2024
Cited by 8*
Can We Remove the Square-Root in Adaptive Gradient Methods? A Second-Order Perspective
W Lin, F Dangel, R Eschenhagen, J Bae, RE Turner, A Makhzani
ICML 2024, 2024
Cited by 7
Influence Functions for Scalable Data Attribution in Diffusion Models
B Mlodozeniec, R Eschenhagen, J Bae, A Immer, D Krueger, R Turner
ICLR 2025, 2024
Cited by 1
Natural Gradient Variational Inference for Continual Learning in Deep Neural Networks
R Eschenhagen
University of Osnabrück, 2019
Cited by 1
Spectral-factorized Positive-definite Curvature Learning for NN Training
W Lin, F Dangel, R Eschenhagen, J Bae, RE Turner, RB Grosse
arXiv preprint arXiv:2502.06268, 2025
Position: Curvature Matrices Should Be Democratized via Linear Operators
F Dangel, R Eschenhagen, W Ormaniec, A Fernandez, L Tatzel, ...
arXiv preprint arXiv:2501.19183, 2025
Fast Fractional Natural Gradient Descent using Learnable Spectral Factorizations
W Lin, F Dangel, R Eschenhagen, J Bae, RE Turner, RB Grosse
Articles 1–16