A review of uncertainty quantification in deep learning: Techniques, applications and challenges

M Abdar, F Pourpanah, S Hussain, D Rezazadegan… - Information Fusion, 2021 - Elsevier
Uncertainty quantification (UQ) methods play a pivotal role in reducing the impact of
uncertainties during both optimization and decision making processes. They have been …

Stochastic gradient Markov chain Monte Carlo

C Nemeth, P Fearnhead - Journal of the American Statistical …, 2021 - Taylor & Francis
Markov chain Monte Carlo (MCMC) algorithms are generally regarded as the gold
standard technique for Bayesian inference. They are theoretically well-understood and …
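The SG-MCMC family surveyed in this paper replaces the full-data gradient in a Langevin update with a minibatch estimate. Below is a minimal sketch of the prototypical member, stochastic-gradient Langevin dynamics (SGLD), applied to a toy conjugate-Gaussian posterior; the function name, model, and constants are illustrative, not taken from the paper:

```python
import numpy as np

def sgld_sample(data, n_steps=5000, step_size=1e-3, batch_size=32, rng=None):
    """SGLD for the mean of a unit-variance Gaussian likelihood with a
    standard-normal prior (toy conjugate model)."""
    rng = np.random.default_rng(0) if rng is None else rng
    n = len(data)
    theta = 0.0
    samples = []
    for _ in range(n_steps):
        batch = rng.choice(data, size=batch_size)
        # Unbiased minibatch estimate of grad log p(theta | data):
        # -theta (prior term) + (n / |B|) * sum_{x in B} (x - theta) (likelihood term)
        grad = -theta + n * np.mean(batch - theta)
        # Langevin update: drift scaled by step_size/2 plus injected Gaussian noise.
        theta += 0.5 * step_size * grad + np.sqrt(step_size) * rng.standard_normal()
        samples.append(theta)
    return np.array(samples)

rng = np.random.default_rng(1)
data = rng.normal(2.0, 1.0, size=500)
samples = sgld_sample(data, rng=rng)
# After burn-in, the chain's mean approximates the posterior mean (~2 here).
print(samples[1000:].mean())
```

The injected noise term distinguishes SGLD from plain SGD: with it, the iterates approximately sample the posterior rather than collapsing to a point estimate.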

A survey of uncertainty in deep neural networks

J Gawlikowski, CRN Tassi, M Ali, J Lee, M Humt… - Artificial Intelligence …, 2023 - Springer
Over the last decade, neural networks have reached almost every field of science and
become a crucial part of various real world applications. Due to the increasing spread …

Score-based generative modeling with critically-damped Langevin diffusion

T Dockhorn, A Vahdat, K Kreis - arXiv preprint arXiv:2112.07068, 2021 - arxiv.org
Score-based generative models (SGMs) have demonstrated remarkable synthesis quality.
SGMs rely on a diffusion process that gradually perturbs the data towards a tractable …
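The forward perturbation the snippet refers to can be illustrated with the standard variance-preserving diffusion (note: the paper's critically-damped variant augments this with an auxiliary velocity variable; the first-order sketch below, with an assumed linear beta schedule and illustrative constants, shows only the basic idea of perturbing data toward a tractable prior):

```python
import numpy as np

def vp_perturb(x0, t, rng, beta_min=0.1, beta_max=20.0):
    """Sample x_t from the variance-preserving forward diffusion at t in [0, 1].

    For a linear schedule beta(s), the marginal is
    x_t ~ N(alpha(t) * x0, (1 - alpha(t)^2) * I), where
    alpha(t) = exp(-0.5 * integral_0^t beta(s) ds).
    """
    integral = beta_min * t + 0.5 * (beta_max - beta_min) * t**2
    alpha = np.exp(-0.5 * integral)
    return alpha * x0 + np.sqrt(1.0 - alpha**2) * rng.standard_normal(x0.shape)

rng = np.random.default_rng(0)
x0 = rng.normal(5.0, 0.1, size=(10000,))   # "data": sharp and far from the origin
x1 = vp_perturb(x0, t=1.0, rng=rng)
# At t = 1 the marginal is close to a standard normal (the tractable prior),
# regardless of where the data started.
print(round(x1.mean(), 2), round(x1.std(), 2))
```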

B-PINNs: Bayesian physics-informed neural networks for forward and inverse PDE problems with noisy data

L Yang, X Meng, GE Karniadakis - Journal of Computational Physics, 2021 - Elsevier
We propose a Bayesian physics-informed neural network (B-PINN) to solve both forward
and inverse nonlinear problems described by partial differential equations (PDEs) and noisy …

A survey of optimization methods from a machine learning perspective

S Sun, Z Cao, H Zhu, J Zhao - IEEE Transactions on Cybernetics, 2019 - ieeexplore.ieee.org
Machine learning is developing rapidly; it has made many theoretical breakthroughs and is
widely applied in various fields. Optimization, as an important part of machine learning, has …

How good is the Bayes posterior in deep neural networks really?

F Wenzel, K Roth, BS Veeling, J Świątkowski… - arXiv preprint arXiv …, 2020 - arxiv.org
During the past five years the Bayesian deep learning community has developed
increasingly accurate and efficient approximate inference procedures that allow for …

Stochastic gradient descent as approximate bayesian inference

S Mandt, MD Hoffman, DM Blei - Journal of Machine Learning …, 2017 - jmlr.org
Stochastic Gradient Descent with a constant learning rate (constant SGD) simulates a
Markov chain with a stationary distribution. With this perspective, we derive several new …
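The snippet's claim is easy to see in a toy least-squares problem: with a fixed learning rate, the SGD iterates never converge to the minimizer, but settle into a stationary distribution centred on it, with a spread governed by the learning rate and minibatch size. All constants below are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(3.0, 1.0, size=2000)   # least-squares target: minimum at the data mean
lr, batch = 0.05, 8
theta = 0.0
iterates = []
for _ in range(20000):
    batch_x = rng.choice(data, size=batch)
    grad = theta - batch_x.mean()        # gradient of 0.5*(theta - x)^2 averaged over the batch
    theta -= lr * grad                   # constant-step SGD update
    iterates.append(theta)
iterates = np.array(iterates[5000:])     # discard the initial transient
# The iterates behave like samples from a stationary distribution around the
# minimum (~3), not like a sequence converging to it.
print(iterates.mean(), iterates.std())
```

This Markov-chain view is what lets the authors reinterpret constant SGD as an approximate posterior sampler and tune its hyperparameters accordingly.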

Entropy-SGD: Biasing gradient descent into wide valleys

P Chaudhari, A Choromanska, S Soatto… - Journal of Statistical …, 2019 - iopscience.iop.org
This paper proposes a new optimization algorithm called Entropy-SGD for training deep
neural networks that is motivated by the local geometry of the energy landscape. Local …
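A hedged sketch of the algorithm's two-loop structure: an inner Langevin chain explores the loss in a neighbourhood of the current weights, and the outer step moves the weights toward the chain's running mean, which biases the trajectory away from narrow minima. The 1-D loss, full-batch gradients, and all constants here are illustrative simplifications, not the authors' setup:

```python
import numpy as np

def grad_f(x):
    """Gradient of a toy 1-D loss with a narrow minimum at 0 and a wide one at 2,
    i.e. f(x) = -exp(-200 x^2) - 0.9 exp(-2 (x - 2)^2)."""
    return (400 * x * np.exp(-200 * x**2)
            + 3.6 * (x - 2) * np.exp(-2 * (x - 2)**2))

def entropy_sgd_step(w, rng, gamma=1.0, eta=0.1, inner_steps=50, inner_lr=0.01):
    """One Entropy-SGD outer step (toy version with full-batch gradients)."""
    x, mu, alpha = w, w, 0.25
    for _ in range(inner_steps):
        # Langevin step on the proximal objective f(x) + gamma/2 * (x - w)^2.
        g = grad_f(x) + gamma * (x - w)
        x += -inner_lr * g + np.sqrt(inner_lr) * 0.1 * rng.standard_normal()
        mu = (1 - alpha) * mu + alpha * x   # running average of the inner chain
    # Outer update along the (negative) local-entropy gradient, gamma * (w - mu).
    return w - eta * gamma * (w - mu)

rng = np.random.default_rng(0)
w = 0.5
for _ in range(300):
    w = entropy_sgd_step(w, rng)
print(w)   # drifts toward the wide minimum near x = 2
```

The proximal term keeps the inner chain near the current weights, so the outer gradient measures how much low-loss volume surrounds them, which is the "wide valley" bias the title refers to.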

Three factors influencing minima in SGD

S Jastrzębski, Z Kenton, D Arpit, N Ballas… - arXiv preprint arXiv …, 2017 - arxiv.org
We investigate the dynamical and convergent properties of stochastic gradient descent
(SGD) applied to Deep Neural Networks (DNNs). Characterizing the relation between …