Inductive biases for deep learning of higher-level cognition
A fascinating hypothesis is that human and animal intelligence could be explained by a few
principles (rather than an encyclopaedic list of heuristics). If that hypothesis were correct, we …
Uncertainty quantification in machine learning for engineering design and health prognostics: A tutorial
On top of machine learning (ML) models, uncertainty quantification (UQ) functions as an
essential layer of safety assurance that could lead to more principled decision making by …
Uncertainty quantification in scientific machine learning: Methods, metrics, and comparisons
Neural networks (NNs) are currently changing the computational paradigm on how to
combine data with mathematical laws in physics and engineering in a profound way …
Human–machine collaboration for improving semiconductor process development
One of the bottlenecks to building semiconductor chips is the increasing cost required to
develop chemical plasma processes that form the transistors and memory storage cells …
Repulsive deep ensembles are Bayesian
Deep ensembles have recently gained popularity in the deep learning community for their
conceptual simplicity and efficiency. However, maintaining functional diversity between …
Eliciting and learning with soft labels from every annotator
The labels used to train machine learning (ML) models are of paramount importance.
Typically for ML classification tasks, datasets contain hard labels, yet learning using soft …
Dangers of Bayesian model averaging under covariate shift
Approximate Bayesian inference for neural networks is considered a robust alternative to
standard training, often providing good performance on out-of-distribution data. However …
Solution of physics-based inverse problems using conditional generative adversarial networks with full gradient penalty
The solution of probabilistic inverse problems for which the corresponding forward problem
is constrained by physical principles is challenging. This is especially true if the dimension of …
Is novelty predictable?
Machine learning–based design has gained traction in the sciences, most notably in the
design of small molecules, materials, and proteins, with societal applications ranging from …
Do we really need a new theory to understand over-parameterization?
This century saw an unprecedented increase in public and private investment in Artificial
Intelligence (AI) and especially in (Deep) Machine Learning (ML). This led to breakthroughs …