Inductive biases for deep learning of higher-level cognition

A Goyal, Y Bengio - Proceedings of the Royal Society A, 2022 - royalsocietypublishing.org
A fascinating hypothesis is that human and animal intelligence could be explained by a few
principles (rather than an encyclopaedic list of heuristics). If that hypothesis was correct, we …

Uncertainty quantification in machine learning for engineering design and health prognostics: A tutorial

V Nemani, L Biggio, X Huan, Z Hu, O Fink… - … Systems and Signal …, 2023 - Elsevier
On top of machine learning (ML) models, uncertainty quantification (UQ) functions as an
essential layer of safety assurance that could lead to more principled decision making by …

Uncertainty quantification in scientific machine learning: Methods, metrics, and comparisons

AF Psaros, X Meng, Z Zou, L Guo… - Journal of Computational …, 2023 - Elsevier
Neural networks (NNs) are currently changing the computational paradigm on how to
combine data with mathematical laws in physics and engineering in a profound way …

Human–machine collaboration for improving semiconductor process development

KJ Kanarik, WT Osowiecki, Y Lu, D Talukder… - Nature, 2023 - nature.com
One of the bottlenecks to building semiconductor chips is the increasing cost required to
develop chemical plasma processes that form the transistors and memory storage cells …

Repulsive deep ensembles are Bayesian

F D'Angelo, V Fortuin - Advances in Neural Information …, 2021 - proceedings.neurips.cc
Deep ensembles have recently gained popularity in the deep learning community for their
conceptual simplicity and efficiency. However, maintaining functional diversity between …

Eliciting and learning with soft labels from every annotator

KM Collins, U Bhatt, A Weller - Proceedings of the AAAI conference on …, 2022 - ojs.aaai.org
The labels used to train machine learning (ML) models are of paramount importance.
Typically for ML classification tasks, datasets contain hard labels, yet learning using soft …

Dangers of Bayesian model averaging under covariate shift

P Izmailov, P Nicholson, S Lotfi… - Advances in Neural …, 2021 - proceedings.neurips.cc
Approximate Bayesian inference for neural networks is considered a robust alternative to
standard training, often providing good performance on out-of-distribution data. However …

Solution of physics-based inverse problems using conditional generative adversarial networks with full gradient penalty

D Ray, J Murgoitio-Esandi, A Dasgupta… - Computer Methods in …, 2023 - Elsevier
The solution of probabilistic inverse problems for which the corresponding forward problem
is constrained by physical principles is challenging. This is especially true if the dimension of …

Is novelty predictable?

C Fannjiang, J Listgarten - Cold Spring Harbor …, 2024 - cshperspectives.cshlp.org
Machine learning–based design has gained traction in the sciences, most notably in the
design of small molecules, materials, and proteins, with societal applications ranging from …

Do we really need a new theory to understand over-parameterization?

L Oneto, S Ridella, D Anguita - Neurocomputing, 2023 - Elsevier
This century saw an unprecedented increase of public and private investments in Artificial
Intelligence (AI) and especially in (Deep) Machine Learning (ML). This led to breakthroughs …