Direct fit to nature: an evolutionary perspective on biological and artificial neural networks

U Hasson, SA Nastase, A Goldstein - Neuron, 2020 - cell.com
Evolution is a blind fitting process by which organisms become adapted to their
environment. Does the brain use similar brute-force fitting processes to learn how to …

Scaling description of generalization with number of parameters in deep learning

M Geiger, A Jacot, S Spigler, F Gabriel… - Journal of Statistical …, 2020 - iopscience.iop.org
Supervised deep learning involves the training of neural networks with a large number N of
parameters. For large enough N, in the so-called over-parametrized regime, one can …
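
As a rough illustration of the over-parametrized regime mentioned in this snippet (my own sketch, not code from the paper): a random-feature linear model with more trainable parameters N than training points P can interpolate the training set exactly, driving the training loss to (numerically) zero.

import numpy as np

rng = np.random.default_rng(0)
P, d, N = 50, 10, 500                      # P samples, input dim d, N random features (N >> P)
X = rng.standard_normal((P, d))
y = np.sign(X[:, 0])                       # arbitrary binary labels, for illustration only
W = rng.standard_normal((d, N)) / np.sqrt(d)
Phi = np.tanh(X @ W)                       # P x N feature matrix; one trainable weight per feature
theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)        # minimum-norm interpolating solution
print("train MSE:", np.mean((Phi @ theta - y) ** 2))   # ~0: exact fit once N > P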

A jamming transition from under- to over-parametrization affects generalization in deep learning

S Spigler, M Geiger, S d'Ascoli, L Sagun… - Journal of Physics A …, 2019 - iopscience.iop.org
In this paper we first recall the recent result that in deep networks a phase transition,
analogous to the jamming transition of granular media, delimits the over- and under-…

A Bibliometrics-Based systematic review of safety risk assessment for IBS hoisting construction

Y Junjia, AH Alias, NA Haron, N Abu Bakar - Buildings, 2023 - mdpi.com
With urbanization, construction faces many safety accidents, particularly in hoisting.
However, there is a lack of systematic review studies in this area. This paper explored the …

A jamming transition from under- to over-parametrization affects loss landscape and generalization

S Spigler, M Geiger, S d'Ascoli, L Sagun… - arXiv preprint arXiv …, 2018 - arxiv.org
We argue that in fully-connected networks a phase transition delimits the over- and under-
parametrized regimes where fitting can or cannot be achieved. Under some general …
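
To make the under-/over-parametrized distinction concrete, here is an assumed toy sweep (not the paper's experiment): the training error of a random-feature least-squares fit on random labels collapses to zero once the parameter count N reaches the number of training constraints P, the threshold the jamming analogy refers to.

import numpy as np

rng = np.random.default_rng(1)
P, d = 100, 20
X = rng.standard_normal((P, d))
y = rng.choice([-1.0, 1.0], size=P)        # random labels: hardest case to fit
W = rng.standard_normal((d, 400)) / np.sqrt(d)
Phi_full = np.tanh(X @ W)                  # up to 400 random features

for N in (25, 50, 100, 200, 400):          # sweep the parameter count through N = P
    Phi = Phi_full[:, :N]
    theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    print(N, np.mean((Phi @ theta - y) ** 2))   # error drops to ~0 once N >= P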

Geometric compression of invariant manifolds in neural networks

J Paccolat, L Petrini, M Geiger, K Tyloo… - Journal of Statistical …, 2021 - iopscience.iop.org
We study how neural networks compress uninformative input space in models where data
lie in d dimensions but whose labels vary only within a linear manifold of dimension …
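
A hypothetical instance of such data (my own toy construction, not the paper's model): inputs live in d = 10 dimensions, but the label depends on a single linear direction, so the remaining directions are uninformative and could in principle be compressed away.

import numpy as np

rng = np.random.default_rng(2)
d, n = 10, 1000
X = rng.standard_normal((n, d))
y = np.sign(np.sin(2.0 * X[:, 0]))         # label varies only along coordinate 0

# The other coordinates are uninformative: resampling one leaves every label unchanged.
X2 = X.copy()
X2[:, 5] = rng.standard_normal(n)
assert np.array_equal(y, np.sign(np.sin(2.0 * X2[:, 0])))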

Kernel memory networks: A unifying framework for memory modeling

G Iatropoulos, J Brea… - Advances in neural …, 2022 - proceedings.neurips.cc
We consider the problem of training a neural network to store a set of patterns with maximal
noise robustness. A solution, in terms of optimal weights and state update rules, is derived …
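
For background only, the classical Hopfield/Hebbian version of this pattern-storage problem can be sketched in a few lines; the kernel-based optimal weights and update rules derived in the paper are not reproduced here.

import numpy as np

rng = np.random.default_rng(3)
n_units, n_patterns = 100, 5
patterns = rng.choice([-1, 1], size=(n_patterns, n_units))

W = patterns.T @ patterns / n_units        # Hebbian weight matrix
np.fill_diagonal(W, 0.0)

def recall(state, steps=20):
    # Synchronous sign updates; stored patterns are (approximate) fixed points.
    for _ in range(steps):
        state = np.where(W @ state >= 0, 1, -1)
    return state

noisy = patterns[0].copy()
flip = rng.choice(n_units, size=10, replace=False)
noisy[flip] *= -1                          # corrupt 10% of the units
print("overlap after recall:", recall(noisy) @ patterns[0] / n_units)   # close to 1.0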

The loss surface of deep linear networks viewed through the algebraic geometry lens

D Mehta, T Chen, T Tang… - IEEE Transactions on …, 2021 - ieeexplore.ieee.org
By using the viewpoint of modern computational algebraic geometry, we explore properties
of the optimization landscapes of deep linear neural network models. After providing …

Learning by turning: Neural architecture aware optimisation

Y Liu, J Bernstein, M Meister… - … Conference on Machine …, 2021 - proceedings.mlr.press
Descent methods for deep networks are notoriously capricious: they require careful tuning of
step size, momentum and weight decay, and which method will work best on a new …
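
For readers unfamiliar with the knobs this abstract lists, here is a generic heavy-ball SGD update (an assumed illustration, not the architecture-aware optimiser proposed in the paper) showing where step size, momentum, and weight decay enter.

import numpy as np

def sgd_step(w, grad, velocity, lr=0.1, momentum=0.9, weight_decay=1e-4):
    # lr, momentum and weight_decay are the hyperparameters the abstract
    # says must be tuned carefully.
    velocity = momentum * velocity + grad
    w = w - lr * velocity - lr * weight_decay * w
    return w, velocity

target = np.array([1.0, -2.0, 0.5])
w, v = np.zeros(3), np.zeros(3)
for _ in range(200):
    grad = 2.0 * (w - target)              # gradient of a toy quadratic loss
    w, v = sgd_step(w, grad, v)
print(w)                                   # near the minimiser, up to weight-decay shrinkage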

How isotropic kernels perform on simple invariants

J Paccolat, S Spigler, M Wyart - Machine Learning: Science and …, 2021 - iopscience.iop.org