Deep Neural Networks Tend To Extrapolate Predictably

K Kang, A Setlur, C Tomlin, S Levine - arXiv preprint arXiv:2310.00873, 2023 - arxiv.org
Conventional wisdom suggests that neural network predictions tend to be unpredictable and
overconfident when faced with out-of-distribution (OOD) inputs. Our work reassesses this …
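The snippet truncates before the paper's central observation: that as inputs become more OOD, predictions often revert toward the "optimal constant solution" (OCS), the input-independent prediction minimizing average training loss. As a rough sketch of how one might probe this, here is a toy harness (the linear stand-in model and data below are hypothetical, not the authors' code, and a random linear model will not itself reproduce the paper's finding; only the measurement is illustrated):

```python
# Minimal sketch (hypothetical model/data): measure how far a classifier's
# predictions sit from the optimal constant solution (OCS) on in-distribution
# vs. out-of-distribution inputs. For cross-entropy, the OCS is the marginal
# label distribution of the training set.

import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical stand-in for a trained classifier's predictive distribution.
W = rng.normal(size=(16, 3))
def model(x):
    return softmax(x @ W)

# In-distribution inputs vs. inputs shifted far from the training range.
x_in = rng.normal(size=(512, 16))
x_ood = x_in + 10.0 * rng.normal(size=(512, 16))

# OCS for cross-entropy: the training labels' marginal frequencies.
train_labels = rng.integers(0, 3, size=2048)
ocs = np.bincount(train_labels, minlength=3) / len(train_labels)

def mean_kl_to_ocs(p):
    # KL(p || OCS) averaged over inputs; smaller = closer to the constant.
    return np.mean(np.sum(p * (np.log(p + 1e-12) - np.log(ocs + 1e-12)), axis=-1))

print("in-distribution KL to OCS:", mean_kl_to_ocs(model(x_in)))
print("OOD KL to OCS:            ", mean_kl_to_ocs(model(x_ood)))
```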

Learning Low Dimensional State Spaces with Overparameterized Recurrent Neural Nets

E Cohen-Karlik, I Menuhin-Gruman, R Giryes… - arXiv preprint arXiv …, 2022 - arxiv.org
Overparameterization in deep learning typically refers to settings where a trained neural
network (NN) has representational capacity to fit the training data in many ways, some of …
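The snippet's definition of overparameterization, namely that many distinct parameter settings all fit the training data, is easiest to see in the simplest overparameterized model: a linear map with more weights than training points. A minimal sketch on toy data (not from the paper):

```python
# Minimal sketch (toy data, not from the paper): with more parameters than
# training points, many weight vectors fit the data exactly yet disagree on
# new inputs -- the sense of "overparameterization" in the snippet.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(3, 8))          # 3 training points, 8 parameters
y = rng.normal(size=3)

w_min_norm = np.linalg.pinv(X) @ y   # one interpolating solution (min L2 norm)

# Any null-space direction of X can be added without changing the fit.
_, _, Vt = np.linalg.svd(X)
null_dir = Vt[-1]                    # satisfies X @ null_dir ~ 0
w_other = w_min_norm + 5.0 * null_dir

x_test = rng.normal(size=8)
print(np.allclose(X @ w_min_norm, y), np.allclose(X @ w_other, y))  # True True
print(x_test @ w_min_norm, x_test @ w_other)  # different test predictions
```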

The Implicit Bias of Structured State Space Models Can Be Poisoned With Clean Labels

Y Slutzky, Y Alexander, N Razin, N Cohen - arXiv preprint arXiv …, 2024 - arxiv.org
Neural networks are powered by an implicit bias: a tendency of gradient descent to fit
training data in a way that generalizes to unseen data. A recent class of neural network …
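As a concrete instance of the implicit bias the snippet describes (the classic least-squares case, not this paper's state-space setting): gradient descent initialized at zero on an underdetermined linear regression converges to the minimum-L2-norm interpolator, rather than to an arbitrary one of the many solutions that fit the training data. A short sketch:

```python
# Minimal sketch of an implicit bias (textbook least-squares example, not the
# SSM setting of the paper): gradient descent from zero on an underdetermined
# regression selects the minimum-L2-norm solution among all interpolators.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 20))         # 5 examples, 20 weights: underdetermined
y = rng.normal(size=5)

w = np.zeros(20)                     # zero initialization matters here
lr = 0.01
for _ in range(20000):
    w -= lr * X.T @ (X @ w - y)      # gradient of 0.5 * ||Xw - y||^2

w_min_norm = np.linalg.pinv(X) @ y   # closed-form minimum-norm interpolator
print(np.allclose(w, w_min_norm, atol=1e-4))  # True: GD picked this solution
```

The reason: starting from zero, every gradient step lies in the row space of X, so GD can only converge to the interpolator within that subspace, which is exactly the minimum-norm one.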