Interpreting neural computations by examining intrinsic and embedding dimensionality of neural activity
The ongoing exponential rise in recording capacity calls for new approaches for analysing
and interpreting neural data. Effective dimensionality has emerged as an important property …
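This entry's abstract invokes effective dimensionality of neural activity. One common operationalization (not necessarily the exact measure these authors use) is the participation ratio of the covariance eigenspectrum, PR = (Σᵢ λᵢ)² / Σᵢ λᵢ². A minimal sketch, with an illustrative function name and synthetic data:

```python
import numpy as np

def participation_ratio(activity):
    """Effective dimensionality via the participation ratio
    PR = (sum_i lambda_i)^2 / sum_i lambda_i^2, where lambda_i are
    eigenvalues of the (n_neurons x n_neurons) covariance matrix.

    activity: array of shape (n_samples, n_neurons).
    """
    centered = activity - activity.mean(axis=0)
    cov = centered.T @ centered / (activity.shape[0] - 1)
    eigvals = np.linalg.eigvalsh(cov)    # real eigenvalues, ascending
    eigvals = np.clip(eigvals, 0, None)  # guard against tiny negative values
    return eigvals.sum() ** 2 / (eigvals ** 2).sum()

# Synthetic example: a 100-neuron population whose variance is
# concentrated in ~5 latent directions plus weak noise.
rng = np.random.default_rng(0)
latents = rng.normal(size=(1000, 5))
mixing = rng.normal(size=(5, 100))
activity = latents @ mixing + 0.1 * rng.normal(size=(1000, 100))
print(participation_ratio(activity))  # roughly 5
```

The appeal of this measure is that it needs no binning into discrete dimensions: a spectrum dominated by k comparable eigenvalues yields PR ≈ k.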
Signatures of task learning in neural representations
While neural plasticity has long been studied as the basis of learning, the growth of large-
scale neural recording techniques provides a unique opportunity to study how learning …
Feature learning in deep classifiers through intermediate neural collapse
In this paper, we conduct an empirical study of the feature learning process in deep
classifiers. Recent research has identified a training phenomenon called Neural Collapse …
Neural collapse with normalized features: A geometric analysis over the Riemannian manifold
When training overparameterized deep networks for classification tasks, it has been widely
observed that the learned features exhibit a so-called "neural collapse" phenomenon. More …
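Both neural-collapse entries above refer to the same training phenomenon: last-layer features of each class concentrate around their class mean, while the class means spread apart (toward a simplex equiangular tight frame). A standard diagnostic, sketched here with assumed inputs rather than either paper's exact metric, is the ratio of within-class to between-class scatter:

```python
import numpy as np

def within_between_collapse(features, labels):
    """Rough neural-collapse (NC1-style) diagnostic: total within-class
    scatter relative to between-class scatter of last-layer features.
    Values near 0 indicate within-class variability has collapsed.

    features: (n_samples, d) penultimate-layer activations.
    labels:   (n_samples,) integer class labels.
    """
    global_mean = features.mean(axis=0)
    sw, sb = 0.0, 0.0
    for c in np.unique(labels):
        cls = features[labels == c]
        mu_c = cls.mean(axis=0)
        sw += ((cls - mu_c) ** 2).sum()                     # within-class scatter
        sb += len(cls) * ((mu_c - global_mean) ** 2).sum()  # between-class scatter
    return sw / sb
```

Tracking such a ratio layer by layer and epoch by epoch is how one would chart "intermediate" collapse across depth and training time.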
Neural representational geometry underlies few-shot concept learning
Understanding the neural basis of the remarkable human cognitive capacity to learn novel
concepts from just one or a few sensory experiences constitutes a fundamental problem. We …
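Few-shot concept learning of this kind is often modeled with a prototype (nearest-class-mean) classifier: a novel concept is represented by the mean of its few exemplar representations, and queries are assigned to the nearest prototype. A minimal sketch of that baseline, with illustrative names (not these authors' code):

```python
import numpy as np

def prototype_few_shot(support_x, support_y, query_x):
    """Nearest-prototype few-shot classifier.

    support_x: (n_support, d) features of the few labeled examples.
    support_y: (n_support,) concept labels.
    query_x:   (n_query, d) features to classify.
    """
    classes = np.unique(support_y)
    prototypes = np.stack(
        [support_x[support_y == c].mean(axis=0) for c in classes]
    )
    # Squared Euclidean distance from every query to every prototype.
    dists = ((query_x[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=-1)
    return classes[dists.argmin(axis=1)]
```

Geometric properties of the concept manifolds (radius, dimensionality, signal-to-noise of mean separation) then predict when this simple rule succeeds from one or a few examples.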
Gradient-based learning drives robust representations in recurrent neural networks by balancing compression and expansion
Neural networks need the right representations of input data to learn. Here we ask how
gradient-based learning shapes a fundamental property of representations in recurrent …
A rainbow in deep network black boxes
A central question in deep learning is to understand the functions learned by deep networks.
What is their approximation class? Do the learned weights and representations depend on …
High-performing neural network models of visual cortex benefit from high latent dimensionality
Geometric descriptions of deep neural networks (DNNs) have the potential to uncover core
representational principles of computational models in neuroscience. Here we examined the …
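This entry links a spectral property of DNN representations (latent dimensionality) to how well they predict visual cortex. A typical analysis pairs a dimensionality measure with a cross-validated encoding model; the sketch below uses random placeholder arrays where the real analysis would use stimulus-evoked model features and neural recordings:

```python
import numpy as np
from sklearn.linear_model import RidgeCV

# Placeholder arrays (random, so the encoding score will be near chance):
# model_feats (n_stimuli, n_units) is a DNN layer's response to a stimulus
# set; neural_resp (n_stimuli, n_sites) is the recorded cortical response.
rng = np.random.default_rng(1)
model_feats = rng.normal(size=(500, 256))
neural_resp = rng.normal(size=(500, 32))

# Latent dimensionality of the model layer: participation ratio of the
# eigenspectrum of its response covariance.
eig = np.clip(np.linalg.eigvalsh(np.cov(model_feats.T)), 0, None)
latent_dim = eig.sum() ** 2 / (eig ** 2).sum()

# Encoding performance: ridge regression from model features to each
# recording site, scored (R^2) on held-out stimuli.
train, test = np.arange(400), np.arange(400, 500)
ridge = RidgeCV(alphas=np.logspace(-2, 4, 13))
ridge.fit(model_feats[train], neural_resp[train])
score = ridge.score(model_feats[test], neural_resp[test])
print(latent_dim, score)
```

Comparing latent_dim against the encoding score across many models is one way to test whether higher-dimensional representations predict cortex better.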
Landscape and training regimes in deep learning
Deep learning algorithms are responsible for a technological revolution in a variety of tasks
including image recognition or Go playing. Yet, why they work is not understood. Ultimately …
Deep nonparametric regression on approximate manifolds: Nonasymptotic error bounds with polynomial prefactors
The Annals of Statistics 2023, Vol. 51, No. 2, 691–716 …