Neural collapse: A review on modelling principles and generalization
V. Kothapalli - arXiv preprint arXiv:2206.04041, 2022 - arxiv.org
Deep classifier neural networks enter the terminal phase of training (TPT) when training
error reaches zero and tend to exhibit intriguing Neural Collapse (NC) properties. Neural …
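For orientation, the four NC properties this review catalogues (introduced by Papyan, Han, and Donoho, 2020) can be sketched in standard notation, with h_{k,i} the last-layer feature of sample i in class k, mu_k the class mean, mu_G the global mean, and w_k the classifier vector of class k:

```latex
\begin{align*}
\text{(NC1)}\;& \Sigma_W := \operatorname*{Avg}_{k,i}\,(h_{k,i}-\mu_k)(h_{k,i}-\mu_k)^{\top} \to 0
  && \text{within-class variability collapses}\\
\text{(NC2)}\;& \frac{\langle \tilde{\mu}_k,\tilde{\mu}_{k'}\rangle}{\|\tilde{\mu}_k\|\,\|\tilde{\mu}_{k'}\|} \to \frac{-1}{K-1},
  \qquad \tilde{\mu}_k := \mu_k - \mu_G
  && \text{class means form a simplex ETF}\\
\text{(NC3)}\;& \frac{w_k}{\|w_k\|} - \frac{\tilde{\mu}_k}{\|\tilde{\mu}_k\|} \to 0
  && \text{classifier aligns with class means}\\
\text{(NC4)}\;& \operatorname*{arg\,max}_{k'}\,\langle w_{k'}, h\rangle \to \operatorname*{arg\,min}_{k'}\,\|h-\mu_{k'}\|
  && \text{prediction becomes nearest class center}
\end{align*}
```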
On the optimization landscape of neural collapse under MSE loss: Global optimality with unconstrained features
When training deep neural networks for classification tasks, an intriguing empirical
phenomenon has been widely observed in the last-layer classifiers and features, where (i) …
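The program behind results of this kind is, up to each paper's exact regularizers and an optional bias term, the regularized unconstrained features model under MSE: the last-layer features H in R^{d x N} are treated as free variables alongside the classifier W in R^{K x d}, with Y the one-hot label matrix:

```latex
\min_{W \in \mathbb{R}^{K\times d},\; H \in \mathbb{R}^{d\times N}}\;
\frac{1}{2N}\,\|WH - Y\|_F^{2}
\;+\;\frac{\lambda_W}{2}\,\|W\|_F^{2}
\;+\;\frac{\lambda_H}{2}\,\|H\|_F^{2}
```

The headline claim named in the title is that, despite nonconvexity, every local minimizer of such a program is global and exhibits the collapsed geometry.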
A geometric analysis of neural collapse with unconstrained features
We provide the first global optimization landscape analysis of Neural Collapse, an intriguing
empirical phenomenon that arises in the last-layer classifiers and features of neural …
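Since the simplex equiangular tight frame (ETF) recurs throughout these landscape analyses, a minimal NumPy sketch of its standard construction may help; it builds the K-class ETF M = sqrt(K/(K-1)) (I_K - (1/K) 11^T) and checks the defining properties (unit norms, pairwise cosine -1/(K-1)):

```python
import numpy as np

def simplex_etf(K: int) -> np.ndarray:
    """Columns are the K vertices of a simplex ETF in R^K."""
    return np.sqrt(K / (K - 1)) * (np.eye(K) - np.ones((K, K)) / K)

K = 4
M = simplex_etf(K)
G = M.T @ M  # Gram matrix of the vertices

# Each vertex has unit norm ...
assert np.allclose(np.diag(G), 1.0)
# ... and every pair has cosine similarity -1/(K-1): the maximal
# angular spread K directions can achieve around a common center.
off_diag = G[~np.eye(K, dtype=bool)]
assert np.allclose(off_diag, -1.0 / (K - 1))
```

The vertices span a (K-1)-dimensional subspace; for feature dimension d > K the same vertices are embedded via any matrix with orthonormal columns, which is how the construction appears in these papers.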
Understanding imbalanced semantic segmentation through neural collapse
A recent study has shown a phenomenon called neural collapse in which the within-class
means of features and the classifier weight vectors converge to the vertices of a simplex …
Inducing neural collapse in imbalanced learning: Do we really need a learnable classifier at the end of deep neural network?
Modern deep neural networks for classification usually jointly learn a backbone for
representation and a linear classifier to output the logit of each class. A recent study has …
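A minimal PyTorch sketch of the idea, assuming feature dimension d >= K classes; the class name and the random orthonormal embedding P are illustrative choices, not taken from the paper's code:

```python
import torch
import torch.nn as nn

class FixedETFClassifier(nn.Module):
    """Linear classifier whose weight is frozen at a simplex ETF."""

    def __init__(self, d: int, K: int):
        super().__init__()
        assert d >= K, "feature dim must be at least the number of classes"
        # Random orthonormal columns P (d x K) embed the K-vertex ETF in R^d.
        P, _ = torch.linalg.qr(torch.randn(d, K))
        etf = ((K / (K - 1)) ** 0.5) * (torch.eye(K) - torch.ones(K, K) / K)
        # (K x d) weight, registered as a buffer so it receives no gradients.
        self.register_buffer("weight", (P @ etf).t())

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # Logits are plain inner products with the fixed class directions.
        return features @ self.weight.t()
```

Only the backbone is then trained against these frozen class directions; which loss to pair with the fixed classifier, especially under class imbalance, is the design question the paper studies.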
Extended unconstrained features model for exploring deep neural collapse
The modern strategy for training deep neural networks for classification tasks includes
optimizing the network's weights even after the training error vanishes to further push the …
Are all losses created equal: A neural collapse perspective
While cross entropy (CE) is the most commonly used loss function to train deep neural
networks for classification tasks, many alternative losses have been developed to obtain …
Imbalance trouble: Revisiting neural-collapse geometry
Neural Collapse refers to the remarkable structural properties characterizing the geometry of
class embeddings and classifier weights, found by deep nets when trained beyond zero …
Exploring deep neural networks via layer-peeled model: Minority collapse in imbalanced training
In this paper, we introduce the Layer-Peeled Model, a nonconvex, yet analytically tractable,
optimization program, in a quest to better understand deep neural networks that are trained …
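Concretely, the Layer-Peeled Model keeps only the last-layer features h_{k,i} and classifier vectors w_k as decision variables and bounds their average energy; up to notational details it reads:

```latex
\min_{W,\,H}\;\; \frac{1}{N}\sum_{k=1}^{K}\sum_{i=1}^{n_k}
  \mathcal{L}\big(W h_{k,i},\, y_k\big)
\quad \text{s.t.}\quad
\frac{1}{K}\sum_{k=1}^{K} \|w_k\|_2^2 \le E_W,
\qquad
\frac{1}{K}\sum_{k=1}^{K} \frac{1}{n_k}\sum_{i=1}^{n_k} \|h_{k,i}\|_2^2 \le E_H
```

With balanced classes its minimizers recover the simplex-ETF geometry; under heavy imbalance the paper identifies "minority collapse", where the classifier vectors of minority classes draw together.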
Neural collapse under MSE loss: Proximity to and dynamics on the central path
The recently discovered Neural Collapse (NC) phenomenon occurs pervasively in today's
deep net training paradigm of driving cross-entropy (CE) loss towards zero. During NC, last …