Inducing neural collapse in imbalanced learning: Do we really need a learnable classifier at the end of deep neural network?
Modern deep neural networks for classification usually jointly learn a backbone for
representation and a linear classifier to output the logit of each class. A recent study has …
Arcface: Additive angular margin loss for deep face recognition
One of the main challenges in feature learning using Deep Convolutional Neural Networks
(DCNNs) for large-scale face recognition is the design of appropriate loss functions that can …
Neural collapse with normalized features: A geometric analysis over the riemannian manifold
When training overparameterized deep networks for classification tasks, it has been widely
observed that the learned features exhibit a so-called "neural collapse" phenomenon. More …
Class-incremental learning with pre-allocated fixed classifiers
In class-incremental learning, a learning agent faces a stream of data with the goal of
learning new classes while not forgetting previous ones. Neural networks are known to …
Stationary representations: Optimally approximating compatibility and implications for improved model replacements
Learning compatible representations enables the interchangeable use of semantic features
as models are updated over time. This is particularly relevant in search and retrieval systems …
Inducing neural collapse to a fixed hierarchy-aware frame for reducing mistake severity
T Liang, J Davis - … of the IEEE/CVF International Conference …, 2023 - openaccess.thecvf.com
There is a recently discovered and intriguing phenomenon called Neural Collapse: at the
terminal phase of training a deep neural network for classification, the within-class …
Regular polytope networks
Neural networks are widely used as a model for classification in a large variety of tasks.
Typically, a learnable transformation (i.e., the classifier) is placed at the end of such models …
On modality bias recognition and reduction
Making each modality in multi-modal data contribute is of vital importance to learning a
versatile multi-modal model. Existing methods, however, are often dominated by one or few …
Cores: Compatible representations via stationarity
Compatible features enable the direct comparison of old and new learned features, allowing them to be used interchangeably over time. In visual search systems, this eliminates the need to …
Fine-grained adversarial semi-supervised learning
In this article, we exploit Semi-Supervised Learning (SSL) to increase the amount of training
data to improve the performance of Fine-Grained Visual Categorization (FGVC). This …