AUC maximization for low-resource named entity recognition
Current work in named entity recognition (NER) uses either cross entropy (CE) or
conditional random fields (CRF) as the objective/loss functions to optimize the underlying …
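The non-differentiable AUC is usually maximized through a pairwise surrogate loss rather than directly. As a generic point of reference (a minimal sketch, not the formulation used in the paper above), a squared-hinge pairwise AUC surrogate in PyTorch could look like the following; the function name and margin default are illustrative assumptions, and the batch is assumed to contain at least one positive and one negative example.

```python
import torch

def pairwise_auc_surrogate(scores, labels, margin=1.0):
    """Generic squared-hinge surrogate for AUC maximization.

    Penalizes every (positive, negative) pair whose score gap falls
    below `margin`; driving the loss to zero pushes all positives
    above all negatives, i.e. AUC toward 1.
    """
    pos = scores[labels == 1]                      # scores of positive examples
    neg = scores[labels == 0]                      # scores of negative examples
    diffs = pos.unsqueeze(1) - neg.unsqueeze(0)    # all positive-negative score gaps
    return torch.clamp(margin - diffs, min=0).pow(2).mean()
```

Since AUC equals the probability that a randomly drawn positive is scored above a randomly drawn negative, pairwise surrogates of this form are the usual starting point for the AUC-oriented objectives listed here.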
Coltr: Semi-supervised learning to rank with co-training and over-parameterization for web search
While learning to rank (LTR) has been widely used in web search to prioritize the most relevant
webpages among the content retrieved for a given query, the traditional LTR …
Uncertainty-aware graph-based hyperspectral image classification
Hyperspectral imaging (HSI) technology captures spectral information across a broad
wavelength range, providing richer pixel features compared to traditional color images with …
Understanding and bridging the gap between unsupervised network representation learning and security analytics
Cyber-attacks have become increasingly sophisticated, which also drives the development
of security analytics that produce countermeasures by mining organizational logs, e.g. …
AUCSeg: AUC-oriented Pixel-level Long-tail Semantic Segmentation
The Area Under the ROC Curve (AUC) is a well-known metric for evaluating instance-level
long-tail learning problems. In the past two decades, many AUC optimization methods have …
DRAUC: an instance-wise distributionally robust AUC optimization framework
The Area Under the ROC Curve (AUC) is a widely employed metric in long-tailed
classification scenarios. Nevertheless, most existing methods primarily assume that training …
Optimal large-scale stochastic optimization of NDCG surrogates for deep learning
In this paper, we introduce principled stochastic algorithms to efficiently optimize Normalized
Discounted Cumulative Gain (NDCG) and its top-K variant for deep models. To this end, we …
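For reference alongside the NDCG entry above, the metric itself is straightforward to compute on a single ranked list, even though the paper optimizes smooth surrogates of it for training. Below is a minimal NumPy sketch of NDCG@k under the common exponential-gain convention; the function name is an illustrative assumption, not the paper's code.

```python
import numpy as np

def ndcg_at_k(relevances, k):
    """NDCG@k: DCG of the list in its predicted order divided by
    the DCG of the ideal (relevance-descending) ordering."""
    rel = np.asarray(relevances, dtype=float)[:k]
    discounts = 1.0 / np.log2(np.arange(2, rel.size + 2))   # 1 / log2(rank + 1)
    dcg = np.sum((2.0 ** rel - 1.0) * discounts)
    ideal = np.sort(np.asarray(relevances, dtype=float))[::-1][:k]
    idcg = np.sum((2.0 ** ideal - 1.0) * discounts[:ideal.size])
    return dcg / idcg if idcg > 0 else 0.0

# Example: relevance grades listed in the order the model ranked the documents.
print(ndcg_at_k([3, 2, 3, 0, 1, 2], k=5))
```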
A self-supervised learning approach for registration agnostic imaging models with 3D brain CTA
Deep learning-based neuroimaging pipelines for acute stroke typically rely on image
registration, which not only increases computation but also introduces a point of failure. In …
Boosting Few-Shot Learning with Disentangled Self-Supervised Learning and Meta-Learning for Medical Image Classification
Background and objective: Employing deep learning models in critical domains such as
medical imaging poses challenges associated with the limited availability of training data …
AUC Optimization from Multiple Unlabeled Datasets
Weakly supervised learning aims to make machine learning more powerful when perfect
supervision is unavailable, and has attracted much attention from researchers. Among the …