Cross-entropy loss functions: Theoretical analysis and applications
Cross-entropy is a widely used loss function in applications. It coincides with the logistic loss
applied to the outputs of a neural network, when the softmax is used. But, what guarantees …
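As a reminder of the identity the abstract alludes to (standard background, not text from the paper): applying cross-entropy to softmax outputs yields the multinomial logistic loss on the raw scores $s = (s_1, \dots, s_K)$ with true class $y$,

$$
\ell_{\mathrm{CE}}(s, y) \;=\; -\log \frac{e^{s_y}}{\sum_{k=1}^{K} e^{s_k}} \;=\; \log \sum_{k=1}^{K} e^{s_k} \;-\; s_y .
$$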
A theoretical analysis of NDCG type ranking measures
Ranking has been extensively studied in information retrieval, machine learning and
statistics. A central problem in ranking is to design a ranking measure for evaluation of …
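For reference, the NDCG family of measures named in the title is standardly defined as follows (the common gain and discount choices are shown; variants exist):

$$
\mathrm{DCG}@k \;=\; \sum_{i=1}^{k} \frac{2^{r_i} - 1}{\log_2(i + 1)},
\qquad
\mathrm{NDCG}@k \;=\; \frac{\mathrm{DCG}@k}{\mathrm{IDCG}@k},
$$

where $r_i$ is the relevance grade of the item at rank $i$ and $\mathrm{IDCG}@k$ is the value of $\mathrm{DCG}@k$ under the ideal ordering.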
FastXML: A fast, accurate and stable tree-classifier for extreme multi-label learning
The objective in extreme multi-label classification is to learn a classifier that can
automatically tag a data point with the most relevant subset of labels from a large label set …
$H$-Consistency Bounds: Characterization and Extensions
A series of recent publications by Awasthi et al. have introduced the key notion of *$H$-consistency bounds* for surrogate loss functions. These are upper bounds on the zero-one …
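Schematically, and hedging on the exact constants, an $H$-consistency bound has the form

$$
\mathcal{R}_{\ell_{0\text{-}1}}(h) - \inf_{h' \in H} \mathcal{R}_{\ell_{0\text{-}1}}(h')
\;\le\;
\Gamma\!\Big( \mathcal{R}_{\ell}(h) - \inf_{h' \in H} \mathcal{R}_{\ell}(h') \Big)
\qquad \text{for all } h \in H,
$$

for a non-decreasing function $\Gamma$, where $\mathcal{R}_{\ell}$ denotes the expected loss under surrogate $\ell$; the published bounds also carry minimizability-gap terms, omitted here.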
Multi-Class $H$-Consistency Bounds
We present an extensive study of $H$-consistency bounds for multi-class classification. These are upper bounds on the target loss estimation error of a predictor in a hypothesis set …
Two-sided fairness in rankings via Lorenz dominance
We consider the problem of generating rankings that are fair towards both users and item producers in recommender systems. We address both usual recommendation (e.g., of music …
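As a reminder of the criterion in the title (the standard generalized Lorenz dominance definition, not text from the paper): a utility vector $u$ Lorenz-dominates $v$ when

$$
\sum_{i=1}^{k} u_{(i)} \;\ge\; \sum_{i=1}^{k} v_{(i)} \qquad \text{for all } k,
$$

where $u_{(1)} \le \dots \le u_{(n)}$ denotes the utilities sorted in increasing order, so every worst-off group is at least as well off under $u$.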
$H$-Consistency Bounds for Pairwise Misranking Loss Surrogates
We present a detailed study of $H$-consistency bounds for score-based ranking. These are upper bounds on the target loss estimation error of a predictor in a hypothesis set $H$ …
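The target loss in this setting is the pairwise zero-one misranking loss; one common formalization (notation assumed, not taken from the snippet) is

$$
\ell_{0\text{-}1}\big(h, (x, x'), (y, y')\big)
\;=\;
\mathbb{1}\big[(h(x) - h(x'))(y - y') < 0\big],
$$

which penalizes the scoring function $h$ whenever it orders the pair $(x, x')$ opposite to their labels.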
Learning with Fenchel-Young losses
Over the past decades, numerous loss functions have been proposed for a variety of supervised learning tasks, including regression, classification, ranking, and more generally …
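The unifying definition in this line of work is the Fenchel-Young loss generated by a regularizer $\Omega$ with convex conjugate $\Omega^*$:

$$
L_{\Omega}(\theta; y) \;=\; \Omega^{*}(\theta) + \Omega(y) - \langle \theta, y \rangle .
$$

Taking $\Omega$ to be the negative Shannon entropy restricted to the simplex recovers the logistic (cross-entropy) loss, while other choices of $\Omega$ yield sparsemax and further structured losses.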
Ranking with abstention
We introduce a novel framework of ranking with abstention, where the learner can abstain from making a prediction at some limited cost $c$. We present an extensive theoretical …
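One natural way to formalize abstention at cost $c$ (a hedged sketch using a rejector function $r$; the paper's exact formulation may differ) is

$$
L(h, r, x, y) \;=\; \ell(h, x, y)\,\mathbb{1}[r(x) > 0] \;+\; c\,\mathbb{1}[r(x) \le 0],
$$

so the learner pays the underlying ranking loss $\ell$ when it predicts and the fixed cost $c$ when it abstains.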
A cross-benchmark comparison of 87 learning to rank methods
Learning to rank is an increasingly important scientific field that comprises the use of
machine learning for the ranking task. New learning to rank methods are generally …