Cross-entropy loss functions: Theoretical analysis and applications

A Mao, M Mohri, Y Zhong - International conference on …, 2023 - proceedings.mlr.press
Cross-entropy is a widely used loss function in applications. It coincides with the logistic loss
applied to the outputs of a neural network when the softmax is used. But what guarantees …
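
For concreteness, the identity the snippet alludes to can be written out (notation here is mine, not the paper's): for network scores $s = (s_1, \dots, s_c)$ and true class $y$, the cross-entropy of the softmax outputs equals the multinomial logistic loss,

$$\ell(s, y) = -\log\frac{e^{s_y}}{\sum_{j=1}^{c} e^{s_j}} = \log\sum_{j=1}^{c} e^{s_j - s_y}.$$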

A theoretical analysis of NDCG type ranking measures

Y Wang, L Wang, Y Li, D He… - Conference on learning …, 2013 - proceedings.mlr.press
Ranking has been extensively studied in information retrieval, machine learning and
statistics. A central problem in ranking is to design a ranking measure for evaluation of …
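
As a reminder of the measure in question, one standard form of NDCG at cutoff $k$, for a predicted ordering $\pi$ over items with graded relevances $r_i$ (gain and discount conventions vary across papers), is

$$\mathrm{DCG@}k(\pi) = \sum_{i=1}^{k} \frac{2^{r_{\pi(i)}} - 1}{\log_2(i+1)}, \qquad \mathrm{NDCG@}k(\pi) = \frac{\mathrm{DCG@}k(\pi)}{\mathrm{DCG@}k(\pi^*)},$$

where $\pi^*$ is the ideal ordering by decreasing relevance.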

FastXML: A fast, accurate and stable tree-classifier for extreme multi-label learning

Y Prabhu, M Varma - Proceedings of the 20th ACM SIGKDD international …, 2014 - dl.acm.org
The objective in extreme multi-label classification is to learn a classifier that can
automatically tag a data point with the most relevant subset of labels from a large label set …

$H$-Consistency Bounds: Characterization and Extensions

A Mao, M Mohri, Y Zhong - Advances in Neural Information …, 2024 - proceedings.neurips.cc
A series of recent publications by Awasthi et al. has introduced the key notion of *$H$-consistency
bounds* for surrogate loss functions. These are upper bounds on the zero-one …
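
The generic shape of such a bound, restated here from memory rather than quoted from the paper: for a surrogate loss $\Phi$, a non-decreasing function $f$, and every $h$ in the hypothesis set $H$,

$$\mathcal{R}_{\ell_{0\text{-}1}}(h) - \mathcal{R}^{*}_{\ell_{0\text{-}1}}(H) \;\le\; f\big(\mathcal{R}_{\Phi}(h) - \mathcal{R}^{*}_{\Phi}(H)\big),$$

where $\mathcal{R}_{\ell}(h)$ denotes expected loss and $\mathcal{R}^{*}_{\ell}(H)$ its infimum over $H$; the papers in this series typically also track minimizability gap terms, omitted in this sketch.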

Multi-Class $H$-Consistency Bounds

P Awasthi, A Mao, M Mohri… - Advances in neural …, 2022 - proceedings.neurips.cc
We present an extensive study of $H$-consistency bounds for multi-class classification.
These are upper bounds on the target loss estimation error of a predictor in a hypothesis set …

Two-sided fairness in rankings via Lorenz dominance

V Do, S Corbett-Davies, J Atif… - Advances in Neural …, 2021 - proceedings.neurips.cc
We consider the problem of generating rankings that are fair towards both users and item
producers in recommender systems. We address both usual recommendation (e.g., of music …
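
The dominance notion in the title, stated in its standard (generalized) form rather than the paper's exact notation: a utility profile $u \in \mathbb{R}^n$ Lorenz-dominates $v \in \mathbb{R}^n$ if, with $u_{(1)} \le \dots \le u_{(n)}$ the utilities sorted in increasing order,

$$\sum_{i=1}^{k} u_{(i)} \;\ge\; \sum_{i=1}^{k} v_{(i)} \quad \text{for all } k = 1, \dots, n,$$

i.e. every worst-off segment of users (or producers) is at least as well off under $u$.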

$H$-Consistency Bounds for Pairwise Misranking Loss Surrogates

A Mao, M Mohri, Y Zhong - International conference on …, 2023 - proceedings.mlr.press
We present a detailed study of $H$-consistency bounds for score-based ranking. These
are upper bounds on the target loss estimation error of a predictor in a hypothesis set $H$ …
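
Here the target loss is the pairwise zero-one misranking loss; in a common score-based formulation (notation mine), for a pair $(x, x')$ labeled with a preference $y \in \{-1, +1\}$ and a scoring function $h$,

$$\ell_{\mathrm{rank}}(h, x, x', y) = \mathbb{1}\big[\, y\,(h(x) - h(x')) \le 0 \,\big],$$

up to the usual convention for handling ties, and the bounds relate its estimation error to that of convex surrogates of the margin $y\,(h(x) - h(x'))$.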

Learning with Fenchel-Young losses

M Blondel, AFT Martins, V Niculae - Journal of Machine Learning Research, 2020 - jmlr.org
Over the past decades, numerous loss functions have been proposed for a variety of
supervised learning tasks, including regression, classification, ranking, and more generally …
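
The construction at the heart of this paper, as I recall it: given a regularizer $\Omega$ with convex conjugate $\Omega^{*}$, the Fenchel-Young loss it generates is

$$L_{\Omega}(\theta; y) = \Omega^{*}(\theta) + \Omega(y) - \langle \theta, y \rangle,$$

which is non-negative by the Fenchel-Young inequality and recovers, for instance, the logistic (cross-entropy) loss when $\Omega$ is the negative Shannon entropy restricted to the probability simplex.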

Ranking with abstention

A Mao, M Mohri, Y Zhong - arXiv preprint arXiv:2307.02035, 2023 - arxiv.org
We introduce a novel framework of ranking with abstention, where the learner can abstain
from making a prediction at some limited cost $c$. We present an extensive theoretical …
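
One simple way to write down the abstention mechanism described here (my paraphrase, not necessarily the paper's exact formulation): the learner may output a special abstain decision, charged a fixed cost $c$, in place of the usual ranking loss $\ell$,

$$\ell_{\mathrm{abst}}(h, z) = \begin{cases} c & \text{if } h \text{ abstains on } z,\\ \ell(h, z) & \text{otherwise.} \end{cases}$$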

A cross-benchmark comparison of 87 learning to rank methods

N Tax, S Bockting, D Hiemstra - Information processing & management, 2015 - Elsevier
Learning to rank is an increasingly important scientific field that comprises the use of
machine learning for the ranking task. New learning to rank methods are generally …