Calibrated Stackelberg games: Learning optimal commitments against calibrated agents

N Haghtalab, C Podimata… - Advances in Neural …, 2023 - proceedings.neurips.cc
In this paper, we introduce a generalization of the standard Stackelberg Games (SGs)
framework: Calibrated Stackelberg Games. In CSGs, a principal repeatedly interacts with an …

On-demand sampling: Learning optimally from multiple distributions

N Haghtalab, M Jordan, E Zhao - Advances in Neural …, 2022 - proceedings.neurips.cc
Societal and real-world considerations such as robustness, fairness, social welfare and multi-
agent tradeoffs have given rise to multi-distribution learning paradigms, such as …

Optimal multi-distribution learning

Z Zhang, W Zhan, Y Chen, SS Du… - The Thirty Seventh …, 2024 - proceedings.mlr.press
Multi-distribution learning (MDL), which seeks to learn a shared model that
minimizes the worst-case risk across $k$ distinct data distributions, has emerged as a …
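
For reference, the worst-case objective described in this snippet is usually written as follows (the hypothesis class $\mathcal{F}$, distributions $D_1, \dots, D_k$, and loss $\ell$ are generic notation assumed here, not taken from the entry): $\min_{f \in \mathcal{F}} \max_{i \in [k]} \mathbb{E}_{z \sim D_i}[\ell(f, z)]$.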

Group-wise oracle-efficient algorithms for online multi-group learning

S Deng, J Liu, DJ Hsu - Advances in Neural Information …, 2025 - proceedings.neurips.cc
We study the problem of online multi-group learning, a learning model in which an online
learner must simultaneously achieve small prediction regret on a large collection of …

When is Multicalibration Post-Processing Necessary?

D Hansen, S Devic, P Nakkiran… - Advances in Neural …, 2025 - proceedings.neurips.cc
Calibration is a well-studied property of predictors which guarantees meaningful uncertainty
estimates. Multicalibration is a related notion, originating in algorithmic fairness, which …
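
As a point of reference (standard definitions, not quoted from the entry): a predictor $f$ is calibrated if $\mathbb{E}[Y \mid f(X) = v] = v$ for every value $v$ it outputs, and multicalibrated with respect to a collection of groups $\mathcal{G}$ if the same conditional-mean condition holds within each group, i.e. $\mathbb{E}[Y \mid f(X) = v, X \in g] = v$ for every $g \in \mathcal{G}$.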

Fairness-Aware Estimation of Graphical Models

Z Zhou, D Ataee Tarzanagh, B Hou… - Advances in Neural …, 2025 - proceedings.neurips.cc
This paper examines the issue of fairness in the estimation of graphical models (GMs),
particularly Gaussian, Covariance, and Ising models. These models play a vital role in …

Truthfulness of Calibration Measures

N Haghtalab, M Qiao, K Yang… - Advances in Neural …, 2025 - proceedings.neurips.cc
We study calibration measures in a sequential prediction setup. In addition to rewarding
accurate predictions (completeness) and penalizing incorrect ones (soundness), an …

Convergence of $\log(1/\epsilon)$ for Gradient-Based Algorithms in Zero-Sum Games without the Condition Number: A Smoothed Analysis

I Anagnostides, T Sandholm - Advances in Neural …, 2025 - proceedings.neurips.cc
Gradient-based algorithms have shown great promise in solving large (two-player) zero-sum
games. However, their success has been mostly confined to the low-precision regime since …

Stability and multigroup fairness in ranking with uncertain predictions

S Devic, A Korolova, D Kempe, V Sharan - arXiv preprint arXiv …, 2024 - arxiv.org
Rankings are ubiquitous across many applications, from search engines to hiring
committees. In practice, many rankings are derived from the output of predictors. However …

Online Mirror Descent for Tchebycheff Scalarization in Multi-Objective Optimization

M Liu, X Zhang, C Xie, K Donahue, H Zhao - arXiv preprint arXiv …, 2024 - arxiv.org
The goal of multi-objective optimization (MOO) is to learn under multiple, potentially
conflicting, objectives. One widely used technique to tackle MOO is through linear …
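
For context, the Tchebycheff scalarization named in the title is commonly written as $\min_{x} \max_{i \in [m]} \lambda_i (f_i(x) - z_i^*)$, where the objectives $f_1, \dots, f_m$, weights $\lambda$, and reference point $z^*$ are generic notation assumed here rather than taken from the entry; this contrasts with the linear scalarization $\min_x \sum_{i} \lambda_i f_i(x)$ alluded to in the snippet.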