Large-scale methods for distributionally robust optimization

D Levy, Y Carmon, JC Duchi, et al. - Advances in Neural Information Processing Systems, 2020 - proceedings.neurips.cc
We propose and analyze algorithms for distributionally robust optimization of convex losses
with conditional value at risk (CVaR) and $\chi^2$ divergence uncertainty sets. We prove …
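
For intuition about the CVaR objective this paper targets, here is a minimal NumPy sketch (an illustration, not the authors' large-scale algorithm): the empirical CVaR at level alpha is the mean of the worst alpha-fraction of per-example losses, consistent with the Rockafellar-Uryasev variational form. The function name `cvar` is our own.

```python
import numpy as np

def cvar(losses, alpha):
    """Empirical CVaR_alpha: mean of the worst alpha-fraction of losses.

    Matches (up to discretization) the variational form
    min_eta { eta + E[(loss - eta)_+] / alpha }.
    """
    losses = np.asarray(losses, dtype=float)
    k = max(1, int(np.ceil(alpha * losses.size)))  # number of tail samples
    worst = np.sort(losses)[-k:]                   # the k largest losses
    return float(worst.mean())

rng = np.random.default_rng(0)
losses = rng.exponential(size=1000)
print(cvar(losses, alpha=0.1))  # tail mean, well above the overall mean of ~1.0
```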

Correct-n-contrast: A contrastive approach for improving robustness to spurious correlations

M Zhang, NS Sohoni, HR Zhang, C Finn, et al. - arXiv preprint, 2022 - arxiv.org
Spurious correlations pose a major challenge for robust machine learning. Models trained
with empirical risk minimization (ERM) may learn to rely on correlations between class …

Probable domain generalization via quantile risk minimization

C Eastwood, A Robey, S Singh, et al. - Advances in Neural Information Processing Systems, 2022 - proceedings.neurips.cc
Domain generalization (DG) seeks predictors which perform well on unseen test
distributions by leveraging data drawn from multiple related training distributions or …
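
To make the quantile idea in the title concrete, the sketch below (a toy illustration of the objective, not the paper's algorithm; `qrm_objective` is a name we invented) compares the average, worst-case, and alpha-quantile of per-domain risks. Quantile risk minimization asks the predictor to do well on a large fraction of domains rather than on every one.

```python
import numpy as np

def qrm_objective(domain_risks, alpha=0.75):
    """alpha-quantile of per-domain risks: the risk level the predictor
    must beat on an alpha fraction of training domains."""
    return float(np.quantile(np.asarray(domain_risks, dtype=float), alpha))

risks = [0.10, 0.12, 0.11, 0.95, 0.13]   # one hard outlier domain
print(np.mean(risks))                    # average risk (ERM-style): ~0.28
print(np.max(risks))                     # worst-case risk (robust DG): 0.95
print(qrm_objective(risks, alpha=0.75))  # 0.75-quantile: 0.13, insensitive to the outlier
```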

Efficient risk-averse reinforcement learning

I Greenberg, Y Chow, et al. - Advances in Neural Information Processing Systems, 2022 - proceedings.neurips.cc
In risk-averse reinforcement learning (RL), the goal is to optimize some risk measure of the
returns. A risk measure often focuses on the worst returns out of the agent's experience. As a …
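
As background for "focusing on the worst returns", here is a sketch of the classic CVaR policy-gradient weighting (in the style of Tamar et al.), a baseline that this line of work builds on and improves; it is not this paper's method, and `cvar_pg_weights` is our own name. Only the worst alpha-fraction of episodes receive nonzero weight; multiplying each episode's REINFORCE term by its weight gives a gradient estimate of the lower-tail CVaR of returns.

```python
import numpy as np

def cvar_pg_weights(returns, alpha=0.1):
    """Per-episode weights for a CVaR policy gradient (lower tail):
    episodes above the empirical alpha-VaR get weight 0; the worst
    alpha-fraction get weight (return - VaR) / alpha."""
    returns = np.asarray(returns, dtype=float)
    var = np.quantile(returns, alpha)               # empirical alpha-VaR
    w = np.where(returns <= var, returns - var, 0.0)
    return w / alpha

returns = np.array([3.0, 5.0, -2.0, 4.0, -7.0, 6.0, 1.0, 2.0, 0.0, 5.5])
print(cvar_pg_weights(returns, alpha=0.2))  # nonzero only for the two worst episodes
```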

Probabilistically robust learning: Balancing average and worst-case performance

A Robey, L Chamon, GJ Pappas, et al. - International Conference on Machine Learning, 2022 - proceedings.mlr.press
Many of the successes of machine learning are based on minimizing an averaged loss
function. However, it is well-known that this paradigm suffers from robustness issues that …

Risk-averse offline reinforcement learning

NA Urpí, S Curi, A Krause - arXiv preprint arXiv:2102.05371, 2021 - arxiv.org
Training Reinforcement Learning (RL) agents in high-stakes applications may be
prohibitive due to the risk associated with exploration. Thus, the agent can only use data …

Rank-based decomposable losses in machine learning: A survey

S Hu, X Wang, S Lyu - IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023 - ieeexplore.ieee.org
Recent works have revealed an essential paradigm in loss-function design that
distinguishes individual losses from aggregate losses. The individual loss measures the …
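
The individual-versus-aggregate distinction the survey draws can be illustrated with a rank-weighted aggregate (a minimal sketch under our own naming; the survey covers many more forms): sort the individual losses and take a weighted sum, where different weight vectors recover the average, the maximum, and the average of the top-k losses.

```python
import numpy as np

def rank_aggregate(losses, weights):
    """Rank-based aggregate loss: sort individual losses in decreasing
    order and take a weighted sum. Different weight vectors recover
    common aggregates (average, max, average of top-k, ...)."""
    s = np.sort(np.asarray(losses, dtype=float))[::-1]  # descending order
    return float(np.dot(weights, s))

losses = np.array([0.2, 1.5, 0.4, 0.9])
n = losses.size
print(rank_aggregate(losses, np.full(n, 1 / n)))  # average loss
print(rank_aggregate(losses, np.eye(n)[0]))       # maximum loss
k = 2
w = np.zeros(n)
w[:k] = 1 / k
print(rank_aggregate(losses, w))                  # average of top-2 losses
```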

On tilted losses in machine learning: Theory and applications

T Li, A Beirami, M Sanjabi, V Smith - Journal of Machine Learning Research, 2023 - jmlr.org
Exponential tilting is a technique commonly used in fields such as statistics, probability,
information theory, and optimization to create parametric distribution shifts. Despite its …
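
The t-tilted empirical risk studied in this line of work is (1/t) log(mean(exp(t * loss))); below is a minimal NumPy sketch of that formula with a stable log-sum-exp (the helper name `tilted_loss` is ours). As t -> 0 it recovers the average loss, large positive t approaches the max, and negative t downweights outliers.

```python
import numpy as np

def tilted_loss(losses, t):
    """Tilted empirical risk: (1/t) * log(mean(exp(t * loss))),
    computed via a numerically stable log-sum-exp (t must be nonzero)."""
    losses = np.asarray(losses, dtype=float)
    z = t * losses
    m = z.max()
    return float((m + np.log(np.mean(np.exp(z - m)))) / t)

losses = np.array([0.1, 0.2, 0.15, 3.0])  # one outlier loss
print(np.mean(losses))            # plain average: 0.8625
print(tilted_loss(losses, 5.0))   # tilts toward the worst loss (~2.7)
print(tilted_loss(losses, -5.0))  # tilts toward the best losses (~0.2)
```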

Off-policy risk assessment in contextual bandits

A Huang, L Leqi, Z Lipton, et al. - Advances in Neural Information Processing Systems, 2021 - proceedings.neurips.cc
Even when unable to run experiments, practitioners can evaluate prospective policies using
previously logged data. However, while the bandits literature has adopted a diverse set of …
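
To illustrate the basic mechanics of off-policy risk assessment, here is a generic importance-sampling sketch (not the paper's estimators; the function and the `target_probs`/`logging_probs` placeholders are our own): reweight logged rewards by the target-to-logging probability ratio of the taken action, then read a tail statistic off the weighted reward distribution.

```python
import numpy as np

def offpolicy_tail_mean(rewards, target_probs, logging_probs, alpha=0.25):
    """Importance-weighted estimate of the mean of the worst alpha-fraction
    of rewards under the target policy, from logged bandit data."""
    w = np.asarray(target_probs, float) / np.asarray(logging_probs, float)
    order = np.argsort(rewards)                  # worst rewards first
    r, w = np.asarray(rewards, float)[order], w[order]
    cum = np.cumsum(w) / w.sum()                 # weighted CDF over rewards
    tail = cum <= alpha                          # worst alpha-fraction
    if not tail.any():
        tail[0] = True                           # keep at least one sample
    return float(np.average(r[tail], weights=w[tail]))

rng = np.random.default_rng(1)
rewards = rng.normal(size=500)
logging_probs = np.full(500, 0.5)                # behavior policy's action probs
target_probs = rng.uniform(0.2, 0.8, size=500)   # hypothetical target policy probs
print(offpolicy_tail_mean(rewards, target_probs, logging_probs))
```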

A superquantile approach to federated learning with heterogeneous devices

Y Laguel, K Pillutla, J Malick, et al. - 55th Annual Conference on Information Sciences and Systems (CISS), 2021 - ieeexplore.ieee.org
We present a federated learning framework that allows one to handle heterogeneous client
devices that may not conform to the population data distribution. The proposed approach …
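
In the spirit of the superquantile objective (the superquantile is another name for CVaR), the toy sketch below reweights federated aggregation toward the hardest clients; it is not the paper's algorithm, and the name `superquantile_client_weights` and the `theta` parameterization are our own.

```python
import numpy as np

def superquantile_client_weights(client_losses, theta=0.5):
    """Uniform aggregation weight on clients whose loss lies in the top
    (1 - theta) tail of the per-client loss distribution, zero elsewhere."""
    losses = np.asarray(client_losses, dtype=float)
    q = np.quantile(losses, theta)       # theta-quantile of client losses
    w = (losses >= q).astype(float)      # keep only the tail clients
    return w / w.sum()

client_losses = [0.3, 0.9, 0.2, 1.4, 0.5]
print(superquantile_client_weights(client_losses, theta=0.6))
# -> [0. 0.5 0. 0.5 0.]: only the two hardest clients drive the update
```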