Large-scale methods for distributionally robust optimization
We propose and analyze algorithms for distributionally robust optimization of convex losses
with conditional value at risk (CVaR) and $\chi^2$ divergence uncertainty sets. We prove …
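As a rough illustration of the CVaR objective named in this abstract (not the authors' large-scale algorithm), the empirical CVaR at level alpha is, up to a boundary term, the average of the worst alpha-fraction of per-example losses; the losses below are random placeholders.

```python
import numpy as np

def cvar(losses, alpha):
    """Empirical CVaR at level alpha: (approximately) the mean of the
    worst alpha-fraction of per-example losses under uniform weights."""
    worst_first = np.sort(losses)[::-1]               # worst losses first
    k = max(1, int(np.ceil(alpha * len(worst_first))))
    return worst_first[:k].mean()

losses = np.random.rand(1000)      # placeholder per-example losses
print(cvar(losses, alpha=0.1))     # average of the worst 10%
```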
Correct-n-contrast: A contrastive approach for improving robustness to spurious correlations
Spurious correlations pose a major challenge for robust machine learning. Models trained
with empirical risk minimization (ERM) may learn to rely on correlations between class …
Probable domain generalization via quantile risk minimization
Domain generalization (DG) seeks predictors which perform well on unseen test
distributions by leveraging data drawn from multiple related training distributions or …
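A minimal sketch of the quantile-of-risk idea behind this title (my paraphrase, not the paper's estimator): compute the empirical risk in each training domain and evaluate the alpha-quantile of those per-domain risks, so the objective asks for good performance on at least an alpha-fraction of domains. The per-domain losses below are synthetic.

```python
import numpy as np

def quantile_risk(per_domain_losses, alpha=0.9):
    """alpha-quantile of per-domain empirical risks: small iff the model
    does well on at least an alpha-fraction of training domains."""
    risks = np.array([np.mean(l) for l in per_domain_losses])
    return np.quantile(risks, alpha)

# toy per-domain losses with different scales (hypothetical data)
domains = [np.random.rand(200) * s for s in (0.5, 1.0, 2.0)]
print(quantile_risk(domains, alpha=0.9))
```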
Efficient risk-averse reinforcement learning
In risk-averse reinforcement learning (RL), the goal is to optimize some risk measure of the
returns. A risk measure often focuses on the worst returns out of the agent's experience. As a …
Probabilistically robust learning: Balancing average and worst-case performance
Many of the successes of machine learning are based on minimizing an averaged loss
function. However, it is well-known that this paradigm suffers from robustness issues that …
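One way to read "balancing average and worst-case" (a hedged sketch, not the paper's exact algorithm): rather than the mean loss over perturbations (average case) or the maximum (adversarial), penalize a high quantile of the loss over randomly sampled perturbations; `eps`, `q`, and the quadratic toy loss are placeholders.

```python
import numpy as np

def prob_robust_loss(loss_fn, x, n_samples=100, eps=0.1, q=0.95):
    """q-quantile of the loss over random perturbations of x: sits between
    the average perturbed loss and the maximum (adversarial) loss."""
    noise = np.random.uniform(-eps, eps, size=(n_samples,) + x.shape)
    losses = np.array([loss_fn(x + d) for d in noise])
    return np.quantile(losses, q)

# toy example: quadratic loss around the origin
print(prob_robust_loss(lambda z: float(np.sum(z ** 2)), np.zeros(3)))
```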
Risk-averse offline reinforcement learning
Training Reinforcement Learning (RL) agents in high-stakes applications might be
prohibitive due to the risk associated with exploration. Thus, the agent can only use data …
Rank-based decomposable losses in machine learning: A survey
Recent works have revealed an essential paradigm in designing loss functions that
differentiates individual losses from aggregate losses. The individual loss measures the …
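To make the individual-versus-aggregate distinction concrete (a sketch with made-up data, not drawn from the survey itself): the average, the maximum, and the average top-k loss are all aggregate losses built from the same individual per-example losses.

```python
import numpy as np

losses = np.random.rand(100)       # individual per-example losses

avg = losses.mean()                        # average loss (ERM)
worst = losses.max()                       # maximum loss (worst case)
k = 10
avg_top_k = np.sort(losses)[-k:].mean()    # average top-k: a rank-based
                                           # aggregate between mean and max
print(avg, avg_top_k, worst)
```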
On tilted losses in machine learning: Theory and applications
Exponential tilting is a technique commonly used in fields such as statistics, probability,
information theory, and optimization to create parametric distribution shifts. Despite its …
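The exponentially tilted aggregate loss from this line of work is simple to state (a sketch with placeholder losses): for tilt parameter t it is (1/t) log mean exp(t * loss_i), which recovers the mean as t approaches 0 and the maximum as t grows.

```python
import numpy as np
from scipy.special import logsumexp

def tilted_loss(losses, t):
    """Tilted aggregate loss (1/t) * log(mean(exp(t * losses))).
    t -> 0 recovers the mean, t -> +inf approaches the max;
    logsumexp keeps the exponentials numerically stable."""
    losses = np.asarray(losses)
    return (logsumexp(t * losses) - np.log(len(losses))) / t

losses = np.random.rand(1000)     # placeholder per-example losses
for t in (0.1, 1.0, 10.0):
    print(t, tilted_loss(losses, t))
```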
Off-policy risk assessment in contextual bandits
Even when unable to run experiments, practitioners can evaluate prospective policies using
previously logged data. However, while the bandits literature has adopted a diverse set of …
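A minimal sketch of how logged bandit data can be reweighted to assess a new policy (classical inverse-propensity scoring for the expected reward; the paper itself targets a broader set of risk measures): weight each logged reward by the ratio of target to logging action probabilities.

```python
import numpy as np

def ips_value(rewards, logging_probs, target_probs):
    """Inverse-propensity-scored (IPS) estimate of a target policy's
    expected reward from data logged under a different policy."""
    weights = target_probs / logging_probs
    return np.mean(weights * rewards)

# hypothetical logged data: 2 actions, uniform logging policy
n = 1000
actions = np.random.randint(2, size=n)
rewards = np.where(actions == 1, 0.8, 0.2) + 0.1 * np.random.randn(n)
logging_probs = np.full(n, 0.5)
target_probs = np.where(actions == 1, 0.9, 0.1)  # target favors action 1
print(ips_value(rewards, logging_probs, target_probs))
```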
A superquantile approach to federated learning with heterogeneous devices
We present a federated learning framework that allows one to handle heterogeneous client
devices that may not conform to the population data distribution. The proposed approach …
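A hedged sketch of the superquantile idea at the server (the superquantile is another name for CVaR, here applied across clients rather than examples; the actual framework's details differ): instead of averaging all client losses, the objective tracks the worst-off fraction of clients.

```python
import numpy as np

def superquantile(client_losses, theta=0.8):
    """Superquantile (CVaR) of per-client average losses: the mean loss of
    the worst (1 - theta)-fraction of clients, so non-conforming clients
    are not washed out by the population average."""
    worst_first = np.sort(np.asarray(client_losses))[::-1]
    k = max(1, int(np.ceil((1 - theta) * len(worst_first))))
    return worst_first[:k].mean()

client_losses = np.random.rand(50)              # toy per-client losses
print(superquantile(client_losses, theta=0.8))  # worst 20% of clients
```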