Decision-focused learning: Foundations, state of the art, benchmark and future opportunities

J Mandi, J Kotary, S Berden, M Mulamba… - Journal of Artificial …, 2024 - jair.org
Decision-focused learning (DFL) is an emerging paradigm that integrates machine learning
(ML) and constrained optimization to enhance decision quality by training ML models in an …

Distributionally robust optimization: A review on theory and applications

F Lin, X Fang, Z Gao - Numerical Algebra, Control and Optimization, 2022 - aimsciences.org
In this paper, we survey the primary research on the theory and applications of
distributionally robust optimization (DRO). We start with reviewing the modeling power and …

Towards out-of-distribution generalization: A survey

J Liu, Z Shen, Y He, X Zhang, R Xu, H Yu… - arXiv preprint arXiv …, 2021 - arxiv.org
Traditional machine learning paradigms are based on the assumption that both training and
test data follow the same statistical pattern, which is mathematically referred to as …

Discover and cure: Concept-aware mitigation of spurious correlation

S Wu, M Yuksekgonul, L Zhang… - … Conference on Machine …, 2023 - proceedings.mlr.press
Deep neural networks often rely on spurious correlations to make predictions, which hinders
generalization beyond training environments. For instance, models that associate cats with …

On the need for a language describing distribution shifts: Illustrations on tabular datasets

J Liu, T Wang, P Cui… - Advances in Neural …, 2024 - proceedings.neurips.cc
Different distribution shifts require different algorithmic and operational interventions.
Methodological research must be grounded in the specific shifts it addresses. Although …

No subclass left behind: Fine-grained robustness in coarse-grained classification problems

N Sohoni, J Dunnmon, G Angus… - Advances in Neural …, 2020 - proceedings.neurips.cc
In real-world classification tasks, each class often comprises multiple finer-grained
"subclasses." As the subclass labels are frequently unavailable, models trained using only …

DoG is SGD's best friend: A parameter-free dynamic step size schedule

M Ivgi, O Hinder, Y Carmon - International Conference on …, 2023 - proceedings.mlr.press
We propose a tuning-free dynamic SGD step size formula, which we call Distance over
Gradients (DoG). The DoG step sizes depend on simple empirical quantities (distance from …
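The snippet above describes the DoG step size as a ratio of simple empirical quantities. A minimal sketch of that ratio, as commonly stated for DoG (maximum distance of the iterates from the starting point, divided by the root of the accumulated squared gradient norms), is shown below; the function name, the `eps` safeguard, and the argument layout are illustrative, not taken from the paper's reference implementation.

```python
import numpy as np

def dog_step_size(x0, iterates, grads, eps=1e-8):
    """Illustrative Distance-over-Gradients step size:
    eta_t = max_{i<=t} ||x_i - x_0|| / sqrt(sum_{i<=t} ||g_i||^2).

    x0:       starting point (ndarray)
    iterates: list of iterates x_1..x_t seen so far
    grads:    list of gradients g_1..g_t seen so far
    eps:      small constant to avoid division by zero (assumption)
    """
    # Largest distance traveled from the initial point so far.
    r_bar = max(np.linalg.norm(x - x0) for x in iterates)
    # Running sum of squared gradient norms.
    g_sum = sum(np.linalg.norm(g) ** 2 for g in grads)
    return r_bar / np.sqrt(g_sum + eps)
```

Because both quantities are observed during the run, no learning-rate tuning is needed; the schedule adapts as the distance and gradient sums grow.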

Correct-n-contrast: A contrastive approach for improving robustness to spurious correlations

M Zhang, NS Sohoni, HR Zhang, C Finn… - arXiv preprint arXiv …, 2022 - arxiv.org
Spurious correlations pose a major challenge for robust machine learning. Models trained
with empirical risk minimization (ERM) may learn to rely on correlations between class …

Spurious correlations in machine learning: A survey

W Ye, G Zheng, X Cao, Y Ma, A Zhang - arXiv preprint arXiv:2402.12715, 2024 - arxiv.org
Machine learning systems are known to be sensitive to spurious correlations between non-
essential features of the inputs (e.g., background, texture, and secondary objects) and the …

Preserving fairness generalization in deepfake detection

L Lin, X He, Y Ju, X Wang, F Ding… - Proceedings of the …, 2024 - openaccess.thecvf.com
Although effective deepfake detection models have been developed in recent years, recent
studies have revealed that these models can result in unfair performance disparities among …