Towards last-layer retraining for group robustness with fewer annotations

T LaBonte, V Muthukumar… - Advances in Neural …, 2023 - proceedings.neurips.cc
Empirical risk minimization (ERM) of neural networks is prone to over-reliance on spurious
correlations and poor generalization on minority groups. The recent deep feature …

Efficient Bias Mitigation Without Privileged Information

M Espinosa Zarlenga, S Sankaranarayanan… - … on Computer Vision, 2024 - Springer
Deep neural networks trained via empirical risk minimization often exhibit significant
performance disparities across groups, particularly when group and task labels are …

Do humans and machines have the same eyes? Human-machine perceptual differences on image classification

M Liu, J Wei, Y Liu, J Davis - arXiv preprint arXiv:2304.08733, 2023 - arxiv.org
Trained computer vision models are assumed to solve vision tasks by imitating human
behavior learned from training labels. Most efforts in recent vision research focus on …

Amend to alignment: decoupled prompt tuning for mitigating spurious correlation in vision-language models

J Zhang, X Ma, S Guo, P Li, W Xu, X Tang… - Forty-first International …, 2024 - openreview.net
Fine-tuning the learnable prompt for a pre-trained vision-language model (VLM), such as
CLIP, has demonstrated exceptional efficiency in adapting to a broad range of downstream …