Scissorhands: Scrub data influence via connection sensitivity in networks

J Wu, M Harandi - European Conference on Computer Vision, 2024 - Springer
Machine unlearning has become a pivotal task to erase the influence of data from a
trained model. It adheres to recent data regulation standards and enhances the privacy and …

Benchmark suites instead of leaderboards for evaluating AI fairness

A Wang, A Hertzmann, O Russakovsky - Patterns, 2024 - cell.com
Benchmarks and leaderboards are commonly used to track the fairness impacts of artificial
intelligence (AI) models. Many critics argue against this practice, since it incentivizes …

Evaluating fairness in large vision-language models across diverse demographic attributes and prompts

X Wu, Y Wang, HT Wu, Z Tao, Y Fang - arXiv preprint arXiv:2406.17974, 2024 - arxiv.org
Large vision-language models (LVLMs) have recently achieved significant progress,
demonstrating strong capabilities in open-world visual understanding. However, it is not yet …

FairDeDup: Detecting and Mitigating Vision-Language Fairness Disparities in Semantic Dataset Deduplication

E Slyman, S Lee, S Cohen… - Proceedings of the IEEE …, 2024 - openaccess.thecvf.com
Recent dataset deduplication techniques have demonstrated that content-aware dataset
pruning can dramatically reduce the cost of training Vision-Language Pretrained (VLP) …

Concealing Sensitive Samples against Gradient Leakage in Federated Learning

J Wu, M Hayat, M Zhou, M Harandi - … of the AAAI Conference on Artificial …, 2024 - ojs.aaai.org
Federated Learning (FL) is a distributed learning paradigm that enhances users' privacy by
eliminating the need for clients to share raw, private data with the server. Despite the …

Distributionally Generative Augmentation for Fair Facial Attribute Classification

F Zhang, Q He, K Kuang, J Liu… - Proceedings of the …, 2024 - openaccess.thecvf.com
Facial Attribute Classification (FAC) holds substantial promise in widespread
applications. However, FAC models trained by traditional methodologies can be unfair by …

FADES: Fair Disentanglement with Sensitive Relevance

T Jang, X Wang - Proceedings of the IEEE/CVF Conference …, 2024 - openaccess.thecvf.com
Learning fair representation in deep learning is essential to mitigate discriminatory
outcomes and enhance trustworthiness. However, previous research has been commonly …

Hairmony: Fairness-aware hairstyle classification

G Meishvili, J Clemoes, C Hewitt, Z Hosenie… - SIGGRAPH Asia 2024 …, 2024 - dl.acm.org
We present a method for prediction of a person's hairstyle from a single image. Despite
growing use cases in user digitization and enrollment for virtual experiences, available …

Beyond the surface: a global-scale analysis of visual stereotypes in text-to-image generation

A Jha, V Prabhakaran, R Denton, S Laszlo… - arXiv preprint arXiv …, 2024 - arxiv.org
Recent studies have highlighted the issue of stereotypical depictions for people of different
identity groups in Text-to-Image (T2I) model generations. However, these existing …

Fairness in Autonomous Driving: Towards Understanding Confounding Factors in Object Detection under Challenging Weather

B Pathiraja, C Liu, R Senanayake - arXiv preprint arXiv:2406.00219, 2024 - arxiv.org
The deployment of autonomous vehicles (AVs) is rapidly expanding to numerous cities. At
the heart of AVs, the object detection module assumes a paramount role, directly influencing …