Scissorhands: Scrub data influence via connection sensitivity in networks
Machine unlearning has become a pivotal task to erase the influence of data from a
trained model. It adheres to recent data regulation standards and enhances the privacy and …
Benchmark suites instead of leaderboards for evaluating AI fairness
Benchmarks and leaderboards are commonly used to track the fairness impacts of artificial
intelligence (AI) models. Many critics argue against this practice, since it incentivizes …
Evaluating fairness in large vision-language models across diverse demographic attributes and prompts
Large vision-language models (LVLMs) have recently achieved significant progress,
demonstrating strong capabilities in open-world visual understanding. However, it is not yet …
FairDeDup: Detecting and Mitigating Vision-Language Fairness Disparities in Semantic Dataset Deduplication
Recent dataset deduplication techniques have demonstrated that content-aware dataset
pruning can dramatically reduce the cost of training Vision-Language Pretrained (VLP) …
Concealing Sensitive Samples against Gradient Leakage in Federated Learning
Federated Learning (FL) is a distributed learning paradigm that enhances users' privacy by
eliminating the need for clients to share raw, private data with the server. Despite the …
Distributionally Generative Augmentation for Fair Facial Attribute Classification
Facial Attribute Classification (FAC) holds substantial promise in widespread
applications. However, FAC models trained by traditional methodologies can be unfair by …
FADES: Fair Disentanglement with Sensitive Relevance
Learning fair representation in deep learning is essential to mitigate discriminatory
outcomes and enhance trustworthiness. However, previous research has been commonly …
Hairmony: Fairness-aware hairstyle classification
We present a method for predicting a person's hairstyle from a single image. Despite
growing use cases in user digitization and enrollment for virtual experiences, available …
Beyond the surface: a global-scale analysis of visual stereotypes in text-to-image generation
Recent studies have highlighted the issue of stereotypical depictions for people of different
identity groups in Text-to-Image (T2I) model generations. However, these existing …
Fairness in Autonomous Driving: Towards Understanding Confounding Factors in Object Detection under Challenging Weather
The deployment of autonomous vehicles (AVs) is rapidly expanding to numerous cities. At
the heart of AVs, the object detection module assumes a paramount role, directly influencing …