Practical membership inference attacks against large-scale multi-modal models: A pilot study
Membership inference attacks (MIAs) aim to infer whether a data point has been used to
train a machine learning model. These attacks can be employed to identify potential privacy …
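The entry above defines membership inference at a high level. As a purely illustrative aid (not the method of the cited paper), the sketch below shows the simplest loss-threshold formulation: examples on which the target model has unusually low loss are flagged as likely training members. The helper `loss_threshold_mia` and the threshold `tau` are hypothetical; in practice the threshold would be calibrated, e.g., on shadow models.

```python
# Minimal loss-threshold membership inference sketch (illustration only).
# Assumes per-example losses from the target model are already available;
# `tau` is a hypothetical, pre-calibrated threshold.
import numpy as np

def loss_threshold_mia(losses: np.ndarray, tau: float) -> np.ndarray:
    """Predict membership: examples with loss below tau are flagged as training members."""
    return losses < tau

# Toy usage: training members typically incur lower loss than non-members.
member_losses = np.array([0.05, 0.10, 0.20])
nonmember_losses = np.array([0.90, 1.40, 2.10])
tau = 0.5
print(loss_threshold_mia(member_losses, tau))     # [ True  True  True]
print(loss_threshold_mia(nonmember_losses, tau))  # [False False False]
```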
A survey on membership inference attacks and defenses in Machine Learning
Membership inference (MI) attacks mainly aim to infer whether a data record was used to
train a target model or not. Due to the serious privacy risks, MI attacks have been attracting a …
Source inference attacks: Beyond membership inference attacks in federated learning
Federated learning (FL) is a popular approach to facilitate privacy-aware machine learning
since it allows multiple clients to collaboratively train a global model without granting others …
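For readers unfamiliar with the federated setting described in this entry, the following sketch shows a FedAvg-style aggregation step under toy assumptions: clients train locally and share only parameter updates, which the server averages weighted by local dataset size. The function `fed_avg` and the example numbers are placeholders, not taken from the cited paper.

```python
# Minimal FedAvg-style aggregation sketch (illustration only).
# Clients keep data local; the server receives parameter vectors and sample counts.
import numpy as np

def fed_avg(client_weights: list[np.ndarray], client_sizes: list[int]) -> np.ndarray:
    """Weighted average of client parameter vectors, weighted by local dataset size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Toy usage: three clients contribute updates of different importance.
updates = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
sizes = [10, 30, 60]
print(fed_avg(updates, sizes))  # [4. 5.]
```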
Privacy-aware document visual question answering
Abstract Document Visual Question Answering (DocVQA) has quickly grown into a central
task of document understanding. But despite the fact that documents contain sensitive or …
A unified membership inference method for visual self-supervised encoder via part-aware capability
Self-supervised learning shows promise in harnessing extensive unlabeled data, but it also
confronts significant privacy concerns, especially in vision. In this paper, we aim to perform …
Extracting training data from document-based VQA models
Vision-Language Models (VLMs) have made remarkable progress in document-based
Visual Question Answering (i.e., responding to queries about the contents of an input …
Safety and Performance, Why Not Both? Bi-Objective Optimized Model Compression against Heterogeneous Attacks Toward AI Software Deployment
The size of deep learning models in artificial intelligence (AI) software is increasing rapidly,
hindering large-scale deployment on resource-restricted devices (e.g., smartphones). To …
Quantifying privacy risks of prompts in visual prompt learning
Large-scale pre-trained models are increasingly adapted to downstream tasks through a
new paradigm called prompt learning. In contrast to fine-tuning, prompt learning does not …
RAI4IoE: Responsible AI for enabling the Internet of Energy
This paper plans to develop an Equitable and Responsible AI framework with enabling
techniques and algorithms for the Internet of Energy (IoE), in short, RAI4IoE. The energy …
Membership Inference Attacks against Large Vision-Language Models
Large vision-language models (VLLMs) exhibit promising capabilities for processing
multi-modal tasks across various application scenarios. However, their emergence also raises …