Practical membership inference attacks against large-scale multi-modal models: A pilot study

M Ko, M Jin, C Wang, R Jia - Proceedings of the IEEE/CVF …, 2023 - openaccess.thecvf.com
Membership inference attacks (MIAs) aim to infer whether a data point has been used to
train a machine learning model. These attacks can be employed to identify potential privacy …
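The core idea shared by many of the attacks surveyed below is that a model tends to fit its training members better than unseen points, so a simple loss threshold can separate the two. A minimal sketch of that loss-threshold formulation (the threshold of 0.5 and the toy loss values are illustrative assumptions, not the method of any single paper):

```python
import numpy as np

def loss_threshold_mia(losses: np.ndarray, threshold: float) -> np.ndarray:
    """Predict membership: a point whose loss falls below the
    threshold is flagged as a likely training-set member."""
    return losses < threshold

# Toy losses: members are typically fit better (lower loss).
member_losses = np.array([0.05, 0.10, 0.20])
nonmember_losses = np.array([0.90, 1.30, 2.10])

preds_members = loss_threshold_mia(member_losses, threshold=0.5)
preds_nonmembers = loss_threshold_mia(nonmember_losses, threshold=0.5)
```

In practice the threshold is calibrated on shadow models or held-out data rather than fixed by hand, but the decision rule above is the common skeleton.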

A survey on membership inference attacks and defenses in Machine Learning

J Niu, P Liu, X Zhu, K Shen, Y Wang, H Chi… - Journal of Information …, 2024 - Elsevier
Membership inference (MI) attacks mainly aim to infer whether a data record was used to
train a target model or not. Due to the serious privacy risks, MI attacks have been attracting a …

Source inference attacks: Beyond membership inference attacks in federated learning

H Hu, X Zhang, Z Salcic, L Sun… - … on Dependable and …, 2023 - ieeexplore.ieee.org
Federated learning (FL) is a popular approach to facilitate privacy-aware machine learning
since it allows multiple clients to collaboratively train a global model without granting others …

Privacy-aware document visual question answering

R Tito, K Nguyen, M Tobaben, R Kerkouche… - … on Document Analysis …, 2024 - Springer
Document Visual Question Answering (DocVQA) has quickly grown into a central
task of document understanding. But despite the fact that documents contain sensitive or …

A unified membership inference method for visual self-supervised encoder via part-aware capability

J Zhu, J Zha, D Li, L Wang - Proceedings of the 2024 on ACM SIGSAC …, 2024 - dl.acm.org
Self-supervised learning shows promise in harnessing extensive unlabeled data, but it also
confronts significant privacy concerns, especially in vision. In this paper, we aim to perform …

Extracting training data from document-based VQA models

F Pinto, N Rauschmayr, F Tramèr, P Torr… - arXiv preprint arXiv …, 2024 - arxiv.org
Vision-Language Models (VLMs) have made remarkable progress in document-based
Visual Question Answering (i.e., responding to queries about the contents of an input …

Safety and Performance, Why Not Both? Bi-Objective Optimized Model Compression against Heterogeneous Attacks Toward AI Software Deployment

J Zhu, L Wang, X Han, A Liu… - IEEE Transactions on …, 2024 - ieeexplore.ieee.org
The size of deep learning models in artificial intelligence (AI) software is increasing rapidly,
hindering the large-scale deployment on resource-restricted devices (e.g., smartphones). To …

Quantifying privacy risks of prompts in visual prompt learning

Y Wu, R Wen, M Backes, P Berrang, M Humbert… - 2024 - usenix.org
Large-scale pre-trained models are increasingly adapted to downstream tasks through a
new paradigm called prompt learning. In contrast to fine-tuning, prompt learning does not …

RAI4IoE: Responsible AI for enabling the Internet of Energy

M Xue, S Nepal, L Liu… - 2023 5th IEEE …, 2023 - ieeexplore.ieee.org
This paper plans to develop an Equitable and Responsible AI framework with enabling
techniques and algorithms for the Internet of Energy (IoE), in short, RAI4IoE. The energy …

Membership Inference Attacks against Large Vision-Language Models

Z Li, Y Wu, Y Chen, F Tonin, EA Rocamora… - arXiv preprint arXiv …, 2024 - arxiv.org
Large vision-language models (VLLMs) exhibit promising capabilities for processing multi-modal tasks across various application scenarios. However, their emergence also raises …