On the privacy and security for e-health services in the metaverse: An overview

M Letafati, S Otoum - Ad Hoc Networks, 2023 - Elsevier
Metaverse-enabled healthcare systems are expected to efficiently utilize an unprecedented
amount of health-related data without disclosing sensitive or private information of …

Glaze: Protecting artists from style mimicry by text-to-image models

S Shan, J Cryan, E Wenger, H Zheng… - 32nd USENIX Security …, 2023 - usenix.org
Recent text-to-image diffusion models such as MidJourney and Stable Diffusion threaten to
displace many in the professional artist community. In particular, models can learn to mimic …

Anti-dreambooth: Protecting users from personalized text-to-image synthesis

T Van Le, H Phung, TH Nguyen… - Proceedings of the …, 2023 - openaccess.thecvf.com
Text-to-image diffusion models are nothing but a revolution, allowing anyone, even without
design skills, to create realistic images from simple text inputs. With powerful personalization …

Visual content privacy protection: A survey

R Zhao, Y Zhang, T Wang, W Wen, Y Xiang… - ACM Computing …, 2025 - dl.acm.org
Vision is the most important sense for people, and it is also one of the main ways of
cognition. As a result, people tend to utilize visual content to capture and share their life …

Nightshade: Prompt-specific poisoning attacks on text-to-image generative models

S Shan, W Ding, J Passananti, S Wu… - … IEEE Symposium on …, 2024 - ieeexplore.ieee.org
Trained on billions of images, diffusion-based text-to-image models seem impervious to
traditional data poisoning attacks, which typically require poison samples approaching 20 …

Protecting facial privacy: Generating adversarial identity masks via style-robust makeup transfer

S Hu, X Liu, Y Zhang, M Li… - Proceedings of the …, 2022 - openaccess.thecvf.com
While deep face recognition (FR) systems have shown amazing performance in
identification and verification, they also arouse privacy concerns for their excessive …

Dataset security for machine learning: Data poisoning, backdoor attacks, and defenses

M Goldblum, D Tsipras, C Xie, X Chen… - … on Pattern Analysis …, 2022 - ieeexplore.ieee.org
As machine learning systems grow in scale, so do their training data requirements, forcing
practitioners to automate and outsource the curation of training data in order to achieve state …

Adversarial examples make strong poisons

L Fowl, M Goldblum, P Chiang… - Advances in …, 2021 - proceedings.neurips.cc
The adversarial machine learning literature is largely partitioned into evasion attacks on
testing data and poisoning attacks on training data. In this work, we show that adversarial …

Impress: Evaluating the resilience of imperceptible perturbations against unauthorized data usage in diffusion-based generative AI

B Cao, C Li, T Wang, J Jia, B Li… - Advances in Neural …, 2023 - proceedings.neurips.cc
Diffusion-based image generation models, such as Stable Diffusion or DALL·E 2, are able
to learn from given images and generate high-quality samples following the guidance from …

On success and simplicity: A second look at transferable targeted attacks

Z Zhao, Z Liu, M Larson - Advances in Neural Information …, 2021 - proceedings.neurips.cc
Achieving transferability of targeted attacks is reputed to be remarkably difficult. The current
state of the art has resorted to resource-intensive solutions that necessitate training model …