On the privacy and security for e-health services in the metaverse: An overview
Metaverse-enabled healthcare systems are expected to efficiently utilize an unprecedented
amount of health-related data without disclosing sensitive or private information of …
Glaze: Protecting artists from style mimicry by text-to-image models
Recent text-to-image diffusion models such as MidJourney and Stable Diffusion threaten to
displace many in the professional artist community. In particular, models can learn to mimic …
Anti-DreamBooth: Protecting users from personalized text-to-image synthesis
Text-to-image diffusion models are nothing but a revolution, allowing anyone, even without
design skills, to create realistic images from simple text inputs. With powerful personalization …
Visual content privacy protection: A survey
Vision is the most important sense for people, and it is also one of the main ways of
cognition. As a result, people tend to utilize visual content to capture and share their life …
Nightshade: Prompt-specific poisoning attacks on text-to-image generative models
Trained on billions of images, diffusion-based text-to-image models seem impervious to
traditional data poisoning attacks, which typically require poison samples approaching 20 …
Protecting facial privacy: Generating adversarial identity masks via style-robust makeup transfer
While deep face recognition (FR) systems have shown amazing performance in
identification and verification, they also arouse privacy concerns for their excessive …
Dataset security for machine learning: Data poisoning, backdoor attacks, and defenses
As machine learning systems grow in scale, so do their training data requirements, forcing
practitioners to automate and outsource the curation of training data in order to achieve state …
Adversarial examples make strong poisons
The adversarial machine learning literature is largely partitioned into evasion attacks on
testing data and poisoning attacks on training data. In this work, we show that adversarial …
IMPRESS: Evaluating the resilience of imperceptible perturbations against unauthorized data usage in diffusion-based generative AI
Diffusion-based image generation models, such as Stable Diffusion or DALL·E 2, are able
to learn from given images and generate high-quality samples following the guidance from …
On success and simplicity: A second look at transferable targeted attacks
Achieving transferability of targeted attacks is reputed to be remarkably difficult. The current
state of the art has resorted to resource-intensive solutions that necessitate training model …