Distribution preserving backdoor attack in self-supervised learning

G Tao, Z Wang, S Feng, G Shen, S Ma… - 2024 IEEE Symposium …, 2024 - ieeexplore.ieee.org
Self-supervised learning is widely used in various domains for building foundation models. It
has been demonstrated to achieve state-of-the-art performance in a range of tasks. In the …

Prompt Stealing Attacks Against {Text-to-Image} Generation Models

X Shen, Y Qu, M Backes, Y Zhang - 33rd USENIX Security Symposium …, 2024 - usenix.org
Text-to-Image generation models have revolutionized the artwork design process and
enabled anyone to create high-quality images by entering text descriptions called prompts …

Fine-tuning is all you need to mitigate backdoor attacks

Z Sha, X He, P Berrang, M Humbert… - arXiv preprint arXiv …, 2022 - arxiv.org
Backdoor attacks represent one of the major threats to machine learning models. Various
efforts have been made to mitigate backdoors. However, existing defenses have become …

Transtroj: Transferable backdoor attacks to pre-trained models via embedding indistinguishability

H Wang, T Xiang, S Guo, J He, H Liu… - arXiv preprint arXiv …, 2024 - arxiv.org
Pre-trained models (PTMs) are extensively utilized in various downstream tasks. Adopting
untrusted PTMs may suffer from backdoor attacks, where the adversary can compromise the …

Backdoor Attack on Unpaired Medical Image-Text Foundation Models: A Pilot Study on MedCLIP

R Jin, CY Huang, C You, X Li - 2024 IEEE Conference on …, 2024 - ieeexplore.ieee.org
In recent years, foundation models (FMs) have contributed heavily to the advancements in
the deep learning domain. By extracting intricate patterns from vast datasets, these models …

An empirical study of backdoor attacks on masked autoencoders

S Zhuang, P Xia, B Li - ICASSP 2023-2023 IEEE International …, 2023 - ieeexplore.ieee.org
Large-scale unlabeled data has spurred recent progress in self-supervised learning
methods for learning rich visual representations. Masked autoencoders (MAE), a recently …

Model Supply Chain Poisoning: Backdooring Pre-trained Models via Embedding Indistinguishability

H Wang, S Guo, J He, H Liu, T Zhang… - THE WEB CONFERENCE … - openreview.net
Pre-trained models (PTMs) are widely adopted across various downstream tasks in the
machine learning supply chain. Adopting untrustworthy PTMs introduces significant security …

Backdoor Attack on Un-paired Medical Image-Text Pretrained Models: A Pilot Study on MedCLIP

R Jin, CY Huang, C You, X Li - 2nd IEEE Conference on Secure and … - openreview.net
In recent years, foundation models (FMs) have solidified their role as cornerstone
advancements in the deep learning domain. By extracting intricate patterns from vast …