IMPRESS: Evaluating the resilience of imperceptible perturbations against unauthorized data usage in diffusion-based generative AI

B Cao, C Li, T Wang, J Jia, B Li… - Advances in Neural …, 2023 - proceedings.neurips.cc
Diffusion-based image generation models, such as Stable Diffusion or DALL·E 2, are able
to learn from given images and generate high-quality samples following the guidance from …

" Get in Researchers; We're Measuring Reproducibility": A Reproducibility Study of Machine Learning Papers in Tier 1 Security Conferences

D Olszewski, A Lu, C Stillman, K Warren… - Proceedings of the …, 2023 - dl.acm.org
Reproducibility is crucial to the advancement of science; it strengthens confidence in
seemingly contradictory results and expands the boundaries of known discoveries …

Rethinking the invisible protection against unauthorized image usage in stable diffusion

S An, L Yan, S Cheng, G Shen, K Zhang, Q Xu… - 33rd USENIX Security …, 2024 - usenix.org
Advancements in generative AI models like Stable Diffusion, DALL·E 2, and Midjourney
have revolutionized digital creativity, enabling the generation of authentic-looking images …

Challenges and remedies to privacy and security in AIGC: Exploring the potential of privacy computing, blockchain, and beyond

C Chen, Z Wu, Y Lai, W Ou, T Liao, Z Zheng - arXiv preprint arXiv …, 2023 - arxiv.org
Artificial Intelligence Generated Content (AIGC) is one of the latest achievements in AI
development. The content generated by related applications, such as text, images and …

An Analysis of Recent Advances in Deepfake Image Detection in an Evolving Threat Landscape

SM Abdullah, A Cheruvu, S Kanchi, T Chung… - arXiv preprint arXiv …, 2024 - arxiv.org
Deepfake or synthetic images produced using deep generative models pose serious risks to
online platforms. This has triggered several research efforts to accurately detect deepfake …

Model extraction attacks revisited

J Liang, R Pang, C Li, T Wang - Proceedings of the 19th ACM Asia …, 2024 - dl.acm.org
Model extraction (ME) attacks represent one major threat to Machine-Learning-as-a-Service
(MLaaS) platforms by "stealing" the functionality of confidential machine-learning models …

Digital and physical face attacks: Reviewing and one step further

C Kong, S Wang, H Li - APSIPA Transactions on Signal and …, 2022 - nowpublishers.com
With the rapid progress over the past five years, face authentication has become the most
pervasive biometric recognition method. Thanks to the high-accuracy recognition …

GOTCHA: real-time video deepfake detection via challenge-response

G Mittal, C Hegde, N Memon - 2024 IEEE 9th European …, 2024 - ieeexplore.ieee.org
With the rise of AI-enabled Real-Time Deepfakes (RTDFs), the integrity of online video
interactions has become a growing concern. RTDFs have now made it feasible to replace an …

Understanding the (in)security of cross-side face verification systems in mobile apps: a system perspective

X Zhang, H Ye, Z Huang, X Ye, Y Cao… - … IEEE Symposium on …, 2023 - ieeexplore.ieee.org
Face Verification Systems (FVSes) are more and more deployed by real-world mobile
applications (apps) to verify a human's claimed identity. One popular type of FVSes is called …

SoK: Facial deepfake detectors

BM Le, J Kim, S Tariq, K Moore, A Abuadbba… - arXiv preprint arXiv …, 2024 - arxiv.org
Deepfakes have rapidly emerged as a profound and serious threat to society, primarily due
to their ease of creation and dissemination. This situation has triggered an accelerated …