I know what you trained last summer: A survey on stealing machine learning models and defences

D Oliynyk, R Mayer, A Rauber - ACM Computing Surveys, 2023 - dl.acm.org
Machine-Learning-as-a-Service (MLaaS) has become a widespread paradigm, making
even the most complex Machine Learning models available for clients via, e.g., a pay-per …

A survey of privacy attacks in machine learning

M Rigaki, S Garcia - ACM Computing Surveys, 2023 - dl.acm.org
As machine learning becomes more widely used, the need to study its implications in
security and privacy becomes more urgent. Although the body of work in privacy has been …

The false promise of imitating proprietary llms

A Gudibande, E Wallace, C Snell, X Geng, H Liu… - arXiv preprint arXiv …, 2023 - arxiv.org
An emerging method to cheaply improve a weaker language model is to finetune it on
outputs from a stronger model, such as a proprietary system like ChatGPT (e.g., Alpaca, Self …
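The method this snippet describes, fine-tuning a weaker model on a stronger model's outputs, amounts to building a distillation dataset from teacher responses. A minimal sketch of the data-collection step; `query_teacher` is a hypothetical stand-in for the proprietary API call:

```python
# Minimal sketch: collect (prompt, response) pairs from a stronger "teacher"
# model to fine-tune a weaker "student" on. Names are illustrative.
import json

def query_teacher(prompt: str) -> str:
    # Hypothetical: in practice this would call a proprietary, pay-per-query
    # API such as a chat-completion endpoint.
    return "<teacher response for: " + prompt + ">"

prompts = ["Explain model extraction.", "Summarize watermarking defenses."]

with open("imitation_data.jsonl", "w") as f:
    for p in prompts:
        f.write(json.dumps({"prompt": p, "response": query_teacher(p)}) + "\n")
# The resulting pairs feed a standard supervised fine-tuning pipeline
# for the student model.
```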

High accuracy and high fidelity extraction of neural networks

M Jagielski, N Carlini, D Berthelot, A Kurakin… - 29th USENIX security …, 2020 - usenix.org
In a model extraction attack, an adversary steals a copy of a remotely deployed machine
learning model, given oracle prediction access. We taxonomize model extraction attacks …
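The oracle prediction access mentioned here is typically exercised as a query-and-train loop: send inputs, record the victim's outputs, and fit a surrogate to them. A minimal PyTorch sketch, assuming the oracle returns full probability vectors; both architectures are illustrative stand-ins for the remote model:

```python
# Minimal sketch of learning-based model extraction via prediction access.
import torch
import torch.nn as nn

victim = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 5))
surrogate = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 5))
opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)

for step in range(300):
    x = torch.randn(32, 20)                     # adversary-chosen queries
    with torch.no_grad():
        soft_labels = victim(x).softmax(dim=1)  # oracle responses
    loss = nn.functional.kl_div(
        surrogate(x).log_softmax(dim=1), soft_labels, reduction="batchmean"
    )
    opt.zero_grad(); loss.backward(); opt.step()
# The surrogate now approximates the victim's decision function (an
# accuracy-oriented extraction; fidelity attacks instead match outputs exactly).
```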

Protecting intellectual property of large language model-based code generation apis via watermarks

Z Li, C Wang, S Wang, C Gao - Proceedings of the 2023 ACM SIGSAC …, 2023 - dl.acm.org
The rise of large language model-based code generation (LLCG) has enabled various
commercial services and APIs. Training LLCG models is often expensive and time …
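One generic watermarking idea in this space (not necessarily the scheme in this paper) is to bias generation toward a keyed "green" subset of tokens and later test suspect output for that bias. A minimal sketch of the detection side, with the key and tokenization purely illustrative:

```python
# Minimal sketch of keyed green-list watermark detection. Unwatermarked text
# should score near 0.5; output biased toward green tokens scores higher.
import hashlib

def is_green(token: str, key: str = "secret") -> bool:
    # Keyed pseudo-random partition of the vocabulary into green/red halves.
    h = hashlib.sha256((key + token).encode()).digest()
    return h[0] % 2 == 0

def green_fraction(tokens: list[str]) -> float:
    return sum(is_green(t) for t in tokens) / max(len(tokens), 1)

print(green_fraction("def add ( a , b ) : return a + b".split()))
```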

Towards data-free model stealing in a hard label setting

S Sanyal, S Addepalli, RV Babu - Proceedings of the IEEE …, 2022 - openaccess.thecvf.com
Machine learning models deployed as a service (MLaaS) are susceptible to model
stealing attacks, where an adversary attempts to steal the model within a restricted access …
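The hard-label restriction this paper targets means the victim reveals only the top-1 class, not probabilities, so a surrogate must train on argmax labels with plain cross-entropy. A minimal sketch of one such training step, with illustrative model shapes:

```python
# Minimal sketch of the hard-label constraint: only the predicted class is
# revealed, so the surrogate fits argmax labels rather than soft outputs.
import torch
import torch.nn as nn

victim = nn.Linear(20, 5)      # stands in for the black-box service
surrogate = nn.Linear(20, 5)
opt = torch.optim.SGD(surrogate.parameters(), lr=0.1)

x = torch.randn(64, 20)        # synthetic queries (data-free setting)
with torch.no_grad():
    hard_labels = victim(x).argmax(dim=1)   # only the label is revealed
loss = nn.functional.cross_entropy(surrogate(x), hard_labels)
opt.zero_grad(); loss.backward(); opt.step()
```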

Entangled watermarks as a defense against model extraction

H Jia, CA Choquette-Choo, V Chandrasekaran… - 30th USENIX security …, 2021 - usenix.org
Machine learning involves expensive data collection and training procedures. Model owners
may be concerned that valuable intellectual property can be leaked if adversaries mount …
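Watermark defenses of this kind are typically verified with a trigger set: if a suspect model reproduces the owner's planted labels on secret trigger inputs far above chance, extraction is suspected. A minimal sketch of that verification test, with placeholder tensors:

```python
# Minimal sketch of trigger-set ownership verification.
import torch
import torch.nn as nn

def trigger_accuracy(model: nn.Module, triggers: torch.Tensor,
                     planted: torch.Tensor) -> float:
    with torch.no_grad():
        return (model(triggers).argmax(dim=1) == planted).float().mean().item()

suspect = nn.Linear(20, 5)
triggers = torch.randn(100, 20)        # owner's secret watermark inputs
planted = torch.randint(0, 5, (100,))  # labels deliberately planted at training
acc = trigger_accuracy(suspect, triggers, planted)
print(f"trigger accuracy {acc:.2f} vs. ~0.20 chance for 5 classes")
```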

Privacy side channels in machine learning systems

E Debenedetti, G Severi, N Carlini… - 33rd USENIX Security …, 2024 - usenix.org
Most current approaches for protecting privacy in machine learning (ML) assume that
models exist in a vacuum. Yet, in reality, these models are part of larger systems that include …

Data-free model extraction

JB Truong, P Maini, RJ Walls… - Proceedings of the …, 2021 - openaccess.thecvf.com
Current model extraction attacks assume that the adversary has access to a surrogate
dataset with characteristics similar to the proprietary data used to train the victim model. This …
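In the data-free setting this snippet describes, a common recipe is to train a generator to synthesize queries on which student and victim disagree most, while the student trains to match the victim on those queries. A minimal sketch under that recipe, treating the victim as a constant black box; all architectures are illustrative:

```python
# Minimal sketch of generator-driven data-free extraction.
import torch
import torch.nn as nn

victim = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 5))
student = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 5))
generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 20))
opt_s = torch.optim.Adam(student.parameters(), lr=1e-3)
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)

for step in range(200):
    # Generator step: maximize student-victim disagreement (L1 here).
    # The victim is queried under no_grad, i.e. treated as a black box.
    x = generator(torch.randn(32, 8))
    with torch.no_grad():
        v = victim(x).softmax(dim=1)
    g_loss = -(student(x).softmax(dim=1) - v).abs().sum(dim=1).mean()
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

    # Student step: minimize disagreement on fresh synthetic queries.
    x = generator(torch.randn(32, 8)).detach()
    with torch.no_grad():
        v = victim(x).softmax(dim=1)
    s_loss = (student(x).softmax(dim=1) - v).abs().sum(dim=1).mean()
    opt_s.zero_grad(); s_loss.backward(); opt_s.step()
```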

DeepSteal: Advanced model extractions leveraging efficient weight stealing in memories

AS Rakin, MHI Chowdhuryy, F Yao… - 2022 IEEE symposium …, 2022 - ieeexplore.ieee.org
Recent advancements in Deep Neural Networks (DNNs) have enabled widespread
deployment in multiple security-sensitive domains. The need for resource-intensive training …