I know what you trained last summer: A survey on stealing machine learning models and defences
Machine-Learning-as-a-Service (MLaaS) has become a widespread paradigm, making
even the most complex Machine Learning models available for clients via, e.g., a pay-per …
Stealing part of a production language model
We introduce the first model-stealing attack that extracts precise, nontrivial information from
black-box production language models like OpenAI's ChatGPT or Google's PaLM-2 …
Privacy side channels in machine learning systems
Most current approaches for protecting privacy in machine learning (ML) assume that
models exist in a vacuum. Yet, in reality, these models are part of larger systems that include …
An Overview of Trustworthy AI: Advances in IP Protection, Privacy-preserving Federated Learning, Security Verification, and GAI Safety Alignment
AI has undergone a remarkable evolution journey marked by groundbreaking milestones.
Like any powerful tool, it can be turned into a weapon for devastation in the wrong hands …
Marich: A Query-efficient Distributionally Equivalent Model Extraction Attack
P Karmakar, D Basu - Advances in Neural Information …, 2024 - proceedings.neurips.cc
We study the design of black-box model extraction attacks that can send a minimal number of
queries from a publicly available dataset to a target ML model through a predictive API …
ModelGuard: Information-theoretic defense against model extraction attacks
Malicious utilization of a query interface can compromise the confidentiality of
ML-as-a-Service (MLaaS) systems via model extraction attacks. Previous studies have proposed to …
MalProtect: Stateful defense against adversarial query attacks in ML-based malware detection
ML models are known to be vulnerable to adversarial query attacks. In these attacks, queries
are iteratively perturbed towards a particular class without any knowledge of the target …
A comprehensive survey of attack techniques, implementation, and mitigation strategies in large language models
Ensuring the security of large language models (LLMs) is an ongoing challenge despite
their widespread popularity. Developers work to enhance LLM security, but vulnerabilities …
FDINet: Protecting against DNN model extraction via feature distortion index
Machine Learning as a Service (MLaaS) platforms have gained popularity due to their
accessibility, cost-efficiency, scalability, and rapid development capabilities. However …
SeInspect: Defending model stealing via heterogeneous semantic inspection
X Liu, Z Ma, Y Liu, Z Qin, J Zhang, Z Wang - European Symposium on …, 2022 - Springer
Recent works developed an emerging attack, called Model Stealing (MS), to steal the
functionalities of remote models, rendering the privacy of cloud-based machine learning …