Protecting intellectual property of large language model-based code generation APIs via watermarks

Z Li, C Wang, S Wang, C Gao - Proceedings of the 2023 ACM SIGSAC …, 2023 - dl.acm.org
The rise of large language model-based code generation (LLCG) has enabled various
commercial services and APIs. Training LLCG models is often expensive and time …

No privacy left outside: On the (in)security of TEE-shielded DNN partition for on-device ML

Z Zhang, C Gong, Y Cai, Y Yuan, B Liu… - … IEEE Symposium on …, 2024 - ieeexplore.ieee.org
On-device ML introduces new security challenges: DNN models become white-box
accessible to device users. Based on white-box information, adversaries can conduct …

Machine learning with confidential computing: A systematization of knowledge

F Mo, Z Tarkhani, H Haddadi - ACM Computing Surveys, 2024 - dl.acm.org
Privacy and security challenges in Machine Learning (ML) have become increasingly
severe, along with ML's pervasive development and the recent demonstration of large attack …

Graft: Efficient inference serving for hybrid deep learning with SLO guarantees via DNN re-alignment

J Wu, L Wang, Q **, F Liu - IEEE Transactions on Parallel and …, 2023 - ieeexplore.ieee.org
Deep neural networks (DNNs) have been widely adopted for various mobile inference tasks,
yet their ever-increasing computational demands are hindering their deployment on …

FedSlice: Protecting Federated Learning Models from Malicious Participants with Model Slicing

Z Zhang, Y Li, B Liu, Y Cai, D Li… - 2023 IEEE/ACM 45th …, 2023 - ieeexplore.ieee.org
Crowdsourcing Federated Learning (CFL) is a new crowdsourcing development paradigm
for Deep Neural Network (DNN) models, also called “software 2.0”. In practice, the …

Silent guardian: Protecting text from malicious exploitation by large language models

J Zhao, K Chen, X Yuan, Y Qi… - IEEE Transactions on …, 2024 - ieeexplore.ieee.org
The rapid development of large language models (LLMs) has yielded impressive success in
various downstream tasks. However, the vast potential and remarkable capabilities of LLMs …

CipherSteal: Stealing Input Data from TEE-Shielded Neural Networks with Ciphertext Side Channels

Y Yuan, Z Liu, S Deng, Y Chen… - … on Security and …, 2024 - yuanyuan-yuan.github.io
Shielding neural networks (NNs) from untrusted hosts with Trusted Execution Environments
(TEEs) has been increasingly adopted. Nevertheless, this paper shows that the …

QDRL: Queue-aware online DRL for computation offloading in Industrial Internet of Things

A Xu, Z Hu, X Zhang, H Xiao, H Zheng… - IEEE Internet of …, 2023 - ieeexplore.ieee.org
Recently, the Industrial Internet of Things (IIoT) has shown great application value in
environmental monitoring. However, it suffers from serious bottlenecks in energy and …

CRONUS: Fault-isolated, secure and high-performance heterogeneous computing for trusted execution environment

J Jiang, J Qi, T Shen, X Chen, S Zhao… - 2022 55th IEEE/ACM …, 2022 - ieeexplore.ieee.org
With the trend of processing a large volume of sensitive data on PaaS services (e.g., DNN
training), a TEE architecture that supports general heterogeneous accelerators, enables …

TransLinkGuard: Safeguarding Transformer Models Against Model Stealing in Edge Deployment

Q Li, Z Shen, Z Qin, Y Xie, X Zhang, T Du… - Proceedings of the …, 2024 - dl.acm.org
Proprietary large language models (LLMs) have been widely applied in various scenarios.
Additionally, deploying LLMs on edge devices is trending for efficiency and privacy reasons …