Protecting intellectual property of large language model-based code generation APIs via watermarks
The rise of large language model-based code generation (LLCG) has enabled various
commercial services and APIs. Training LLCG models is often expensive and time …
No privacy left outside: On the (in-)security of TEE-shielded DNN partition for on-device ML
On-device ML introduces new security challenges: DNN models become white-box
accessible to device users. Based on white-box information, adversaries can conduct …
Machine learning with confidential computing: A systematization of knowledge
Privacy and security challenges in Machine Learning (ML) have become increasingly
severe, along with ML's pervasive development and the recent demonstration of large attack …
Graft: Efficient inference serving for hybrid deep learning with SLO guarantees via DNN re-alignment
Deep neural networks (DNNs) have been widely adopted for various mobile inference tasks,
yet their ever-increasing computational demands are hindering their deployment on …
FedSlice: Protecting Federated Learning Models from Malicious Participants with Model Slicing
Crowdsourcing Federated Learning (CFL) is a new crowdsourcing development paradigm
for Deep Neural Network (DNN) models, also called “software 2.0”. In practice, the …
Silent guardian: Protecting text from malicious exploitation by large language models
The rapid development of large language models (LLMs) has yielded impressive success in
various downstream tasks. However, the vast potential and remarkable capabilities of LLMs …
CipherSteal: Stealing Input Data from TEE-Shielded Neural Networks with Ciphertext Side Channels
Y Yuan, Z Liu, S Deng, Y Chen… - … on Security and …, 2024 - yuanyuan-yuan.github.io
Shielding neural networks (NNs) from untrusted hosts with Trusted Execution Environments
(TEEs) has been increasingly adopted. Nevertheless, this paper shows that the …
QDRL: Queue-aware online DRL for computation offloading in Industrial Internet of Things
A Xu, Z Hu, X Zhang, H Xiao, H Zheng… - IEEE Internet of …, 2023 - ieeexplore.ieee.org
Recently, the Industrial Internet of Things (IIoT) has shown great application value in
environmental monitoring. However, it suffers from serious bottlenecks in energy and …
CRONUS: Fault-isolated, secure and high-performance heterogeneous computing for trusted execution environment
With the trend of processing a large volume of sensitive data on PaaS services (e.g., DNN
training), a TEE architecture that supports general heterogeneous accelerators, enables …
TransLinkGuard: Safeguarding Transformer Models Against Model Stealing in Edge Deployment
Proprietary large language models (LLMs) have been widely applied in various scenarios.
Additionally, deploying LLMs on edge devices is trending for efficiency and privacy reasons …