Intel Software Guard Extensions applications: A survey

NC Will, CA Maziero - ACM Computing Surveys, 2023 - dl.acm.org
Data confidentiality is a central concern in modern computer systems and services, as
sensitive data from users and companies are being increasingly delegated to such systems …

No privacy left outside: On the (in)security of TEE-shielded DNN partition for on-device ML

Z Zhang, C Gong, Y Cai, Y Yuan, B Liu… - … IEEE Symposium on …, 2024 - ieeexplore.ieee.org
On-device ML introduces new security challenges: DNN models become white-box
accessible to device users. Based on white-box information, adversaries can conduct …

Fuzzing SGX enclaves via host program mutations

A Khan, M Zou, K Kim, D Xu, A Bianchi… - 2023 IEEE 8th …, 2023 - ieeexplore.ieee.org
Intel Software Guard eXtension (SGX) is the cornerstone of Confidential Computing,
enabling runtime code and data integrity and confidentiality via enclaves. Unfortunately …

Does federated learning really need backpropagation?

H Feng, T Pang, C Du, W Chen, S Yan… - arXiv preprint arXiv …, 2023 - researchgate.net
Federated learning (FL) is a general principle for decentralized clients to train a server
model collectively without sharing local data. FL is a promising framework with practical …

GuardNN: Secure accelerator architecture for privacy-preserving deep learning

W Hua, M Umar, Z Zhang, GE Suh - Proceedings of the 59th ACM/IEEE …, 2022 - dl.acm.org
This paper proposes GuardNN, a secure DNN accelerator that provides hardware-based
protection for user data and model parameters even in an untrusted environment. GuardNN …

HyperTheft: Thieving Model Weights from TEE-Shielded Neural Networks via Ciphertext Side Channels

Y Yuan, Z Liu, S Deng, Y Chen, S Wang… - Proceedings of the …, 2024 - dl.acm.org
Trusted execution environments (TEEs) are widely employed to protect deep neural
networks (DNNs) from untrusted hosts (e.g., hypervisors). By shielding DNNs as fully black …

CipherSteal: Stealing Input Data from TEE-Shielded Neural Networks with Ciphertext Side Channels

Y Yuan, Z Liu, S Deng, Y Chen… - … on Security and …, 2024 - yuanyuan-yuan.github.io
Shielding neural networks (NNs) from untrusted hosts with Trusted Execution Environments
(TEEs) has been increasingly adopted. Nevertheless, this paper shows that the …

Stealthy backdoors as compression artifacts

Y Tian, F Suya, F Xu, D Evans - IEEE Transactions on …, 2022 - ieeexplore.ieee.org
Model compression is a widely-used approach for reducing the size of deep learning
models without much accuracy loss, enabling resource-hungry models to be compressed for …

TEE-based decentralized recommender systems: The raw data sharing redemption

A Dhasade, N Dresevic… - 2022 IEEE …, 2022 - ieeexplore.ieee.org
Recommenders are central in many applications today. The most effective recommendation
schemes, such as those based on collaborative filtering (CF), exploit similarities between …
