Privacy in large language models: Attacks, defenses and future directions
The advancement of large language models (LLMs) has significantly enhanced the ability to
effectively tackle various downstream NLP tasks and unify these tasks into generative …
SIGMA: Secure GPT inference with function secret sharing
Abstract Secure 2-party computation (2PC) enables secure inference that offers protection
for both proprietary machine learning (ML) models and sensitive inputs to them. However …
CipherGPT: Secure two-party GPT inference
ChatGPT is recognized as a significant revolution in the field of artificial intelligence, but it
raises serious concerns regarding user privacy, as the data submitted by users may contain …
BumbleBee: Secure two-party inference framework for large transformers
Large transformer-based models have realized state-of-the-art performance on lots of real-
world tasks such as natural language processing and computer vision. However, with the …
Secure transformer inference made non-interactive
Secure transformer inference has emerged as a prominent research topic following the
proliferation of ChatGPT. Existing solutions are typically interactive, involving substantial …
SecFormer: Towards fast and accurate privacy-preserving inference for large language models
With the growing use of large language models hosted on cloud platforms to offer inference
services, privacy concerns are escalating, especially concerning sensitive data like …
Panther: Practical Secure 2-Party Neural Network Inference
Secure two-party neural network (2P-NN) inference allows the server with a neural network
model and the client with inputs to perform neural network inference without revealing their …
Rhombus: Fast Homomorphic Matrix-Vector Multiplication for Secure Two-Party Inference
J He, K Yang, G Tang, Z Huang, L Lin, C Wei… - Proceedings of the …, 2024 - dl.acm.org
We present Rhombus, a new secure matrix-vector multiplication (MVM) protocol in the semi-
honest two-party setting, which is able to be seamlessly integrated into existing privacy …
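The entries above share a common primitive: computing additive shares of W·x where the server holds the weight matrix W and the client holds the input vector x. As a rough, generic illustration of that semi-honest two-party setting (not Rhombus's actual protocol, which builds on homomorphic encryption), the sketch below uses a Beaver-style triple from an assumed trusted dealer; the names and the toy ring modulus are illustrative choices, not anything from the paper.

```python
# Minimal sketch: semi-honest two-party matrix-vector multiplication via
# additive secret sharing and a dealer-generated Beaver-style triple.
import numpy as np

P = 2**16                      # toy ring modulus (real systems use e.g. 2^32 or 2^64)
rng = np.random.default_rng(0)

def rand_like(shape):
    return rng.integers(0, P, size=shape, dtype=np.int64)

# Inputs: server holds the weight matrix W, client holds the vector x.
W = rand_like((4, 3))          # server's private model weights
x = rand_like((3,))            # client's private input

# Offline phase: dealer samples (A, b, c = A @ b) and additively shares c.
A = rand_like(W.shape)
b = rand_like(x.shape)
c = (A @ b) % P
c_srv = rand_like(c.shape)
c_cli = (c - c_srv) % P

# Online phase: each party masks its input with its correlated randomness
# (a one-time pad in the ring) and sends only the masked difference.
E = (W - A) % P                # sent server -> client
f = (x - b) % P                # sent client -> server

# Local computation of additive shares of y = W @ x:
#   W @ x = (A + E) @ (b + f) = A@b + A@f + E@b + E@f
y_srv = (c_srv + A @ f + E @ f) % P    # server's share
y_cli = (c_cli + E @ b) % P            # client's share

assert np.array_equal((y_srv + y_cli) % P, (W @ x) % P)
```

Neither exchanged message reveals the other party's input, since A and b act as uniform masks; the cost of the triple is what protocols like Rhombus aim to reduce with faster homomorphic preprocessing.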
MPC-minimized secure LLM inference
Many inference services based on large language models (LLMs) pose a privacy concern,
either revealing user prompts to the service or the proprietary weights to the user. Secure …
PrivCirNet: Efficient private inference via block circulant transformation
Homomorphic encryption (HE)-based deep neural network (DNN) inference protects data
and model privacy but suffers from significant computation overhead. We observe …
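For intuition on why block-circulant structure reduces that overhead, here is a generic sketch (not PrivCirNet's HE pipeline): a circulant block is determined by a single column, so it can be applied by FFT-based circular convolution in O(n log n) instead of a dense O(n^2) product. The dimensions and variable names below are illustrative.

```python
# Illustrative only: applying a circulant weight block via FFT instead of a dense matmul.
import numpy as np

n = 8
rng = np.random.default_rng(1)
first_col = rng.standard_normal(n)     # a circulant block is defined by its first column
x = rng.standard_normal(n)

# Dense equivalent: C[i, j] = first_col[(i - j) % n]
C = np.array([[first_col[(i - j) % n] for j in range(n)] for i in range(n)])

# FFT route: C @ x equals the circular convolution of first_col with x.
y_fft = np.real(np.fft.ifft(np.fft.fft(first_col) * np.fft.fft(x)))

assert np.allclose(C @ x, y_fft)
```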