TrustLLM: Trustworthiness in large language models

Y Huang, L Sun, H Wang, S Wu, Q Zhang, Y Li… - arXiv preprint arXiv …, 2024 - arxiv.org
Large language models (LLMs), exemplified by ChatGPT, have gained considerable
attention for their excellent natural language processing capabilities. Nonetheless, these …

Position: TrustLLM: Trustworthiness in large language models

Y Huang, L Sun, H Wang, S Wu… - International …, 2024 - proceedings.mlr.press
Large language models (LLMs) have gained considerable attention for their excellent
natural language processing capabilities. Nonetheless, these LLMs present many …

On protecting the data privacy of large language models (LLMs): A survey

B Yan, K Li, M Xu, Y Dong, Y Zhang, Z Ren… - arXiv preprint arXiv …, 2024 - arxiv.org
Large language models (LLMs) are complex artificial intelligence systems capable of
understanding, generating and translating human language. They learn language patterns …

Bolt: Privacy-preserving, accurate and efficient inference for transformers

Q Pang, J Zhu, H Möllering, W Zheng… - … IEEE Symposium on …, 2024 - ieeexplore.ieee.org
The advent of transformers has brought about significant advancements in traditional
machine learning tasks. However, their pervasive deployment has raised concerns about …

Privacy in large language models: Attacks, defenses and future directions

H Li, Y Chen, J Luo, J Wang, H Peng, Y Kang… - arXiv preprint arXiv …, 2023 - arxiv.org
The advancement of large language models (LLMs) has significantly enhanced the ability to
effectively tackle various downstream NLP tasks and unify these tasks into generative …

Bumblebee: Secure two-party inference framework for large transformers

W Lu, Z Huang, Z Gu, J Li, J Liu, C Hong… - Cryptology ePrint …, 2023 - eprint.iacr.org
Large transformer-based models have achieved state-of-the-art performance on many real-
world tasks such as natural language processing and computer vision. However, with the …

Secure transformer inference made non-interactive

J Zhang, X Yang, L He, K Chen, W Lu… - Cryptology ePrint …, 2024 - eprint.iacr.org
Secure transformer inference has emerged as a prominent research topic following the
proliferation of ChatGPT. Existing solutions are typically interactive, involving substantial …

SecFormer: Towards fast and accurate privacy-preserving inference for large language models

J Luo, Y Zhang, Z Zhang, J Zhang, X Mu… - arXiv preprint arXiv …, 2024 - arxiv.org
With the growing use of large language models hosted on cloud platforms to offer inference
services, privacy concerns are escalating, especially concerning sensitive data like …

Panther: Practical Secure 2-Party Neural Network Inference

J Feng, Y Wu, H Sun, S Zhang… - IEEE Transactions on …, 2025 - ieeexplore.ieee.org
Secure two-party neural network (2P-NN) inference allows the server with a neural network
model and the client with inputs to perform neural network inference without revealing their …
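
The two-party setting described in this entry can be illustrated with a minimal sketch. The Python/NumPy snippet below is a toy example of additive secret sharing over the ring Z_2^32, the basic building block that 2P-NN inference protocols rely on; it is not Panther's actual protocol, and the ring size, layer shapes, and values are arbitrary choices for illustration. The client's input is split into two random shares, a public linear layer is evaluated locally on each share, and the shares reconstruct to the true output; hiding the server's weights as well requires further machinery (e.g., multiplication triples or homomorphic encryption), which is what the protocols cited here provide.

# Toy sketch of additive secret sharing for two-party inference (illustration only).
import numpy as np

MOD = 1 << 32          # arithmetic ring Z_{2^32}
rng = np.random.default_rng(0)

def share(x):
    """Split an integer vector x into two additive shares mod 2^32."""
    r = rng.integers(0, MOD, size=x.shape, dtype=np.uint64)
    return r, (x - r) % MOD

def reconstruct(s0, s1):
    """Recombine the two shares into the original value mod 2^32."""
    return (s0 + s1) % MOD

# Client-side secret input, already quantized to ring elements.
x = np.array([3, 141, 59, 26], dtype=np.uint64)
x0, x1 = share(x)                      # x0 goes to party 0, x1 to party 1

# Public linear layer: each party evaluates it locally on its own share.
W = rng.integers(0, 10, size=(2, 4), dtype=np.uint64)
b = np.array([7, 7], dtype=np.uint64)
y0 = (W @ x0 + b) % MOD                # party 0's share of the output
y1 = (W @ x1) % MOD                    # party 1's share of the output

# The shares of the output reconstruct to W @ x + b without either party
# having seen x in the clear.
assert np.array_equal(reconstruct(y0, y1), (W @ x + b) % MOD)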

SecFormer: Fast and Accurate Privacy-Preserving Inference for Transformer Models via SMPC

J Luo, Y Zhang, Z Zhang, J Zhang, X Mu… - Findings of the …, 2024 - aclanthology.org
With the growing use of Transformer models hosted on cloud platforms to offer inference
services, privacy concerns are escalating, especially concerning sensitive data like …