Haoqi Wu
TikTok
Verified email at fudan.edu.cn
Title · Cited by · Year
PUMA: Secure inference of LLaMA-7B in five minutes
Y Dong, W Lu, Y Zheng, H Wu, D Zhao, J Tan, Z Huang, C Hong, T Wei, ...
arXiv preprint arXiv:2307.12533, 2023
Cited by 53 · 2023
SecretFlow-SPU: A Performant and User-Friendly Framework for Privacy-Preserving Machine Learning
J Ma, Y Zheng, J Feng, D Zhao, H Wu, W Fang, J Tan, C Yu, B Zhang, ...
2023 USENIX Annual Technical Conference (USENIX ATC 23), 17-33, 2023
Cited by 35 · 2023
Improving real-world password guessing attacks via bi-directional transformers
M Xu, J Yu, X Zhang, C Wang, S Zhang, H Wu, W Han
32nd USENIX Security Symposium (USENIX Security 23), 1001-1018, 2023
Cited by 18 · 2023
pMPL: A robust multi-party learning framework with a privileged party
L Song, J Wang, Z Wang, X Tu, G Lin, W Ruan, H Wu, W Han
Proceedings of the 2022 ACM SIGSAC Conference on Computer and Communications …, 2022
Cited by 18 · 2022
SoK: Training Machine Learning Models over Multiple Sources with Privacy Preservation
L Song, H Wu, W Ruan, W Han
arXiv preprint arXiv:2012.03386, 2020
Cited by 14 · 2020
Automated Enforcement of the Principle of Least Privilege over Data Source Access
H Wu, Z Yu, D Huang, H Zhang, W Han
2020 IEEE 19th International Conference on Trust, Security and Privacy in …, 2020
Cited by 8 · 2020
Ditto: Quantization-aware Secure Inference of Transformers upon MPC
H Wu, W Fang, Y Zheng, J Ma, J Tan, Y Wang, L Wang
arXiv preprint arXiv:2405.05525, 2024
Cited by 2 · 2024
Nimbus: Secure and Efficient Two-Party Inference for Transformers
Z Li, K Yang, J Tan, W Lu, H Wu, X Wang, Y Yu, D Zhao, Y Zheng, M Guo, ...
arXiv preprint arXiv:2411.15707, 2024
2024