When federated learning meets privacy-preserving computation
J Chen, H Yan, Z Liu, M Zhang, H Xiong… - ACM Computing Surveys, 2024 - dl.acm.org
Nowadays, with the development of artificial intelligence (AI), privacy issues attract wide
attention from society and individuals. It is desirable to make the data available but invisible …
BFL-SA: Blockchain-based federated learning via enhanced secure aggregation
Federated learning, involving a central server and multiple clients, aims to keep data local
but raises privacy concerns like data exposure and participation privacy. Secure …
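The secure aggregation referenced in this entry is typically built from pairwise masking in the style of Bonawitz et al.: every pair of clients agrees on a shared seed, one adds the derived mask and the other subtracts it, so the masks cancel in the server's sum. A minimal sketch of that cancellation idea, assuming a toy prime modulus and a stand-in seed function in place of real key agreement, with no dropout recovery:

```python
import random

P = 2**61 - 1  # toy prime modulus; real protocols derive masks from agreed PRG seeds

def pairwise_masks(client_ids, dim, seed_fn):
    """Derive a pairwise mask vector per client; masks cancel in the sum."""
    masks = {i: [0] * dim for i in client_ids}
    for i in client_ids:
        for j in client_ids:
            if i < j:
                rng = random.Random(seed_fn(i, j))          # shared seed s_ij
                m = [rng.randrange(P) for _ in range(dim)]
                for k in range(dim):
                    masks[i][k] = (masks[i][k] + m[k]) % P  # client i adds
                    masks[j][k] = (masks[j][k] - m[k]) % P  # client j subtracts
    return masks

def mask_update(update, mask):
    return [(u + m) % P for u, m in zip(update, mask)]

# Toy run: three clients, 4-dimensional integer-encoded updates.
clients = [1, 2, 3]
updates = {1: [5, 1, 0, 2], 2: [3, 3, 3, 3], 3: [1, 0, 7, 2]}
seed_fn = lambda i, j: hash((i, j))  # stand-in for a Diffie-Hellman agreed seed
masks = pairwise_masks(clients, 4, seed_fn)

masked = [mask_update(updates[c], masks[c]) for c in clients]
aggregate = [sum(col) % P for col in zip(*masked)]  # server only sees masked vectors
assert aggregate == [sum(col) % P for col in zip(*updates.values())]
print(aggregate)  # [9, 4, 10, 7]
```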
LERNA: secure single-server aggregation via key-homomorphic masking
This paper introduces LERNA, a new framework for single-server secure aggregation. Our
protocols are tailored to the setting where multiple consecutive aggregation phases are …
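The key-homomorphic masking in LERNA relies on a PRF family where keys add up: F_{k1+k2}(t) = F_{k1}(t) + F_{k2}(t), so a committee holding (shares of) the summed key can remove the aggregate mask in one step per round. The sketch below uses a deliberately toy, insecure instantiation F_k(t) = k·H(t) mod p just to show the algebra; real constructions use vetted lattice- or DDH-based PRFs, and the committee key would be secret-shared rather than held in one place:

```python
import hashlib, random

P = 2**127 - 1  # toy prime; not a vetted group

def H(t: str) -> int:
    return int.from_bytes(hashlib.sha256(t.encode()).digest(), "big") % P

def F(k: int, t: str) -> int:
    # Toy key-homomorphic "PRF": F_{k1}(t) + F_{k2}(t) = F_{k1+k2}(t) (mod P).
    return (k * H(t)) % P

# Setup: each client holds a long-term key; the committee holds K = sum of keys.
keys = {i: random.randrange(P) for i in range(5)}
K = sum(keys.values()) % P

# One aggregation round t: clients send x_i + F_{k_i}(t). Keys are reused across
# rounds because only the tag t changes, which is the point of this design.
t = "round-42"
inputs = {i: random.randrange(1000) for i in range(5)}
masked = {i: (inputs[i] + F(keys[i], t)) % P for i in range(5)}

server_sum = sum(masked.values()) % P
aggregate = (server_sum - F(K, t)) % P   # committee strips the aggregate mask
assert aggregate == sum(inputs.values()) % P
print(aggregate)
```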
SoK: Public Randomness
Public randomness is a fundamental component in many cryptographic protocols and
distributed systems and often plays a crucial role in ensuring their security, fairness, and …
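Among the constructions such an SoK surveys, the simplest public-randomness beacon is commit-then-reveal, whose well-known weakness (the last revealer can bias the output by aborting) motivates VDF- and threshold-based designs. A minimal sketch of the commit-reveal pattern, with hypothetical party names:

```python
import hashlib, os

def commit(value: bytes, nonce: bytes) -> bytes:
    return hashlib.sha256(nonce + value).digest()

# Phase 1: each participant commits to a random contribution.
parties = {}
for name in ["alice", "bob", "carol"]:
    value, nonce = os.urandom(32), os.urandom(32)
    parties[name] = (value, nonce, commit(value, nonce))

commitments = {n: c for n, (_, _, c) in parties.items()}  # published first

# Phase 2: reveals are checked against the commitments, then XOR-combined.
beacon = bytes(32)
for name, (value, nonce, c) in parties.items():
    assert commit(value, nonce) == commitments[name], f"{name} equivocated"
    beacon = bytes(a ^ b for a, b in zip(beacon, value))

print(beacon.hex())  # unpredictable as long as at least one party is honest
# Caveat: whoever reveals last can withhold its value to bias the result;
# this is exactly the bias problem stronger beacon designs aim to remove.
```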
Guaranteeing data privacy in federated unlearning with dynamic user participation
Federated Unlearning (FU) is gaining prominence for its capability to eliminate influences of
specific users' data from trained global Federated Learning (FL) models. A straightforward …
TAPFed: Threshold Secure Aggregation for Privacy-Preserving Federated Learning
Federated learning is a computing paradigm that enhances privacy by enabling multiple
parties to collaboratively train a machine learning model without revealing personal data …
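TAPFed itself is built on threshold (functional) encryption across several aggregators; the sketch below illustrates only the simpler, related idea of splitting each client update into additive shares so that no single aggregator sees a plaintext update. This is an n-of-n split, not a true t-of-n threshold scheme (which would use Shamir sharing), and all names are illustrative:

```python
import random

P = 2**61 - 1
NUM_AGG = 3  # aggregators; each sees only random-looking shares

def share(x: int, n: int = NUM_AGG):
    """Split x into n additive shares mod P."""
    shares = [random.randrange(P) for _ in range(n - 1)]
    shares.append((x - sum(shares)) % P)
    return shares

# Each client shares its (integer-encoded) update coordinate-wise.
updates = {"c1": [4, 7, 1], "c2": [2, 0, 5], "c3": [9, 3, 3]}
agg_inbox = [[] for _ in range(NUM_AGG)]            # one inbox per aggregator
for vec in updates.values():
    per_coord = [share(x) for x in vec]             # dim x NUM_AGG shares
    for a in range(NUM_AGG):
        agg_inbox[a].append([s[a] for s in per_coord])

# Each aggregator sums the shares it received.
partial = [[sum(col) % P for col in zip(*inbox)] for inbox in agg_inbox]

# Combining the partial sums reveals only the aggregate update.
aggregate = [sum(col) % P for col in zip(*partial)]
assert aggregate == [sum(col) % P for col in zip(*updates.values())]
print(aggregate)  # [15, 10, 9]
```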
POPSTAR: Lightweight Threshold Reporting with Reduced Leakage
This paper proposes POPSTAR, a new lightweight protocol for the private computation of
heavy hitters, also known as a private threshold reporting system. In such a protocol, the …
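For orientation, the target functionality of a threshold reporting system is easy to state: the server should learn only the strings reported by at least T clients, and nothing about rarer reports. The sketch below is that ideal functionality written plainly, not POPSTAR's protocol, which achieves the same outcome without the server ever seeing individual reports; the threshold and report strings are made up:

```python
from collections import Counter

THRESHOLD = 3  # a report becomes visible only if at least this many clients sent it

def threshold_report(reports, threshold=THRESHOLD):
    """Ideal functionality of threshold reporting: reveal only the heavy
    hitters (strings reported by >= threshold clients) and their counts."""
    counts = Counter(reports)
    return {s: c for s, c in counts.items() if c >= threshold}

reports = ["example.com", "example.com", "rare.net", "example.com",
           "tracker.io", "tracker.io", "tracker.io", "rare.net"]
print(threshold_report(reports))  # {'example.com': 3, 'tracker.io': 3}
```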
Scale-MIA: A scalable model inversion attack against secure federated learning via latent space reconstruction
Federated learning is known for its capability to safeguard participants' data privacy.
However, recently emerged model inversion attacks (MIAs) have shown that a malicious …
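Scale-MIA reconstructs inputs through a learned latent space, but the underlying risk is already visible in the textbook case of a single linear layer, where shared gradients reveal the input exactly. A small numpy sketch of that classic leakage (not Scale-MIA's attack), with made-up dimensions and a squared-error loss:

```python
import numpy as np

rng = np.random.default_rng(0)

# One client, one linear layer y = W x + b, squared-error loss against a target.
d_in, d_out = 8, 4
W, b = rng.normal(size=(d_out, d_in)), rng.normal(size=d_out)
x = rng.normal(size=d_in)                 # the private input
target = rng.normal(size=d_out)

y = W @ x + b
g = 2 * (y - target)                      # dL/dy for L = ||y - target||^2

grad_W = np.outer(g, x)                   # dL/dW = g x^T  (what FL clients share)
grad_b = g                                # dL/db = g

# Server-side reconstruction: any row i with grad_b[i] != 0 reveals x exactly,
# since grad_W[i] = grad_b[i] * x.
i = int(np.argmax(np.abs(grad_b)))
x_reconstructed = grad_W[i] / grad_b[i]
print(np.allclose(x_reconstructed, x))    # True: the gradient leaks the input
```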
Two-Tier Data Packing in RLWE-based Homomorphic Encryption for Secure Federated Learning
Homomorphic Encryption (HE) facilitates the preservation of privacy in federated learning
(FL) aggregation. However, HE imposes significant computational and communication …
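The packing question arises because one RLWE ciphertext carries thousands of plaintext slots, so how model parameters are laid out across slots drives both ciphertext count and bandwidth. The sketch below shows only the baseline idea of slot-packing and adding updates under encryption, assuming the TenSEAL library (a CKKS wrapper over Microsoft SEAL), not the paper's two-tier layout:

```python
# pip install tenseal  -- assumes the TenSEAL library (CKKS over Microsoft SEAL)
import tenseal as ts

# A ciphertext with poly_modulus_degree N can pack up to N/2 CKKS slots,
# which is why the packing layout matters for FL-sized models.
ctx = ts.context(ts.SCHEME_TYPE.CKKS, poly_modulus_degree=8192,
                 coeff_mod_bit_sizes=[60, 40, 40, 60])
ctx.global_scale = 2 ** 40

# Two clients' (tiny) model updates, packed slot-wise into one ciphertext each.
update_a = [0.1, -0.2, 0.05, 0.3]
update_b = [0.4, 0.1, -0.15, 0.0]
enc_a = ts.ckks_vector(ctx, update_a)
enc_b = ts.ckks_vector(ctx, update_b)

# The server aggregates without decrypting individual updates.
enc_sum = enc_a + enc_b
print(enc_sum.decrypt())  # ≈ [0.5, -0.1, -0.1, 0.3] (CKKS is approximate)
```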
Computationally secure aggregation and private information retrieval in the shuffle model
The shuffle model has recently emerged as a popular setting for differential privacy, where
clients can communicate with a central server using anonymous channels or an …
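A common construction for aggregation in the shuffle model is "split and mix": each client splits its value into several additive shares mod p, submits them through the anonymous shuffler, and the server sums everything it receives; anonymity plus mixing hides which shares belong to whom. A minimal sketch of that flow, ignoring how the anonymous channel itself is realized:

```python
import random

P = 2**61 - 1
SPLITS = 3  # shares per client; more shares strengthen the anonymity-based argument

def split(x: int, n: int = SPLITS):
    shares = [random.randrange(P) for _ in range(n - 1)]
    shares.append((x - sum(shares)) % P)
    return shares

client_values = [12, 7, 30, 5]

# Each client submits its shares via the anonymous channel; the shuffler strips
# origin information and randomly permutes all messages together.
pool = [s for v in client_values for s in split(v)]
random.shuffle(pool)

# The server sees only a shuffled pile of random-looking field elements,
# yet their sum is exactly the aggregate of the clients' inputs.
total = sum(pool) % P
assert total == sum(client_values) % P
print(total)  # 54
```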