Threats, attacks, and defenses in machine unlearning: A survey
Machine Unlearning (MU) has recently gained considerable attention due to its potential to
achieve Safe AI by removing the influence of specific data from trained Machine Learning …
Efficient dropout-resilient aggregation for privacy-preserving machine learning
Machine learning (ML) has been widely recognized as an enabler of the global trend of
digital transformation. With the increasing adoption of data-hungry machine learning …
Long-term privacy-preserving aggregation with user-dynamics for federated learning
A privacy-preserving aggregation protocol is an essential building block in privacy-enhanced
federated learning (FL), enabling the server to obtain the sum of users' locally trained …
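The aggregation idea behind such protocols can be illustrated with a minimal sketch, assuming the common pairwise-masking construction (not necessarily the scheme of the paper above): each pair of users agrees on a random mask that one adds and the other subtracts, so the masks cancel in the server's sum and only the aggregate is revealed. All names here (`make_masked_updates`, `MOD`) are illustrative.

```python
import random

MOD = 2**32  # arithmetic modulus (illustrative choice)

def make_masked_updates(updates, seed=0):
    """Return masked updates whose sum equals the sum of `updates` mod MOD."""
    rng = random.Random(seed)
    n = len(updates)
    masked = [u % MOD for u in updates]
    for i in range(n):
        for j in range(i + 1, n):
            m = rng.randrange(MOD)        # shared pairwise mask for pair (i, j)
            masked[i] = (masked[i] + m) % MOD  # user i adds the mask
            masked[j] = (masked[j] - m) % MOD  # user j subtracts it
    return masked

updates = [5, 11, 7]                       # users' local values
masked = make_masked_updates(updates)
# Masks cancel: the server recovers only the sum, not individual updates.
assert sum(masked) % MOD == sum(updates) % MOD
```

In a real protocol the pairwise masks are derived from key agreement between users rather than a shared seed, and dropout handling (the subject of the paper above) requires recovering the masks of users who leave mid-round.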
Trustworthy, responsible, and safe ai: A comprehensive architectural framework for ai safety with challenges and mitigations
AI Safety is an emerging area of critical importance to the safe adoption and deployment of
AI systems. With the rapid proliferation of AI and especially with the recent advancement of …
Privacy-preserving deep learning based on multiparty secure computation: A survey
Deep learning (DL) has demonstrated remarkable success in a variety of applications, such as
image classification, speech recognition, and anomaly detection. The unprecedented …
Privacy-Preserving Federated Unlearning with Certified Client Removal
In recent years, Federated Unlearning (FU) has gained attention for addressing the removal
of a client's influence from the global model in Federated Learning (FL) systems, thereby …
Privacy-preserving inference resistant to model extraction attacks
Privacy-Preserving Deep Learning (PPDL) has been successfully applied in the
inference phase to preserve the privacy of input data. However, PPDL models are …
Ents: An Efficient Three-party Training Framework for Decision Trees by Communication Optimization
Multi-party training frameworks for decision trees based on secure multi-party computation
enable multiple parties to train high-performance models on distributed private data with …
AD-MPC: Fully Asynchronous Dynamic MPC with Guaranteed Output Delivery
Traditional secure multiparty computation (MPC) protocols presuppose a fixed set of
participants throughout the computational process. To address this limitation, Fluid MPC …
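The arithmetic MPC setting referenced by the entries above rests on additive secret sharing; a minimal sketch (illustrative, not the protocol of any specific paper listed, with `share`/`reconstruct` as assumed names): a secret is split into random shares that sum to it modulo a prime, so no single party's share reveals anything, and addition is a purely local operation on shares.

```python
import random

P = 2**61 - 1  # a Mersenne prime modulus (illustrative choice)

def share(x, n_parties=3, rng=random):
    """Split x into n additive shares over Z_P."""
    shares = [rng.randrange(P) for _ in range(n_parties - 1)]
    shares.append((x - sum(shares)) % P)  # last share makes the sum come out to x
    return shares

def reconstruct(shares):
    """Recombine shares: their sum mod P is the secret."""
    return sum(shares) % P

# Secure addition is local: each party adds its two shares component-wise.
a_shares = share(20)
b_shares = share(22)
sum_shares = [(a + b) % P for a, b in zip(a_shares, b_shares)]
assert reconstruct(sum_shares) == 42
```

Multiplication of shared values is where protocols diverge and communication costs arise, which is why frameworks like the three-party decision-tree trainer above focus on communication optimization.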
A hybrid secure computation framework for graph neural networks
Y Ren, Y Jie, Q Wang, B Zhang… - … Conference on Privacy …, 2021 - ieeexplore.ieee.org
The Multi-party Secure Computation (MPC)-based methods for privacy-preserving Graph
Neural Networks (GNNs) are still challenged by high communication overhead. Moreover …