FLIP: A provable defense framework for backdoor mitigation in federated learning
Federated Learning (FL) is a distributed learning paradigm that enables different parties to
train a model together for high quality and strong privacy protection. In this scenario …
Reinforcement learning-based black-box model inversion attacks
Model inversion attacks are a type of privacy attack that reconstructs private data
used to train a machine learning model, solely by accessing the model. Recently, white-box …
A closer look at GAN priors: Exploiting intermediate features for enhanced model inversion attacks
Model Inversion (MI) attacks aim to reconstruct privacy-sensitive training data from
released models by utilizing output information, raising extensive concerns about the …
" Get in Researchers; We're Measuring Reproducibility": A Reproducibility Study of Machine Learning Papers in Tier 1 Security Conferences
Reproducibility is crucial to the advancement of science; it strengthens confidence in
seemingly contradictory results and expands the boundaries of known discoveries …
Privacy leakage on DNNs: A survey of model inversion attacks and defenses
Deep Neural Networks (DNNs) have revolutionized various domains with their exceptional
performance across numerous applications. However, Model Inversion (MI) attacks, which …
Label-only model inversion attacks via knowledge transfer
In a model inversion (MI) attack, an adversary abuses access to a machine learning (ML)
model to infer and reconstruct private training data. Remarkable progress has been made in …
Improving robustness to model inversion attacks via sparse coding architectures
Recent model inversion attack algorithms permit adversaries to reconstruct a neural
network's private and potentially sensitive training data by repeatedly querying the network …
All Rivers Run to the Sea: Private Learning with Asymmetric Flows
Data privacy is of great concern in cloud machine-learning service platforms when sensitive
data are exposed to service providers. While private computing environments (e.g., secure …
A gan-based defense framework against model inversion attacks
With the development of deep learning, deep neural network (DNN)-based applications have
become an indispensable aspect of daily life. However, recent studies have shown that …
MIBench: A comprehensive benchmark for model inversion attack and defense
Model Inversion (MI) attacks aim at leveraging the output information of target models to
reconstruct privacy-sensitive training data, raising widespread concerns on privacy threats of …