FLIP: A provable defense framework for backdoor mitigation in federated learning

K Zhang, G Tao, Q Xu, S Cheng, S An, Y Liu… - arXiv preprint arXiv …, 2022 - arxiv.org
Federated Learning (FL) is a distributed learning paradigm that enables different parties to
train a model together for high quality and strong privacy protection. In this scenario …

Reinforcement learning-based black-box model inversion attacks

G Han, J Choi, H Lee, J Kim - Proceedings of the IEEE/CVF …, 2023 - openaccess.thecvf.com
Model inversion attacks are a type of privacy attack that reconstructs private data
used to train a machine learning model, solely by accessing the model. Recently, white-box …

A closer look at GAN priors: Exploiting intermediate features for enhanced model inversion attacks

Y Qiu, H Fang, H Yu, B Chen, MK Qiu… - European Conference on …, 2024 - Springer
Model Inversion (MI) attacks aim to reconstruct privacy-sensitive training data from
released models by utilizing output information, raising extensive concerns about the …

" Get in Researchers; We're Measuring Reproducibility": A Reproducibility Study of Machine Learning Papers in Tier 1 Security Conferences

D Olszewski, A Lu, C Stillman, K Warren… - Proceedings of the …, 2023 - dl.acm.org
Reproducibility is crucial to the advancement of science; it strengthens confidence in
seemingly contradictory results and expands the boundaries of known discoveries …

Privacy leakage on DNNs: A survey of model inversion attacks and defenses

H Fang, Y Qiu, H Yu, W Yu, J Kong, B Chong… - arXiv preprint arXiv …, 2024 - arxiv.org
Deep Neural Networks (DNNs) have revolutionized various domains with their exceptional
performance across numerous applications. However, Model Inversion (MI) attacks, which …

Label-only model inversion attacks via knowledge transfer

BN Nguyen, K Chandrasegaran… - Advances in …, 2024 - proceedings.neurips.cc
In a model inversion (MI) attack, an adversary abuses access to a machine learning (ML)
model to infer and reconstruct private training data. Remarkable progress has been made in …

Improving robustness to model inversion attacks via sparse coding architectures

SV Dibbo, A Breuer, J Moore, M Teti - European Conference on Computer …, 2024 - Springer
Recent model inversion attack algorithms permit adversaries to reconstruct a neural
network's private and potentially sensitive training data by repeatedly querying the network …

All Rivers Run to the Sea: Private Learning with Asymmetric Flows

Y Niu, RE Ali, S Prakash… - Proceedings of the …, 2024 - openaccess.thecvf.com
Data privacy is of great concern in cloud machine-learning service platforms when sensitive
data are exposed to service providers. While private computing environments (e.g. secure …

A GAN-based defense framework against model inversion attacks

X Gong, Z Wang, S Li, Y Chen… - IEEE Transactions on …, 2023 - ieeexplore.ieee.org
With the development of deep learning, deep neural network (DNN)-based applications have
become an indispensable aspect of daily life. However, recent studies have shown that …

MIBench: A comprehensive benchmark for model inversion attack and defense

Y Qiu, H Yu, H Fang, W Yu, B Chen, X Wang… - arXiv preprint arXiv …, 2024 - arxiv.org
Model Inversion (MI) attacks aim at leveraging the output information of target models to
reconstruct privacy-sensitive training data, raising widespread concerns about privacy threats of …