Evaluations of machine learning privacy defenses are misleading
Empirical defenses for machine learning privacy forgo the provable guarantees of
differential privacy in the hope of achieving higher utility while resisting realistic adversaries …
Low-Cost High-Power Membership Inference Attacks
Membership inference attacks aim to detect if a particular data point was used in training a
model. We design a novel statistical test to perform robust membership inference attacks …
The 2010 Census Confidentiality Protections Failed, Here's How and Why
Using only 34 published tables, we reconstruct five variables (census block, sex, age, race,
and ethnicity) in the confidential 2010 Census person records. Using the 38-bin age variable …
Privacy Analyses in Machine Learning
J Ye - Proceedings of the 2024 on ACM SIGSAC Conference …, 2024 - dl.acm.org
Machine learning models sometimes memorize sensitive training data features, posing
privacy risks. To control such privacy risks, Dwork et al. proposed the definition of differential …
Provable Privacy Attacks on Trained Shallow Neural Networks
We study what provable privacy attacks can be shown on trained, 2-layer ReLU neural
networks. We explore two types of attacks: data reconstruction attacks and membership …
Do Parameters Reveal More than Loss for Membership Inference?
Membership inference attacks aim to infer whether an individual record was used to train a
model, serving as a key tool for disclosure auditing. While such evaluations are useful to …
Towards Explainable Artificial Intelligence (XAI): A Data Mining Perspective
Given the complexity and lack of transparency in deep neural networks (DNNs), extensive
efforts have been made to make these systems more interpretable or explain their behaviors …
WaKA: Data Attribution using K-Nearest Neighbors and Membership Privacy Principles
P Mesana, C Bénesse, H Lautraite, G Caporossi… - arxiv preprint arxiv …, 2024 - arxiv.org
In this paper, we introduce WaKA (Wasserstein K-nearest neighbors Attribution), a novel
attribution method that leverages principles from the LiRA (Likelihood Ratio Attack) …
CARSI II: A Context-Driven Intelligent User Interface
Modern automotive infotainment systems offer a complex and wide array of controls and
features through various interaction methods. However, such complexity can distract the …
[PDF][PDF] A Data-Centric Analysis of Membership Inference Attacks
H Ito, J Jälkö - 2024 - helda.helsinki.fi
Master's thesis, Master's Programme in Data Science, by Hibiki Ito …