Sociotechnical safeguards for genomic data privacy
Recent developments in a variety of sectors, including health care, research and the direct-to-consumer industry, have led to a dramatic increase in the amount of genomic data that …
Privacy challenges and research opportunities for genomic data sharing
L Bonomi, Y Huang, L Ohno-Machado - Nature genetics, 2020 - nature.com
The sharing of genomic data holds great promise in advancing precision medicine and
providing personalized treatments and other types of interventions. However, these …
A survey of machine unlearning
Today, computer systems hold large amounts of personal data. Yet while such an
abundance of data allows breakthroughs in artificial intelligence, and especially machine …
Comprehensive privacy analysis of deep learning: Passive and active white-box inference attacks against centralized and federated learning
Deep neural networks are susceptible to various inference attacks as they remember
information about their training data. We design white-box inference attacks to perform a …
Privacy risk in machine learning: Analyzing the connection to overfitting
Machine learning algorithms, when applied to sensitive data, pose a distinct threat to
privacy. A growing body of prior work demonstrates that models produced by these …
Machine learning with membership privacy using adversarial regularization
Machine learning models leak a significant amount of information about their training sets
through their predictions. This is a serious privacy concern for the users of machine learning …
Stolen memories: Leveraging model memorization for calibrated white-box membership inference
K Leino, M Fredrikson - 29th USENIX security symposium (USENIX …, 2020 - usenix.org
Membership inference (MI) attacks exploit the fact that machine learning algorithms
sometimes leak information about their training data through the learned model. In this work …
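The membership-inference entries above all turn on the same leakage: a trained model behaves differently on records it has seen. As a rough illustration (not the calibrated white-box attack of Leino and Fredrikson), a minimal black-box sketch simply thresholds the model's confidence on the true label; every name and number below is illustrative.

```python
# Minimal black-box membership-inference sketch (confidence thresholding).
# Illustrates the general leakage studied above; NOT the white-box method
# of the paper. All values are illustrative.
import numpy as np

def best_threshold(member_conf, nonmember_conf):
    """Pick the threshold that best separates known members from known
    non-members (the attacker's calibration set)."""
    candidates = np.unique(np.concatenate([member_conf, nonmember_conf]))
    accs = [((member_conf >= t).mean() + (nonmember_conf < t).mean()) / 2
            for t in candidates]
    return candidates[int(np.argmax(accs))]

def infer_membership(target_conf, threshold):
    """Guess 'member' when the model is unusually confident on the true label."""
    return target_conf >= threshold

# Hypothetical usage: target-model confidences on records the attacker knows
# to be in / out of the training set, then on unknown records.
member_conf = np.array([0.99, 0.97, 0.95, 0.90])
nonmember_conf = np.array([0.70, 0.55, 0.85, 0.60])
t = best_threshold(member_conf, nonmember_conf)
print(infer_membership(np.array([0.96, 0.62]), t))  # -> [ True False]
```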
Model inversion attacks that exploit confidence information and basic countermeasures
Machine-learning (ML) algorithms are increasingly utilized in privacy-sensitive applications
such as predicting lifestyle choices, making medical diagnoses, and facial recognition. In a …
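The confidence-exploiting idea can be sketched as gradient ascent on an input that maximizes the model's score for a target class; the toy softmax model and its weights below are assumptions for illustration, not the paper's facial-recognition setting.

```python
# Minimal model-inversion sketch: ascend the input to maximize the model's
# confidence for a target class, recovering a class-representative input.
# The "model" here is a toy softmax classifier whose parameters the attacker
# knows (an assumption for illustration).
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(10, 64))   # toy class weights: 10 classes, 64 features
b = np.zeros(10)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def confidence(x, cls):
    return softmax(W @ x + b)[cls]

def invert(cls, steps=200, lr=0.1):
    """Gradient ascent on x to maximize log-confidence for `cls`."""
    x = np.zeros(64)
    for _ in range(steps):
        p = softmax(W @ x + b)
        grad = W[cls] - p @ W      # d log p[cls] / dx for a softmax-linear model
        x += lr * grad
    return x

x_rec = invert(cls=3)
print(round(confidence(x_rec, 3), 3))  # confidence for class 3, pushed toward 1.0
```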
SoK: Secure aggregation based on cryptographic schemes for federated learning
Secure aggregation consists of computing the sum of data collected from multiple sources
without disclosing these individual inputs. Secure aggregation has been found useful for …
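A minimal sketch of the pairwise-masking idea behind many of the surveyed schemes, assuming each pair of clients can already agree on a shared random mask; dropout handling, key agreement, and secret sharing from full protocols are omitted.

```python
# Minimal secure-aggregation sketch with pairwise additive masks: for each
# client pair, one adds a shared random mask and the other subtracts it, so
# the masks cancel in the sum and the server learns only the total.
# Client names, values, and the modulus are illustrative.
import itertools, secrets

MOD = 2**32

def masked_updates(values):
    """values: dict client_id -> integer update (already quantized)."""
    clients = sorted(values)
    masked = {c: values[c] % MOD for c in clients}
    for i, j in itertools.combinations(clients, 2):
        # In a real protocol this mask would come from a shared key agreement.
        m = secrets.randbelow(MOD)
        masked[i] = (masked[i] + m) % MOD
        masked[j] = (masked[j] - m) % MOD
    return masked

def aggregate(masked):
    return sum(masked.values()) % MOD

updates = {"alice": 5, "bob": 7, "carol": 11}
print(aggregate(masked_updates(updates)))  # 23: the sum, with no single input revealed
```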
Towards making systems forget with machine unlearning
Today's systems produce a rapidly exploding amount of data, and the data further derives
more data, forming a complex data propagation network that we call the data's lineage …
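One way to make forgetting cheap, in the spirit of summation-based unlearning, is to keep the model as aggregate statistics so a record's contribution can simply be subtracted; the toy counting model below is an illustration, not the lineage system the abstract describes.

```python
# Toy "forgetting" sketch: the model stores only aggregate counts, so removing
# one record is a constant-time subtraction rather than full retraining.
from collections import defaultdict

class CountModel:
    def __init__(self):
        self.class_counts = defaultdict(int)
        self.feature_counts = defaultdict(lambda: defaultdict(int))

    def learn(self, features, label):
        self.class_counts[label] += 1
        for f in features:
            self.feature_counts[label][f] += 1

    def unlearn(self, features, label):
        # Subtract exactly what `learn` added; other data is untouched.
        self.class_counts[label] -= 1
        for f in features:
            self.feature_counts[label][f] -= 1

m = CountModel()
m.learn({"fever", "cough"}, "flu")
m.learn({"cough"}, "cold")
m.unlearn({"fever", "cough"}, "flu")   # the first record is forgotten
print(dict(m.class_counts))            # {'flu': 0, 'cold': 1}
```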