How to DP-fy ML: A practical guide to machine learning with differential privacy
Machine Learning (ML) models are ubiquitous in real-world applications and are a
constant focus of research. Modern ML models have become more complex, deeper, and …
On provable copyright protection for generative models
There is a growing concern that learned conditional generative models may output samples
that are substantially similar to some copyrighted data $ C $ that was in their training set. We …
Differentially private natural language models: Recent advances and future directions
Recent developments in deep learning have led to great success in various natural
language processing (NLP) tasks. However, these applications may involve data that …
Privacy side channels in machine learning systems
Most current approaches for protecting privacy in machine learning (ML) assume that
models exist in a vacuum. Yet, in reality, these models are part of larger systems that include …
Can public large language models help private cross-device federated learning?
We study (differentially) private federated learning (FL) of language models. The language
models in cross-device FL are relatively small and can be trained with meaningful formal …
Identifying and mitigating privacy risks stemming from language models: A survey
V Smith, AS Shamsabadi, C Ashurst… - arXiv preprint arXiv …, 2023 - arxiv.org
Large Language Models (LLMs) have shown greatly enhanced performance in recent years,
attributed to increased size and extensive training data. This advancement has led to …
Purifying large language models by ensembling a small language model
The emerging success of large language models (LLMs) heavily relies on collecting
abundant training data from external (untrusted) sources. Despite substantial efforts devoted …
ViP: A differentially private foundation model for computer vision
Artificial intelligence (AI) has seen a tremendous surge in capabilities thanks to the use of
foundation models trained on internet-scale data. On the flip side, the uncurated nature of …
Textfusion: Privacy-preserving pre-trained model inference via token fusion
Recently, more and more pre-trained language models are released as cloud services. This
allows users who lack computing resources to perform inference with a powerful model by …
Assessing privacy risks in language models: A case study on summarization tasks
Large language models have revolutionized the field of NLP by achieving state-of-the-art
performance on various tasks. However, there is a concern that these models may disclose …