Fedfed: Feature distillation against data heterogeneity in federated learning
Federated learning (FL) typically faces data heterogeneity, i.e., distribution shifts among
clients. Sharing clients' information has shown great potential in mitigating data …
Differentially private fine-tuning of language models
We give simpler, sparser, and faster algorithms for differentially private fine-tuning of large-
scale pre-trained language models, which achieve the state-of-the-art privacy versus utility …
Differentially private natural language models: Recent advances and future directions
Recent developments in deep learning have led to great success in various natural
language processing (NLP) tasks. However, these applications may involve data that …
On protecting the data privacy of large language models (llms): A survey
Large language models (LLMs) are complex artificial intelligence systems capable of
understanding, generating and translating human language. They learn language patterns …
Privacy in large language models: Attacks, defenses and future directions
The advancement of large language models (LLMs) has significantly enhanced the ability to
effectively tackle various downstream NLP tasks and unify these tasks into generative …
Privacy issues in large language models: A survey
S Neel, P Chang - arXiv preprint arXiv:2312.06717, 2023 - arxiv.org
This is the first survey of the active area of AI research that focuses on privacy issues in
Large Language Models (LLMs). Specifically, we focus on work that red-teams models to …
Preserving privacy in large language models: A survey on current threats and solutions
Large Language Models (LLMs) represent a significant advancement in artificial
intelligence, finding applications across various domains. However, their reliance on …
SoK: Cryptographic neural-network computation
We studied 53 privacy-preserving neural-network papers from 2016-2022 based on
cryptography (without trusted processors or differential privacy), 16 of which only use …
Privacy-preserving instructions for aligning large language models
Service providers of large language model (LLM) applications collect user instructions in the
wild and use them in further aligning LLMs with users' intentions. These instructions, which …
Privacy preserving prompt engineering: A survey
Pre-trained language models (PLMs) have demonstrated significant proficiency in solving a
wide range of general natural language processing (NLP) tasks. Researchers have …