Security and privacy challenges of large language models: A survey
Large language models (LLMs) have demonstrated extraordinary capabilities and
contributed to multiple fields, such as generating and summarizing text, language …
Dataset distillation: A comprehensive review
The recent success of deep learning is largely attributed to the sheer volume of data used to
train deep neural networks. Despite this unprecedented success, the massive data …
Poisoning web-scale training datasets is practical
Deep learning models are often trained on distributed, web-scale datasets crawled from the
internet. In this paper, we introduce two new dataset poisoning attacks that intentionally …
Anti-backdoor learning: Training clean models on poisoned data
Backdoor attacks have emerged as a major security threat to deep neural networks (DNNs).
While existing defense methods have demonstrated promising results on detecting or …
Adversarial neuron pruning purifies backdoored deep models
As deep neural networks (DNNs) grow larger, their computational resource requirements
become enormous, which makes outsourcing training increasingly popular. Training in a third …
Invisible backdoor attack with sample-specific triggers
Backdoor attacks have recently posed a new security threat to the training process of deep
neural networks (DNNs). Attackers intend to inject hidden backdoors into DNNs such that the …
Lira: Learnable, imperceptible and robust backdoor attacks
Machine learning models have recently been shown to be vulnerable to backdoor
attacks, primarily due to the lack of transparency in black-box models such as deep neural …
Backdoor learning: A survey
Y Li, Y Jiang, Z Li, ST **
**-based backdoor attack
With the thriving of deep learning and the widespread practice of using pre-trained networks,
backdoor attacks have become an increasing security threat drawing many research …
Reflection backdoor: A natural backdoor attack on deep neural networks
Recent studies have shown that DNNs can be compromised by backdoor attacks crafted at
training time. A backdoor attack installs a backdoor into the victim model by injecting a …