Medical large language models are vulnerable to data-poisoning attacks
The adoption of large language models (LLMs) in healthcare demands a careful analysis of
their potential to spread false medical knowledge. Because LLMs ingest massive volumes of …
Test-time backdoor attacks on multimodal large language models
Backdoor attacks are commonly executed by contaminating training data, such that a trigger
can activate predetermined harmful effects during the test phase. In this work, we present …
Transferring backdoors between large language models by knowledge distillation
Backdoor attacks have been a serious vulnerability of Large Language Models
(LLMs). However, previous methods only reveal such risk in specific models, or present …
BAIT: Large Language Model Backdoor Scanning by Inverting Attack Target
Recent literature has shown that LLMs are vulnerable to backdoor attacks, where malicious
attackers inject a secret token sequence (i.e., trigger) into training prompts and enforce their …
SynGhost: Imperceptible and Universal Task-agnostic Backdoor Attack in Pre-trained Language Models
Pre-training has been a necessary phase for deploying pre-trained language models
(PLMs) to achieve remarkable performance in downstream tasks. However, we empirically …
Watch out for your agents! Investigating backdoor threats to LLM-based agents
Leveraging the rapid development of Large Language Models (LLMs), LLM-based agents
have been developed to handle various real-world applications, including finance …
TrojanRAG: Retrieval-Augmented Generation Can Be Backdoor Driver in Large Language Models
Large language models (LLMs) have raised concerns about potential security threats
despite their strong performance in Natural Language Processing (NLP). Backdoor attacks …
BrInstFlip: A Universal Tool for Attacking DNN-Based Power Line Fault Detection Models
Y Jiang, Y Xu, Z Liang, W Xu, T Dong… - 2024 IEEE/CIC …, 2024 - ieeexplore.ieee.org
Deep neural network (DNN) models are currently experiencing significant success
in domains like image classification. In the realm of power grids, there have been numerous …
FP-OCS: A Fingerprint Based Ownership Detection System for Insulator Fault Detection Model
In smart grids, the robustness and reliability of the transmission system depend on the
operational integrity of the insulators. The success of deep learning has facilitated the …