Generative ai and large language models for cyber security: All insights you need

MA Ferrag, F Alwahedi, A Battah, B Cherif… - Available at SSRN …, 2024 - papers.ssrn.com
This paper provides a comprehensive review of the future of cybersecurity through
Generative AI and Large Language Models (LLMs). We explore LLM applications across …

Foundational challenges in assuring alignment and safety of large language models

U Anwar, A Saparov, J Rando, D Paleka… - arxiv preprint arxiv …, 2024 - arxiv.org
This work identifies 18 foundational challenges in assuring the alignment and safety of large
language models (LLMs). These challenges are organized into three different categories …

Unraveling Attacks to Machine Learning-Based IoT Systems: A Survey and the Open Libraries Behind Them

C Liu, B Chen, W Shao, C Zhang… - IEEE Internet of …, 2024 - ieeexplore.ieee.org
The advent of the Internet of Things (IoT) has brought forth an era of unprecedented
connectivity, with an estimated 80 billion smart devices expected to be in operation by the …

Badclip: Dual-embedding guided backdoor attack on multimodal contrastive learning

S Liang, M Zhu, A Liu, B Wu, X Cao… - Proceedings of the …, 2024 - openaccess.thecvf.com
While existing backdoor attacks have successfully infected multimodal contrastive learning
models such as CLIP they can be easily countered by specialized backdoor defenses for …

Prompt-specific poisoning attacks on text-to-image generative models

S Shan, W Ding, J Passananti, H Zheng… - arxiv preprint arxiv …, 2023 - arxiv.org
Data poisoning attacks manipulate training data to introduce unexpected behaviors into
machine learning models at training time. For text-to-image generative models with massive …

Prompt injection attacks and defenses in llm-integrated applications

Y Liu, Y Jia, R Geng, J Jia, NZ Gong - arxiv preprint arxiv:2310.12815, 2023 - arxiv.org
Large Language Models (LLMs) are increasingly deployed as the backend for a variety of
real-world applications called LLM-Integrated Applications. Multiple recent works showed …

Backdooring multimodal learning

X Han, Y Wu, Q Zhang, Y Zhou, Y Xu… - … IEEE Symposium on …, 2024 - ieeexplore.ieee.org
Deep Neural Networks (DNNs) are vulnerable to backdoor attacks, which poison the training
set to alter the model prediction over samples with a specific trigger. While existing efforts …

Revisiting backdoor attacks against large vision-language models

S Liang, J Liang, T Pang, C Du, A Liu… - arxiv preprint arxiv …, 2024 - arxiv.org
Instruction tuning enhances large vision-language models (LVLMs) but raises security risks
through potential backdoor attacks due to their openness. Previous backdoor studies focus …

Large language model supply chain: A research agenda

S Wang, Y Zhao, X Hou, H Wang - ACM Transactions on Software …, 2024 - dl.acm.org
The rapid advancement of large language models (LLMs) has revolutionized artificial
intelligence, introducing unprecedented capabilities in natural language processing and …

Test-time backdoor attacks on multimodal large language models

D Lu, T Pang, C Du, Q Liu, X Yang, M Lin - arxiv preprint arxiv …, 2024 - arxiv.org
Backdoor attacks are commonly executed by contaminating training data, such that a trigger
can activate predetermined harmful effects during the test phase. In this work, we present …