Generative AI and large language models for cyber security: All insights you need
This paper provides a comprehensive review of the future of cybersecurity through
Generative AI and Large Language Models (LLMs). We explore LLM applications across …
Foundational challenges in assuring alignment and safety of large language models
This work identifies 18 foundational challenges in assuring the alignment and safety of large
language models (LLMs). These challenges are organized into three different categories …
Unraveling Attacks to Machine Learning-Based IoT Systems: A Survey and the Open Libraries Behind Them
The advent of the Internet of Things (IoT) has brought forth an era of unprecedented
connectivity, with an estimated 80 billion smart devices expected to be in operation by the …
BadCLIP: Dual-embedding guided backdoor attack on multimodal contrastive learning
While existing backdoor attacks have successfully infected multimodal contrastive learning
models such as CLIP, they can be easily countered by specialized backdoor defenses for …
Prompt-specific poisoning attacks on text-to-image generative models
Data poisoning attacks manipulate training data to introduce unexpected behaviors into
machine learning models at training time. For text-to-image generative models with massive …
Prompt injection attacks and defenses in LLM-integrated applications
Large Language Models (LLMs) are increasingly deployed as the backend for a variety of
real-world applications called LLM-Integrated Applications. Multiple recent works showed …
Backdooring multimodal learning
Deep Neural Networks (DNNs) are vulnerable to backdoor attacks, which poison the training
set to alter the model prediction over samples with a specific trigger. While existing efforts …
Revisiting backdoor attacks against large vision-language models
Instruction tuning enhances large vision-language models (LVLMs) but raises security risks
through potential backdoor attacks due to their openness. Previous backdoor studies focus …
Large language model supply chain: A research agenda
The rapid advancement of large language models (LLMs) has revolutionized artificial
intelligence, introducing unprecedented capabilities in natural language processing and …
Test-time backdoor attacks on multimodal large language models
Backdoor attacks are commonly executed by contaminating training data, such that a trigger
can activate predetermined harmful effects during the test phase. In this work, we present …