When LLMs meet cybersecurity: A systematic literature review

J Zhang, H Bu, H Wen, Y Liu, H Fei… - …, 2025 - cybersecurity.springeropen.com
The rapid development of large language models (LLMs) has opened new avenues across
various fields, including cybersecurity, which faces an evolving threat landscape and …

EIA: Environmental injection attack on generalist web agents for privacy leakage

Z Liao, L Mo, C Xu, M Kang, J Zhang, C Xiao… - arXiv preprint arXiv …, 2024 - arxiv.org
Generalist web agents have demonstrated remarkable potential in autonomously
completing a wide range of tasks on real websites, significantly boosting human productivity …

Multimodal large language models for phishing webpage detection and identification

J Lee, P Lim, B Hooi, DM Divakaran - arXiv preprint arXiv:2408.05941, 2024 - arxiv.org
To address the challenging problem of detecting phishing webpages, researchers have
developed numerous solutions, in particular those based on machine learning (ML) …

Security matrix for multimodal agents on mobile devices: A systematic and proof of concept study

Y Yang, X Yang, S Li, C Lin, Z Zhao, C Shen… - arXiv preprint arXiv …, 2024 - arxiv.org
The rapid progress in the reasoning capability of the Multi-modal Large Language Models
(MLLMs) has triggered the development of autonomous agent systems on mobile devices …

AdvWeb: Controllable black-box attacks on VLM-powered web agents

C Xu, M Kang, J Zhang, Z Liao, L Mo, M Yuan… - arXiv preprint arXiv …, 2024 - arxiv.org
Vision Language Models (VLMs) have revolutionized the creation of generalist web agents,
empowering them to autonomously complete diverse tasks on real-world websites, thereby …

AdaptiveBackdoor: Backdoored language model agents that detect human overseers

H Wang, R Zhong, J Wen… - ICML 2024 Next …, 2024 - openreview.net
As humans grant language model (LM) agents more access to their machines, we speculate
a new form of cyber attack, AdaptiveBackdoor, where an LM agent is backdoored to detect …

FATH: Authentication-based Test-time Defense against Indirect Prompt Injection Attacks

J Wang, F Wu, W Li, J Pan, E Suh, ZM Mao… - arXiv preprint arXiv …, 2024 - arxiv.org
Large language models (LLMs) have been widely deployed as the backbone with additional
tools and text information for real-world applications. However, integrating external …

Towards Action Hijacking of Large Language Model-based Agent

Y Zhang, K Chen, X Jiang, Y Sun, R Wang… - arXiv preprint arXiv …, 2024 - arxiv.org
In the past few years, intelligent agents powered by large language models (LLMs) have
achieved remarkable progress in performing complex tasks. These LLM-based agents …

SoK: Unifying Cybersecurity and Cybersafety of Multimodal Foundation Models with an Information Theory Approach

R Sun, J Chang, H Pearce, C Xiao, B Li, Q Wu… - arXiv preprint arXiv …, 2024 - arxiv.org
Multimodal foundation models (MFMs) represent a significant advancement in artificial
intelligence, combining diverse data modalities to enhance learning and understanding …

AEIA-MN: Evaluating the Robustness of Multimodal LLM-Powered Mobile Agents Against Active Environmental Injection Attacks

Y Chen, X Hu, K Yin, J Li, S Zhang - arXiv preprint arXiv:2502.13053, 2025 - arxiv.org
As researchers continuously optimize AI agents to perform tasks more effectively within
operating systems, they often neglect to address the critical need for enabling these agents …