AirGapAgent: Protecting Privacy-Conscious Conversational Agents

E Bagdasarian, R Yi, S Ghalebikesabi… - Proceedings of the …, 2024 - dl.acm.org
The growing use of large language model (LLM)-based conversational agents to manage
sensitive user data raises significant privacy concerns. While these agents excel at …

Protecting Users From Themselves: Safeguarding Contextual Privacy in Interactions with Conversational Agents

IC Ngong, S Kadhe, H Wang, K Murugesan… - Workshop on Socially …, 2024 - openreview.net
Conversational agents are increasingly woven into individuals' personal lives, yet users
often underestimate the privacy risks involved. In this paper, based on the principles of …

CASE-Bench: Context-Aware Safety Evaluation Benchmark for Large Language Models

G Sun, X Zhan, S Feng, PC Woodland… - arXiv preprint arXiv …, 2025 - arxiv.org
Aligning large language models (LLMs) with human values is essential for their safe
deployment and widespread adoption. Current LLM safety benchmarks often focus solely on …

Permissive Information-Flow Analysis for Large Language Models

SA Siddiqui, R Gaonkar, B Köpf, D Krueger… - arXiv preprint arXiv …, 2024 - arxiv.org
Large Language Models (LLMs) are rapidly becoming commodity components of larger
software systems. This poses natural security and privacy problems: poisoned data retrieved …

Position: Contextual Integrity Washing for Language Models

Y Shvartzshnaider, V Duddu - arXiv preprint arXiv:2501.19173, 2025 - arxiv.org
The machine learning community is discovering Contextual Integrity (CI) as a useful framework
for assessing the privacy implications of large language models (LLMs). This is an encouraging …

AI Delegates with a Dual Focus: Ensuring Privacy and Strategic Self-Disclosure

X Chen, Z Zhang, F Yang, X Qin, C Du, X Cheng… - arXiv preprint arXiv …, 2024 - arxiv.org
Large language model (LLM)-based AI delegates are increasingly utilized to act on behalf of
users, assisting them with a wide range of tasks through conversational interfaces. Despite …