Bag of tricks for training data extraction from language models

W Yu, T Pang, Q Liu, C Du, B Kang… - International …, 2023 - proceedings.mlr.press
With the advance of language models, privacy protection is receiving more attention.
Training data extraction is therefore of great importance, as it can serve as a potential tool to …

Llm-pbe: Assessing data privacy in large language models

Q Li, J Hong, C Xie, J Tan, R Xin, J Hou, X Yin… - arXiv preprint arXiv …, 2024 - arxiv.org
Large Language Models (LLMs) have become integral to numerous domains, significantly
advancing applications in data management, mining, and analysis. Their profound …

Multi-P²A: A Multi-perspective Benchmark on Privacy Assessment for Large Vision-Language Models

J Zhang, X Cao, Z Han, S Shan, X Chen - arXiv preprint arXiv:2412.19496, 2024 - arxiv.org
Large Vision-Language Models (LVLMs) exhibit impressive potential across various tasks
but also face significant privacy risks, limiting their practical applications. Current research …

Privacy-Engineered Value Decomposition Networks for Cooperative Multi-Agent Reinforcement Learning

P Gohari, M Hale, U Topcu - 2023 62nd IEEE Conference on …, 2023 - ieeexplore.ieee.org
In cooperative multi-agent reinforcement learning (Co-MARL), a team of agents must jointly
optimize the team's long-term rewards to learn a designated task. Optimizing rewards as a …

[PDF] 4.5 Privacy enhancing technologies

M De Cock, Z Erkin… - Privacy in Speech …, 2022 - researchportal.vub.be
Privacy-enhancing technologies (PETs) provide technical building blocks for achieving
privacy by design and can be defined as technologies that embody fundamental data …