DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models
Generative Pre-trained Transformer (GPT) models have exhibited exciting progress
in their capabilities, capturing the interest of practitioners and the public alike. Yet, while the …
Unique security and privacy threats of large language model: A comprehensive survey
With the rapid development of artificial intelligence, large language models (LLMs) have
made remarkable advancements in natural language processing. These models are trained …
Differentially private natural language models: Recent advances and future directions
Recent developments in deep learning have led to great success in various natural
language processing (NLP) tasks. However, these applications may involve data that …
Privacy in large language models: Attacks, defenses and future directions
The advancement of large language models (LLMs) has significantly enhanced the ability to
effectively tackle various downstream NLP tasks and unify these tasks into generative …
LLM-PBE: Assessing data privacy in large language models
Large Language Models (LLMs) have become integral to numerous domains, significantly
advancing applications in data management, mining, and analysis. Their profound …
Ring-A-Bell! How Reliable are Concept Removal Methods for Diffusion Models?
Diffusion models for text-to-image (T2I) synthesis, such as Stable Diffusion (SD), have
recently demonstrated exceptional capabilities for generating high-quality content. However …
Risk taxonomy, mitigation, and assessment benchmarks of large language model systems
Large language models (LLMs) have strong capabilities in solving diverse natural language
processing tasks. However, the safety and security issues of LLM systems have become the …
Privacy-preserving in-context learning with differentially private few-shot generation
We study the problem of in-context learning (ICL) with large language models (LLMs) on
private datasets. This scenario poses privacy risks, as LLMs may leak or regurgitate the …
DP-OPT: Make large language model your privacy-preserving prompt engineer
Large Language Models (LLMs) have emerged as dominant tools for various tasks,
particularly when tailored for a specific target by prompt tuning. Nevertheless, concerns …
PrivLM-Bench: A multi-level privacy evaluation benchmark for language models
The rapid development of language models (LMs) brings unprecedented accessibility and
usage for both models and users. On the one hand, powerful LMs achieve state-of-the-art …