Differentially private natural language models: Recent advances and future directions

L Hu, I Habernal, L Shen, D Wang - arXiv preprint arXiv:2301.09112, 2023 - arxiv.org
Recent developments in deep learning have led to great success in various natural
language processing (NLP) tasks. However, these applications may involve data that …

Prompt-SAW: Leveraging relation-aware graphs for textual prompt compression

MA Ali, Z Li, S Yang, K Cheng, Y Cao, T Huang… - arXiv preprint arXiv …, 2024 - arxiv.org
Large Language Models (LLMs) have shown exceptional abilities across a range of
natural language processing tasks. While prompting is a crucial tool for LLM inference, we …

Leveraging logical rules in knowledge editing: A cherry on the top

K Cheng, MA Ali, S Yang, G Lin, Y Zhai, H Fei… - arXiv preprint arXiv …, 2024 - arxiv.org
Multi-hop Question Answering (MQA) under knowledge editing (KE) is a key challenge in
Large Language Models (LLMs). While best-performing solutions in this domain use a plan …

Advancing differential privacy: Where we are now and future directions for real-world deployment

R Cummings, D Desfontaines, D Evans… - arXiv preprint arXiv …, 2023 - arxiv.org
In this article, we present a detailed review of current practices and state-of-the-art
methodologies in the field of differential privacy (DP), with a focus on advancing DP's …

Multi-hop question answering under temporal knowledge editing

K Cheng, G Lin, H Fei, L Yu, MA Ali, L Hu… - arXiv preprint arXiv …, 2024 - arxiv.org
Multi-hop question answering (MQA) under knowledge editing (KE) has garnered significant
attention in the era of large language models. However, existing models for MQA under KE …

Dialectical alignment: Resolving the tension of 3H and security threats of LLMs

S Yang, J Su, H Jiang, M Li, K Cheng, MA Ali… - arXiv preprint arXiv …, 2024 - arxiv.org
With the rise of large language models (LLMs), ensuring they embody the principles of being
helpful, honest, and harmless (3H), known as Human Alignment, becomes crucial. While …

Private Language Models via Truncated Laplacian Mechanism

T Huang, T Yang, I Habernal, L Hu, D Wang - arXiv preprint arXiv …, 2024 - arxiv.org
Deep learning models for NLP tasks are prone to variants of privacy attacks. To prevent
privacy leakage, researchers have investigated word-level perturbations, relying on the …
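The abstract points to word-level perturbation via a truncated Laplacian mechanism. The sketch below only illustrates the general idea: drawing zero-mean Laplacian noise truncated to a bounded interval by inverse-CDF sampling and adding it to a hypothetical word embedding. The scale `b`, truncation bound `A`, and the embedding are placeholder assumptions, not the paper's calibrated mechanism.

```python
# Minimal sketch (assumed parameters, not the paper's exact mechanism):
# bounded word-level perturbation with truncated Laplacian noise.
import numpy as np

def laplace_cdf(x, b):
    # CDF of the zero-mean Laplace distribution with scale b
    return np.where(x < 0, 0.5 * np.exp(x / b), 1.0 - 0.5 * np.exp(-x / b))

def laplace_icdf(u, b):
    # Inverse CDF of the zero-mean Laplace distribution with scale b
    return np.where(u < 0.5, b * np.log(2.0 * u), -b * np.log(2.0 * (1.0 - u)))

def truncated_laplace(size, b, A, rng=None):
    # Restrict the uniform draws to the CDF mass on [-A, A], then invert
    rng = rng or np.random.default_rng()
    lo, hi = laplace_cdf(np.array(-A), b), laplace_cdf(np.array(A), b)
    u = rng.uniform(lo, hi, size=size)
    return laplace_icdf(u, b)

# Perturb a hypothetical 300-dimensional word embedding with bounded noise
emb = np.random.default_rng(1).standard_normal(300)
noisy_emb = emb + truncated_laplace(emb.shape, b=0.5, A=2.0)
```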

Generalized Eigenvalue Problems with Generative Priors

Z Liu, W Li, J Chen - arXiv preprint arXiv:2411.01326, 2024 - arxiv.org
Generalized eigenvalue problems (GEPs) find applications in various fields of science and
engineering. For example, principal component analysis, Fisher's discriminant analysis, and …
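Since the abstract frames principal component analysis and Fisher's discriminant analysis as instances of the generalized eigenvalue problem A v = λ B v, a minimal SciPy sketch of that setup may be useful; the matrices here are random placeholders, and the paper's generative-prior machinery is not shown.

```python
# Minimal sketch: solving a generalized eigenvalue problem A v = lambda B v
# with SciPy (placeholder matrices, not the paper's setting).
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))
A = X.T @ X / 200                                # covariance-like symmetric matrix
B = np.eye(10) + 0.1 * np.diag(rng.random(10))   # symmetric positive definite

# eigh with the b= argument solves A v = lambda B v for symmetric A and SPD B
eigvals, eigvecs = eigh(A, b=B)
top_direction = eigvecs[:, -1]                   # eigenvector for the largest eigenvalue
print(eigvals[-1], top_direction)
```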

Differentially Private Sliced Inverse Regression: Minimax Optimality and Algorithm

X Xia, L Zhang, Z Cai - arXiv preprint arXiv:2401.08150, 2024 - arxiv.org
Privacy preservation has become a critical concern in high-dimensional data analysis due to
the growing prevalence of data-driven applications. Proposed by Li (1991), sliced inverse …
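For readers unfamiliar with sliced inverse regression (Li, 1991), the sketch below shows the basic non-private estimator: slice the response, average the standardized covariates within each slice, and take the top eigenvectors of the slice-mean covariance. An ad-hoc Gaussian perturbation is added only to suggest the privacy flavor; the noise scale is a placeholder, not the paper's minimax-optimal calibration.

```python
# Rough sketch of sliced inverse regression with an ad-hoc Gaussian
# perturbation (assumed noise scale, not the paper's DP algorithm).
import numpy as np

def sir_directions(X, y, n_slices=10, n_dirs=2, noise_scale=0.0, rng=None):
    rng = rng or np.random.default_rng()
    n, p = X.shape
    # Standardize the covariates so their sample covariance is the identity
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / n
    L = np.linalg.cholesky(np.linalg.inv(cov))
    Z = Xc @ L
    # Slice the response into roughly equal-sized bins
    order = np.argsort(y)
    slices = np.array_split(order, n_slices)
    # Weighted covariance of the slice means of the standardized covariates
    M = np.zeros((p, p))
    for idx in slices:
        m = Z[idx].mean(axis=0)
        M += (len(idx) / n) * np.outer(m, m)
    # Ad-hoc symmetric Gaussian perturbation (placeholder for a DP mechanism)
    if noise_scale > 0:
        E = rng.normal(scale=noise_scale, size=(p, p))
        M += (E + E.T) / 2
    # Top eigenvectors, mapped back to the original coordinates
    eigvals, eigvecs = np.linalg.eigh(M)
    return L @ eigvecs[:, -n_dirs:]

# Toy usage: a single-index model y = f(x @ beta) + noise
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 6))
beta = np.array([1.0, -1.0, 0.5, 0.0, 0.0, 0.0])
y = np.sin(X @ beta) + 0.1 * rng.standard_normal(500)
print(sir_directions(X, y, noise_scale=0.05))
```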