Large language models (LLMs) trained on web-scale datasets raise substantial concerns regarding permissible data usage. One major question is whether these models "memorize" …
Language models (LMs) derive their capabilities from extensive training on diverse data, including copyrighted material. These models can memorize and generate content …
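Whether a model has memorized a specific passage is commonly probed by prompting it with a prefix of the passage and checking whether greedy decoding reproduces the known continuation verbatim. The following is a minimal sketch of such a probe using the Hugging Face transformers library; the model name ("gpt2"), the prefix/suffix lengths, and the is_memorized helper are illustrative assumptions rather than the procedure of any paper listed here.

# Minimal verbatim-memorization probe (sketch): prompt with a prefix of a
# candidate training passage and test whether greedy decoding reproduces the
# known continuation. Model name and token counts are placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; substitute the LM under study
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

def is_memorized(passage: str, prefix_tokens: int = 50, suffix_tokens: int = 50) -> bool:
    """True if greedy decoding of the prefix reproduces the next suffix_tokens verbatim."""
    ids = tokenizer(passage, return_tensors="pt").input_ids[0]
    prefix = ids[:prefix_tokens]
    suffix = ids[prefix_tokens:prefix_tokens + suffix_tokens]
    out = model.generate(
        prefix.unsqueeze(0),
        max_new_tokens=len(suffix),
        do_sample=False,  # greedy decoding: the strictest notion of extraction
    )
    continuation = out[0, len(prefix):]
    return continuation.tolist() == suffix.tolist()

A passage flagged by this probe is only evidence of exact, extractable memorization; weaker or approximate forms of memorization require fuzzier matching than the exact token comparison used here.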
Large language models (LLMs) have grown to encompass extensive knowledge across diverse domains, yet controlling what an LLM should not know is important …
B Yan, K Li, M Xu, Y Dong, Y Zhang, Z Ren… - arXiv preprint arXiv …, 2024 - arxiv.org
Large language models (LLMs) are complex artificial intelligence systems capable of understanding, generating and translating human language. They learn language patterns …
Current research in adversarial robustness of LLMs focuses on discrete input manipulations in the natural language space, which can be directly transferred to …
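As a concrete illustration of a discrete input manipulation in the natural language space, the toy sketch below runs a random search over appended suffix tokens and keeps whichever suffix most increases the model's loss on a reference continuation. It is a deliberately simplified stand-in rather than the attack studied here; the model name, the loss_on and random_suffix_attack helpers, and the search budget are all assumptions.

# Toy discrete attack (sketch): random search over appended suffix tokens,
# keeping the suffix that most increases the loss on a reference continuation.
# Real attacks use far stronger, often gradient-guided, token searches.
import random
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

def loss_on(prompt: str, target: str) -> float:
    """LM loss of target given prompt (prompt tokens masked; tokenization boundary effects ignored)."""
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    full_ids = tokenizer(prompt + target, return_tensors="pt").input_ids
    labels = full_ids.clone()
    labels[:, : prompt_ids.shape[1]] = -100  # ignore loss on the prompt itself
    with torch.no_grad():
        return model(full_ids, labels=labels).loss.item()

def random_suffix_attack(prompt: str, target: str, steps: int = 50) -> str:
    """Greedily grow a token suffix that raises the loss on the reference target."""
    best_suffix, best_loss = "", loss_on(prompt, target)
    for _ in range(steps):
        token = tokenizer.decode([random.randrange(tokenizer.vocab_size)])
        cand_loss = loss_on(prompt + best_suffix + token, target)
        if cand_loss > best_loss:
            best_suffix, best_loss = best_suffix + token, cand_loss
    return best_suffix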
Large Language Models (LLMs) often memorize sensitive, private, or copyrighted data during pre-training. LLM unlearning aims to eliminate the influence of undesirable data from …
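The snippet does not commit to a particular unlearning method; one widely used baseline in the unlearning literature is gradient ascent on the examples to be forgotten. The sketch below illustrates only that baseline; the model name, forget set, learning rate, and unlearn_step helper are placeholders, not the method of this paper.

# Gradient-ascent unlearning baseline (sketch): take optimizer steps that
# *increase* the LM loss on the forget set, pushing down the probability of
# the undesired text. Names and hyperparameters are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(model_name)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

forget_texts = ["<text the model should no longer reproduce>"]  # placeholder forget set

def unlearn_step(texts):
    """One gradient-ascent step: negate and minimize the LM loss on the forget examples."""
    batch = tokenizer(texts, return_tensors="pt", padding=True, truncation=True)
    # A real implementation would mask padding tokens out of the labels.
    outputs = model(**batch, labels=batch["input_ids"])
    loss = -outputs.loss  # ascent on the forget loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return outputs.loss.item()

for _ in range(3):
    print("forget-set loss:", unlearn_step(forget_texts))

In practice, plain ascent quickly degrades general capability, which is why published methods typically pair the forget-set term with a retain-set or KL regularizer; that refinement is omitted from this sketch.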