Rethinking machine unlearning for large language models
We explore machine unlearning in the domain of large language models (LLMs), referred to
as LLM unlearning. This initiative aims to eliminate undesirable data influence (for example …
Negative preference optimization: From catastrophic collapse to effective unlearning
Large Language Models (LLMs) often memorize sensitive, private, or copyrighted data
during pre-training. LLM unlearning aims to eliminate the influence of undesirable data from …
Offset unlearning for large language models
Despite the strong capabilities of Large Language Models (LLMs) to acquire knowledge
from their training corpora, the memorization of sensitive information in the corpora such as …
Min-k%++: Improved baseline for detecting pre-training data from large language models
The problem of pre-training data detection for large language models (LLMs) has received
growing attention due to its implications in critical issues like copyright violation and test data …
Machine unlearning in generative AI: A survey
Generative AI technologies have been deployed in many places, such as (multimodal) large
language models and vision generative models. Their remarkable performance should be …
SOUL: Unlocking the power of second-order optimization for LLM unlearning
Large Language Models (LLMs) have highlighted the necessity of effective unlearning
mechanisms to comply with data regulations and ethical AI practices. LLM unlearning aims …
Towards efficient and effective unlearning of large language models for recommendation
Conclusion In this letter, we propose E2URec, the efficient and effective unlearning method
for LLMRec. Our method enables LLMRec to efficiently forget the specific data by only …
Large language model unlearning via embedding-corrupted prompts
Large language models (LLMs) have advanced to encompass extensive knowledge across
diverse domains. Yet controlling what a large language model should not know is important …
Pre-text: Training language models on private federated data in the age of LLMs
On-device training is currently the most common approach for training machine learning
(ML) models on private, distributed user data. Despite this, on-device training has several …
Unlocking memorization in large language models with dynamic soft prompting
Pretrained large language models (LLMs) have revolutionized natural language processing
(NLP) tasks such as summarization, question answering, and translation. However, LLMs …