A survey of machine unlearning
Today, computer systems hold large amounts of personal data. Yet while such an
abundance of data allows breakthroughs in artificial intelligence, and especially machine …
Muse: Machine unlearning six-way evaluation for language models
Language models (LMs) are trained on vast amounts of text data, which may include private
and copyrighted content. Data owners may request the removal of their data from a trained …
To generate or not? safety-driven unlearned diffusion models are still easy to generate unsafe images... for now
The recent advances in diffusion models (DMs) have revolutionized the generation of
realistic and complex images. However, these models also introduce potential safety …
Challenging forgets: Unveiling the worst-case forget sets in machine unlearning
The trustworthy machine learning (ML) community is increasingly recognizing the crucial
need for models capable of selectively 'unlearning' data points after training. This leads to the …
Machine unlearning in generative ai: A survey
Generative AI technologies have been deployed in many places, such as (multimodal) large
language models and vision generative models. Their remarkable performance should be …
Scissorhands: Scrub data influence via connection sensitivity in networks
Machine unlearning has become a pivotal task to erase the influence of data from a
trained model. It adheres to recent data regulation standards and enhances the privacy and …
CURE4Rec: A benchmark for recommendation unlearning with deeper influence
With increasing privacy concerns in artificial intelligence, regulations have mandated the
right to be forgotten, granting individuals the right to withdraw their data from models …
Replication in visual diffusion models: A survey and outlook
Visual diffusion models have revolutionized the field of creative AI, producing high-quality
and diverse content. However, they inevitably memorize training images or videos …
WAGLE: Strategic weight attribution for effective and modular unlearning in large language models
The need for effective unlearning mechanisms in large language models (LLMs) is
increasingly urgent, driven by the necessity to adhere to data regulations and foster ethical …
Efficient backdoor defense in multimodal contrastive learning: A token-level unlearning method for mitigating threats
Multimodal contrastive learning uses various data modalities to create high-quality features,
but its reliance on extensive data sources on the Internet makes it vulnerable to backdoor …