A survey of machine unlearning

TT Nguyen, TT Huynh, Z Ren, PL Nguyen… - arXiv preprint arXiv …, 2022 - arxiv.org
Today, computer systems hold large amounts of personal data. Yet while such an
abundance of data allows breakthroughs in artificial intelligence, and especially machine …

MUSE: Machine unlearning six-way evaluation for language models

W Shi, J Lee, Y Huang, S Malladi, J Zhao… - arXiv preprint arXiv …, 2024 - arxiv.org
Language models (LMs) are trained on vast amounts of text data, which may include private
and copyrighted content. Data owners may request the removal of their data from a trained …

To generate or not? Safety-driven unlearned diffusion models are still easy to generate unsafe images... for now

Y Zhang, J Jia, X Chen, A Chen, Y Zhang, J Liu… - … on Computer Vision, 2024 - Springer
The recent advances in diffusion models (DMs) have revolutionized the generation of
realistic and complex images. However, these models also introduce potential safety …

Challenging forgets: Unveiling the worst-case forget sets in machine unlearning

C Fan, J Liu, A Hero, S Liu - European Conference on Computer Vision, 2024 - Springer
The trustworthy machine learning (ML) community is increasingly recognizing the crucial
need for models capable of selectively 'unlearning' data points after training. This leads to the …

Machine unlearning in generative ai: A survey

Z Liu, G Dou, Z Tan, Y Tian, M Jiang - arXiv preprint arXiv:2407.20516, 2024 - arxiv.org
Generative AI technologies have been deployed in many places, such as (multimodal) large
language models and vision generative models. Their remarkable performance should be …

Scissorhands: Scrub data influence via connection sensitivity in networks

J Wu, M Harandi - European Conference on Computer Vision, 2024 - Springer
Machine unlearning has become a pivotal task to erase the influence of data from a
trained model. It adheres to recent data regulation standards and enhances the privacy and …

CURE4Rec: A benchmark for recommendation unlearning with deeper influence

C Chen, J Zhang, Y Zhang, L Zhang, L Lyu, Y Li… - arXiv preprint arXiv …, 2024 - arxiv.org
With increasing privacy concerns in artificial intelligence, regulations have mandated the
right to be forgotten, granting individuals the right to withdraw their data from models …

Replication in visual diffusion models: A survey and outlook

W Wang, Y Sun, Z Yang, Z Hu, Z Tan… - arXiv preprint arXiv …, 2024 - arxiv.org
Visual diffusion models have revolutionized the field of creative AI, producing high-quality
and diverse content. However, they inevitably memorize training images or videos …

WAGLE: Strategic weight attribution for effective and modular unlearning in large language models

J Jia, J Liu, Y Zhang, P Ram, N Baracaldo… - arXiv preprint arXiv …, 2024 - arxiv.org
The need for effective unlearning mechanisms in large language models (LLMs) is
increasingly urgent, driven by the necessity to adhere to data regulations and foster ethical …

Efficient backdoor defense in multimodal contrastive learning: A token-level unlearning method for mitigating threats

K Liu, S Liang, J Liang, P Dai, X Cao - arXiv preprint arXiv:2409.19526, 2024 - arxiv.org
Multimodal contrastive learning uses various data modalities to create high-quality features,
but its reliance on extensive data sources on the Internet makes it vulnerable to backdoor …