Verification of machine unlearning is fragile

B Zhang, Z Chen, C Shen, J Li - arXiv preprint arXiv:2408.00929, 2024 - arxiv.org
As privacy concerns escalate in the realm of machine learning, data owners now have the
option to utilize machine unlearning to remove their data from machine learning models …

Investigating the Feasibility of Mitigating Potential Copyright Infringement via Large Language Model Unlearning

G Dou - arXiv preprint arXiv:2412.18621, 2024 - arxiv.org
Pre-trained Large Language Models (LLMs) have demonstrated remarkable capabilities but
also pose risks by learning and generating copyrighted material, leading to significant legal …

Towards Understanding the Feasibility of Machine Unlearning

M Sarvmaili, H Sajjad, G Wu - arXiv preprint arXiv:2410.03043, 2024 - arxiv.org
In light of recent privacy regulations, machine unlearning has attracted significant attention
in the research community. However, current studies predominantly assess the overall …

Patient-Centered and Practical Privacy to Support AI for Healthcare

R Liu, HK Lee, SV Bhavani, X Jiang… - 2024 IEEE 6th …, 2024 - ieeexplore.ieee.org
The increasing integration of artificial intelligence (AI) in healthcare holds great promise for
enhancing patient care through predictive modeling and clinical decision support. However …

Rewind-to-delete: Certified machine unlearning for nonconvex functions

S Mu, D Klabjan - arXiv preprint arXiv:2409.09778, 2024 - arxiv.org
Machine unlearning algorithms aim to efficiently remove data from a model without
retraining it from scratch, in order to enforce data privacy, remove corrupted or outdated …

Contrastive unlearning: A contrastive approach to machine unlearning

HK Lee, Q Zhang, C Yang, J Lou, L Xiong - 2024 - openreview.net
Machine unlearning aims to eliminate the influence of a subset of training samples (i.e.,
unlearning samples) from a trained model. Effectively and efficiently removing the …