A comprehensive survey of continual learning: theory, method and application

L Wang, X Zhang, H Su, J Zhu - IEEE Transactions on Pattern …, 2024 - ieeexplore.ieee.org
To cope with real-world dynamics, an intelligent system needs to incrementally acquire,
update, accumulate, and exploit knowledge throughout its lifetime. This ability, known as …

Scaling & shifting your features: A new baseline for efficient model tuning

D Lian, D Zhou, J Feng, X Wang - Advances in Neural …, 2022 - proceedings.neurips.cc
Existing fine-tuning methods either tune all parameters of the pre-trained model (full fine-
tuning), which is not efficient, or only tune the last linear layer (linear probing), which suffers …
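
The snippet contrasts full fine-tuning with linear probing; the title's scale-and-shift idea sits between the two. Below is a minimal sketch, assuming the method amounts to training per-channel scale and shift parameters on top of a frozen backbone (layer sizes and names are illustrative, not the paper's code):

```python
import torch
import torch.nn as nn

class SSFLayer(nn.Module):
    """Learnable per-channel scale and shift applied to frozen features (illustrative)."""
    def __init__(self, dim):
        super().__init__()
        self.gamma = nn.Parameter(torch.ones(dim))   # scale
        self.beta = nn.Parameter(torch.zeros(dim))   # shift

    def forward(self, x):
        return x * self.gamma + self.beta

# Freeze the backbone; only the scale/shift parameters and the head are trained.
backbone = nn.Sequential(nn.Linear(784, 256), nn.ReLU())
for p in backbone.parameters():
    p.requires_grad = False

model = nn.Sequential(backbone, SSFLayer(256), nn.Linear(256, 10))
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.Adam(trainable, lr=1e-3)
```

This keeps the tuned parameter count close to linear probing while still modulating intermediate features, which is the efficiency argument the snippet alludes to.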

Slimmable dataset condensation

S Liu, J Ye, R Yu, X Wang - … of the IEEE/CVF Conference on …, 2023 - openaccess.thecvf.com
Dataset distillation, also known as dataset condensation, aims to compress a large dataset
into a compact synthetic one. Existing methods perform dataset condensation by assuming a …
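
To make the condensation objective concrete, here is a toy sketch of one common formulation (distribution matching): learn a small synthetic set whose feature statistics match those of the real data. This illustrates dataset condensation in general, not the slimmable variant proposed in the paper; the feature dimensions and data are placeholders.

```python
import torch

torch.manual_seed(0)
real = torch.randn(1000, 32)                           # stand-in for real-data features
synthetic = torch.randn(10, 32, requires_grad=True)    # compact synthetic set to learn
opt = torch.optim.Adam([synthetic], lr=0.1)

for step in range(200):
    # Match the mean embedding of the synthetic set to that of the real data.
    loss = (synthetic.mean(0) - real.mean(0)).pow(2).sum()
    opt.zero_grad()
    loss.backward()
    opt.step()
```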

Machine unlearning: A comprehensive survey

W Wang, Z Tian, C Zhang, S Yu - arXiv preprint arXiv:2405.07406, 2024 - arxiv.org
As the right to be forgotten has been legislated worldwide, many studies attempt to design
unlearning mechanisms to protect users' privacy when they want to leave machine learning …

Detecting pretraining data from large language models

W Shi, A Ajith, M Xia, Y Huang, D Liu, T Blevins… - arXiv preprint arXiv …, 2023 - arxiv.org
Although large language models (LLMs) are widely deployed, the data used to train them is
rarely disclosed. Given the incredible scale of this data, up to trillions of tokens, it is all but …


Deep graph reprogramming

Y Jing, C Yuan, L Ju, Y Yang… - Proceedings of the …, 2023 - openaccess.thecvf.com
In this paper, we explore a novel model reusing task tailored for graph neural networks
(GNNs), termed as" deep graph reprogramming". We strive to reprogram a pre-trained GNN …

Muse: Machine unlearning six-way evaluation for language models

W Shi, J Lee, Y Huang, S Malladi, J Zhao… - arXiv preprint arXiv …, 2024 - arxiv.org
Language models (LMs) are trained on vast amounts of text data, which may include private
and copyrighted content. Data owners may request the removal of their data from a trained …

Model sparsity can simplify machine unlearning

J Liu, P Ram, Y Yao, G Liu, Y Liu… - Advances in Neural …, 2024 - proceedings.neurips.cc
In response to recent data regulation requirements, machine unlearning (MU) has emerged
as a critical process to remove the influence of specific examples from a given model …
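
For a concrete picture of what "removing the influence of specific examples" can look like in practice, here is a minimal sketch of one common approximate-unlearning baseline: a few gradient-ascent steps on the forget set. This is only an illustration of the unlearning setting, not the paper's sparsity-based method; the model and data below are placeholders.

```python
import torch
import torch.nn as nn

model = nn.Linear(20, 2)                 # stand-in for a trained model
loss_fn = nn.CrossEntropyLoss()
forget_x = torch.randn(16, 20)           # examples whose influence should be removed
forget_y = torch.randint(0, 2, (16,))
opt = torch.optim.SGD(model.parameters(), lr=1e-2)

for _ in range(5):
    loss = loss_fn(model(forget_x), forget_y)
    opt.zero_grad()
    (-loss).backward()                   # ascend the loss on the forget set
    opt.step()
```

In practice such updates are typically combined with constraints or fine-tuning on retained data to preserve accuracy; the cited paper argues that pruning the model first makes this trade-off easier.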

Muter: Machine unlearning on adversarially trained models

J Liu, M Xue, J Lou, X Zhang… - Proceedings of the …, 2023 - openaccess.thecvf.com
Machine unlearning is an emerging task of removing the influence of selected
training datapoints from a trained model upon data deletion requests, which echoes the …

Machine unlearning: Solutions and challenges

J Xu, Z Wu, C Wang, X Jia - IEEE Transactions on Emerging …, 2024 - ieeexplore.ieee.org
Machine learning models may inadvertently memorize sensitive, unauthorized, or malicious
data, posing risks of privacy breaches, security vulnerabilities, and performance …