A comprehensive survey of continual learning: theory, method and application
To cope with real-world dynamics, an intelligent system needs to incrementally acquire,
update, accumulate, and exploit knowledge throughout its lifetime. This ability, known as …
Scaling & shifting your features: A new baseline for efficient model tuning
Existing fine-tuning methods either tune all parameters of the pre-trained model (full fine-
tuning), which is not efficient, or only tune the last linear layer (linear probing), which suffers …
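The titular idea, learning only a per-channel scale and shift on top of frozen pre-trained features, can be sketched as follows. This is a minimal PyTorch illustration of the general recipe rather than the paper's code; the module name SSF, the 768-dimensional toy backbone, and the freezing steps are assumptions for demonstration.

import torch
import torch.nn as nn

class SSF(nn.Module):
    # Scale-and-shift modulation of features: y = gamma * x + beta.
    # Only gamma and beta are trained; the pre-trained backbone stays frozen.
    def __init__(self, dim: int):
        super().__init__()
        self.gamma = nn.Parameter(torch.ones(dim))   # per-channel scale
        self.beta = nn.Parameter(torch.zeros(dim))   # per-channel shift

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x has shape (..., dim); modulate the last (feature) dimension.
        return x * self.gamma + self.beta

# Usage sketch: freeze a (toy) pre-trained block, then train only the SSF
# parameters (and, in practice, the task head).
backbone = nn.Sequential(nn.Linear(768, 768), SSF(768), nn.GELU())
for p in backbone.parameters():
    p.requires_grad = False
for p in backbone[1].parameters():   # re-enable just the SSF parameters
    p.requires_grad = True

Because the modulation is affine, the learned scale and shift can in principle be folded back into the preceding linear layer at inference time, keeping the deployed model's size and latency unchanged.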
Slimmable dataset condensation
Dataset distillation, also known as dataset condensation, aims to compress a large dataset
into a compact synthetic one. Existing methods perform dataset condensation by assuming a …
Machine unlearning: A comprehensive survey
As the right to be forgotten has been legislated worldwide, many studies attempt to design
unlearning mechanisms to protect users' privacy when they want to leave machine learning …
Detecting pretraining data from large language models
Although large language models (LLMs) are widely deployed, the data used to train them is
rarely disclosed. Given the incredible scale of this data, up to trillions of tokens, it is all but …
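One membership-inference-style statistic studied in this line of work scores a candidate text by how probable the model finds its least-likely tokens (a Min-K%-style score): texts seen in pretraining tend to contain fewer very-low-probability tokens. The sketch below is a hedged illustration under that assumption; the function name, the choice of k, and the example log-probabilities are illustrative and not taken from the abstract above.

from typing import List

def min_k_percent_score(token_logprobs: List[float], k: float = 0.2) -> float:
    # Average log-probability of the k fraction of lowest-probability tokens.
    # A higher (less negative) score suggests the text is less surprising to
    # the model, which is treated as weak evidence of pretraining membership.
    n = max(1, int(len(token_logprobs) * k))
    lowest = sorted(token_logprobs)[:n]   # the least-likely tokens
    return sum(lowest) / n

# Illustrative use with made-up per-token log-probabilities from some LM.
logprobs = [-0.1, -2.3, -0.5, -4.1, -0.2, -1.7]
score = min_k_percent_score(logprobs, k=0.3)
print(f"Min-K% score: {score:.3f}")   # compared against a calibrated threshold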
Deep graph reprogramming
In this paper, we explore a novel model reusing task tailored for graph neural networks
(GNNs), termed as" deep graph reprogramming". We strive to reprogram a pre-trained GNN …
Muse: Machine unlearning six-way evaluation for language models
Language models (LMs) are trained on vast amounts of text data, which may include private
and copyrighted content. Data owners may request the removal of their data from a trained …
Model sparsity can simplify machine unlearning
In response to recent data regulation requirements, machine unlearning (MU) has emerged
as a critical process to remove the influence of specific examples from a given model …
Muter: Machine unlearning on adversarially trained models
Machine unlearning is an emerging task of removing the influence of selected
training datapoints from a trained model upon data deletion requests, which echoes the …
Machine unlearning: Solutions and challenges
Machine learning models may inadvertently memorize sensitive, unauthorized, or malicious
data, posing risks of privacy breaches, security vulnerabilities, and performance …