Dataset distillation: A comprehensive review

R Yu, S Liu, X Wang - IEEE Transactions on Pattern Analysis …, 2023 - ieeexplore.ieee.org
The recent success of deep learning is largely attributed to the sheer amount of data used for
training deep neural networks. Despite this unprecedented success, the massive data …
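
Both surveys formalize dataset distillation as a bilevel optimization, which is worth stating once since every entry below optimizes some relaxation of it. In the usual notation (T the real dataset, S the synthetic one, \theta(S) the weights obtained by training on S; notation assumed here, not quoted from either survey):

    \min_{S} \; \mathbb{E}_{(x,y)\sim T}\big[\ell(f_{\theta(S)}(x),\, y)\big]
    \quad \text{s.t.} \quad \theta(S) \;=\; \arg\min_{\theta} \sum_{(x,y)\in S} \ell(f_{\theta}(x),\, y)

The inner problem trains a model on the small set S; the outer problem asks that this model fit the full set T, with |S| far smaller than |T|.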

A comprehensive survey of dataset distillation

S Lei, D Tao - IEEE Transactions on Pattern Analysis and …, 2023 - ieeexplore.ieee.org
Deep learning has advanced at an unprecedented pace over the last decade and has
become the primary choice in many application domains. This progress is mainly attributed …

Dataset quantization

D Zhou, K Wang, J Gu, X Peng, D Lian… - Proceedings of the …, 2023 - openaccess.thecvf.com
State-of-the-art deep neural networks are trained on large amounts of data (millions or even
billions of samples). The expensive computation and memory costs make it difficult to train them …
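
The quantization idea is to partition the full dataset into non-overlapping bins and then sample uniformly across bins, so every region of the data distribution stays represented. A minimal sketch of that pattern (k-means binning on pretrained features stands in for the paper's recursive selection; all names here are illustrative):

    import numpy as np
    from sklearn.cluster import KMeans

    def quantize_and_sample(features, budget, n_bins=10, seed=0):
        """Bin samples in feature space, then draw an equal share of the
        budget from every bin (uniform over bins, not over samples)."""
        rng = np.random.default_rng(seed)
        bins = KMeans(n_clusters=n_bins, n_init=10, random_state=seed).fit_predict(features)
        keep = []
        for b in range(n_bins):
            idx = np.flatnonzero(bins == b)
            take = min(budget // n_bins, len(idx))
            keep.extend(rng.choice(idx, size=take, replace=False))
        return np.asarray(keep)  # indices of the retained subset

Sampling per bin rather than globally is what separates this from plain random coreset selection: small but distinct modes of the data keep their representation.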

DREAM: Efficient dataset distillation by representative matching

Y Liu, J Gu, K Wang, Z Zhu… - Proceedings of the …, 2023 - openaccess.thecvf.com
Dataset distillation aims to synthesize small datasets from large-scale originals with little
information loss, reducing storage and training costs. Recent state-of-the-art methods …
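
"Representative matching" here means matching the synthetic data against carefully chosen real samples instead of random mini-batches. A minimal sketch of that selection step, assuming a k-means clustering per class (the paper builds its pipeline around clustering, but the exact features and code below are illustrative):

    import torch
    from sklearn.cluster import KMeans

    def representative_batch(real_feats, batch_size, seed=0):
        """Return indices of the real samples closest to k-means centers,
        so the matching batch covers the class instead of sampling it."""
        km = KMeans(n_clusters=batch_size, n_init=10, random_state=seed)
        km.fit(real_feats.numpy())
        centers = torch.from_numpy(km.cluster_centers_).float()
        dists = torch.cdist(centers, real_feats)   # (batch_size, N)
        return dists.argmin(dim=1)                 # nearest real sample per center

These indices replace the random mini-batch inside whatever matching loss the base method uses (gradient, distribution, or trajectory matching), which is where the efficiency claim comes from: fewer, better-chosen matching targets.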

On the diversity and realism of distilled dataset: An efficient dataset distillation paradigm

P Sun, B Shi, D Yu, T Lin - … of the IEEE/CVF Conference on …, 2024 - openaccess.thecvf.com
Contemporary machine learning, which involves training large neural networks on massive
datasets, faces significant computational challenges. Dataset distillation, as a recent …

Slimmable dataset condensation

S Liu, J Ye, R Yu, X Wang - … of the IEEE/CVF Conference on …, 2023 - openaccess.thecvf.com
Dataset distillation, also known as dataset condensation, aims to compress a large dataset
into a compact synthetic one. Existing methods perform dataset condensation by assuming a …

Towards lossless dataset distillation via difficulty-aligned trajectory matching

Z Guo, K Wang, G Cazenavette, H Li, K Zhang… - arXiv preprint arXiv …, 2023 - arxiv.org
The ultimate goal of dataset distillation is to synthesize a small synthetic dataset such that a
model trained on this synthetic set performs as well as a model trained on the full …
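
This paper builds on matching training trajectories: the synthetic set is optimized so that a few training steps on it reproduce many steps on the real data. The normalized trajectory-matching loss of the MTT formulation it extends (with \hat{\theta} the student weights trained on the synthetic set and \theta^* expert weights trained on the real set) is:

    \mathcal{L} \;=\; \frac{\lVert \hat{\theta}_{t+N} - \theta^{*}_{t+M} \rVert_2^2}{\lVert \theta^{*}_{t} - \theta^{*}_{t+M} \rVert_2^2}

Starting from a shared checkpoint \theta^*_t, N student steps on synthetic data are asked to land where M >> N expert steps on real data land; the "difficulty-aligned" part of the title refers to matching easier (earlier) expert segments when the synthetic budget is small and harder (later) ones when it is large.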

Sequential subset matching for dataset distillation

J Du, Q Shi, JT Zhou - Advances in Neural Information …, 2023 - proceedings.neurips.cc
Dataset distillation is an emerging task that synthesizes a small dataset for training deep
neural networks (DNNs), reducing data storage and model training costs …
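
The sequential idea is to split the synthetic set into ordered subsets and optimize them one after another, so later subsets can encode what earlier ones missed. A minimal training-loop sketch (the subset schedule follows the title; match_loss and all names are placeholders, not the paper's code):

    import torch

    def sequential_subset_matching(syn_images, n_subsets, match_loss,
                                   steps=100, lr=0.1):
        """Optimize synthetic subsets one at a time: earlier subsets stay
        frozen while the current one trains against the matching loss."""
        subsets = list(syn_images.chunk(n_subsets))
        for k in range(n_subsets):
            frozen = [s.detach() for s in subsets[:k]]
            current = subsets[k].clone().requires_grad_(True)
            opt = torch.optim.SGD([current], lr=lr)
            for _ in range(steps):
                data = torch.cat(frozen + [current])  # gradient flows to current only
                loss = match_loss(data)               # any matching objective
                opt.zero_grad()
                loss.backward()
                opt.step()
            subsets[k] = current.detach()
        return torch.cat(subsets)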

Efficient dataset distillation via minimax diffusion

J Gu, S Vahidian, V Kungurtsev… - Proceedings of the …, 2024 - openaccess.thecvf.com
Dataset distillation reduces the storage and computational consumption of training a
network by generating a small surrogate dataset that encapsulates rich information of the …
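
The "minimax" in the title refers to two criteria imposed while adapting the diffusion model: generated samples should be representative of the real data yet diverse among themselves. One plausible rendering of that pattern (a sketch only, not the paper's exact losses; \phi is an assumed feature embedding, T the real set, S the generated set):

    \mathcal{L}_{\text{rep}} \;=\; \max_{x \in T} \; \min_{s \in S} \; \lVert \phi(x) - \phi(s) \rVert_2
    \qquad
    \mathcal{L}_{\text{div}} \;=\; -\min_{s \neq s' \in S} \; \lVert \phi(s) - \phi(s') \rVert_2

Driving the first term down ensures even the worst-covered real sample has a nearby synthetic one; driving the second down pushes the closest synthetic pair apart.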

D^4: Dataset Distillation via Disentangled Diffusion Model

D Su, J Hou, W Gao, Y Tian… - Proceedings of the IEEE …, 2024 - openaccess.thecvf.com
Dataset distillation offers a lightweight synthetic dataset for fast network training with
promising test accuracy. To imitate the performance of the original dataset, most approaches …