Dataset distillation: A comprehensive review

R Yu, S Liu, X Wang - IEEE Transactions on Pattern Analysis …, 2023 - ieeexplore.ieee.org
The recent success of deep learning is largely attributed to the sheer amount of data used to
train deep neural networks. Despite this unprecedented success, the massive data …

A comprehensive survey of dataset distillation

S Lei, D Tao - IEEE Transactions on Pattern Analysis and …, 2023 - ieeexplore.ieee.org
Deep learning has advanced at an unprecedented pace over the last decade and has
become the primary choice in many application domains. This progress is mainly attributed …

Slimmable dataset condensation

S Liu, J Ye, R Yu, X Wang - … of the IEEE/CVF Conference on …, 2023 - openaccess.thecvf.com
Dataset distillation, also known as dataset condensation, aims to compress a large dataset
into a compact synthetic one. Existing methods perform dataset condensation by assuming a …
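
Every entry in this listing shares that formulation: learn a small synthetic set whose training signal stands in for the full dataset's. Below is a minimal sketch of one classic instantiation, gradient matching, with random tensors standing in for a real dataset and a linear proxy model; all names and sizes are illustrative, not any single paper's setup.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical toy setup: 10 classes, 1 synthetic example per class,
# with random tensors standing in for a real dataset.
num_classes, ipc, dim = 10, 1, 64
real_x = torch.randn(512, dim)
real_y = torch.randint(0, num_classes, (512,))

syn_x = torch.randn(num_classes * ipc, dim, requires_grad=True)  # learnable synthetic data
syn_y = torch.arange(num_classes).repeat_interleave(ipc)
opt = torch.optim.SGD([syn_x], lr=0.1)

model = nn.Linear(dim, num_classes)  # simple proxy network

def loss_grads(x, y, create_graph):
    loss = F.cross_entropy(model(x), y)
    return torch.autograd.grad(loss, model.parameters(), create_graph=create_graph)

for step in range(100):
    g_real = [g.detach() for g in loss_grads(real_x, real_y, create_graph=False)]
    g_syn = loss_grads(syn_x, syn_y, create_graph=True)
    # Update the synthetic set so the gradients it induces match the real ones.
    match = sum(F.mse_loss(gs, gr) for gs, gr in zip(g_syn, g_real))
    opt.zero_grad()
    match.backward()
    opt.step()
```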

Dataset quantization

D Zhou, K Wang, J Gu, X Peng, D Lian… - Proceedings of the …, 2023 - openaccess.thecvf.com
State-of-the-art deep neural networks are trained on vast amounts of data (millions or even
billions of samples). The resulting computation and memory costs make it difficult to train them …

DREAM: Efficient dataset distillation by representative matching

Y Liu, J Gu, K Wang, Z Zhu… - Proceedings of the …, 2023 - openaccess.thecvf.com
Dataset distillation aims to synthesize small datasets that retain most of the information in the
original large-scale ones, reducing storage and training costs. Recent state-of-the-art methods …
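
Representative matching here means replacing the randomly sampled real batches used in matching objectives with cluster-representative ones. A hedged sketch of that selection step, assuming per-class features have already been extracted; the function name and sizes are hypothetical.

```python
import numpy as np
from sklearn.cluster import KMeans

def representative_batch(features, k):
    # K-means over one class's features; for each cluster, keep the index
    # of the sample nearest its centroid as the cluster's representative.
    km = KMeans(n_clusters=k, n_init=10).fit(features)
    idx = []
    for c in range(k):
        members = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(features[members] - km.cluster_centers_[c], axis=1)
        idx.append(members[dists.argmin()])
    return np.array(idx)

# Usage with random stand-in features: 16 representatives from 1000 samples.
feats = np.random.randn(1000, 128).astype(np.float32)
batch_indices = representative_batch(feats, k=16)
```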

On the diversity and realism of distilled dataset: An efficient dataset distillation paradigm

P Sun, B Shi, D Yu, T Lin - … of the IEEE/CVF Conference on …, 2024 - openaccess.thecvf.com
Contemporary machine learning, which involves training large neural networks on massive
datasets, faces significant computational challenges. Dataset distillation, as a recent …

You only condense once: Two rules for pruning condensed datasets

Y He, L Xiao, JT Zhou - Advances in Neural Information …, 2023 - proceedings.neurips.cc
Dataset condensation is a crucial tool for enhancing training efficiency by reducing the size
of the training dataset, particularly in on-device scenarios. However, these scenarios have …

Towards lossless dataset distillation via difficulty-aligned trajectory matching

Z Guo, K Wang, G Cazenavette, H Li, K Zhang… - arXiv preprint arXiv …, 2023 - arxiv.org
The ultimate goal of dataset distillation is to synthesize a small synthetic dataset such that a
model trained on this synthetic set performs as well as a model trained on the full …
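
This paper builds on trajectory matching: starting from an expert checkpoint, train briefly on the synthetic data and penalize the distance to a later expert checkpoint, normalized by how far the expert itself moved. The sketch below shows that generic MTT-style objective, not the paper's difficulty-aligned variant; the functional model and stand-in checkpoints are toy assumptions.

```python
import torch
import torch.nn.functional as F

def trajectory_matching_loss(model_fn, syn_x, syn_y, theta_start, theta_target,
                             inner_steps=10, inner_lr=0.01):
    # Unroll a short training run on the synthetic data from theta_start,
    # then measure how close it lands to theta_target, normalized by the
    # distance the expert itself traveled between the two checkpoints.
    theta = [t.detach().clone().requires_grad_(True) for t in theta_start]
    for _ in range(inner_steps):
        loss = F.cross_entropy(model_fn(syn_x, theta), syn_y)
        grads = torch.autograd.grad(loss, theta, create_graph=True)
        theta = [t - inner_lr * g for t, g in zip(theta, grads)]
    num = sum(((a - b) ** 2).sum() for a, b in zip(theta, theta_target))
    den = sum(((a - b) ** 2).sum() for a, b in zip(theta_start, theta_target))
    return num / den

def linear_model(x, theta):  # toy functional model: logits = x @ W + b
    w, b = theta
    return x @ w + b

w0, b0 = torch.randn(64, 10), torch.zeros(10)
theta_s = [w0, b0]                                 # stand-in "expert" start
theta_t = [w0 + 0.01 * torch.randn_like(w0), b0]   # stand-in later checkpoint
syn_x = torch.randn(10, 64, requires_grad=True)    # learnable synthetic data
syn_y = torch.arange(10)

loss = trajectory_matching_loss(linear_model, syn_x, syn_y, theta_s, theta_t)
loss.backward()  # gradients flow back into syn_x through the unrolled steps
```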

Sequential subset matching for dataset distillation

J Du, Q Shi, JT Zhou - Advances in Neural Information …, 2024 - proceedings.neurips.cc
Dataset distillation is a newly emerging task that synthesizes a small dataset for training
deep neural networks (DNNs), reducing data storage and model training costs …

Generalized large-scale data condensation via various backbone and statistical matching

S Shao, Z Yin, M Zhou, X Zhang… - Proceedings of the …, 2024 - openaccess.thecvf.com
The lightweight" local-match-global" matching introduced by SRe2L successfully creates a
distilled dataset with comprehensive information on the full 224x224 ImageNet-1k. However …
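
SRe2L-style statistics matching can be read as optimizing synthetic images so that the batch statistics at every BatchNorm layer match the running statistics a pre-trained network accumulated over the full dataset, alongside a classification loss. A hedged sketch of that idea; the loss weight, step count, and ResNet-18 backbone are arbitrary choices here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision

model = torchvision.models.resnet18(weights="IMAGENET1K_V1").eval()
for p in model.parameters():
    p.requires_grad_(False)  # only the synthetic images are optimized

# Collect, at every BatchNorm layer, the gap between the batch statistics of
# the synthetic images and the layer's running (full-dataset) statistics.
bn_losses = []
def bn_hook(mod, inp, out):
    x = inp[0]
    mean = x.mean(dim=(0, 2, 3))
    var = x.var(dim=(0, 2, 3), unbiased=False)
    bn_losses.append(F.mse_loss(mean, mod.running_mean)
                     + F.mse_loss(var, mod.running_var))

for m in model.modules():
    if isinstance(m, nn.BatchNorm2d):
        m.register_forward_hook(bn_hook)

syn = torch.randn(10, 3, 224, 224, requires_grad=True)  # learnable images
labels = torch.arange(10)                               # one class each
opt = torch.optim.Adam([syn], lr=0.1)

for step in range(50):
    bn_losses.clear()
    logits = model(syn)
    loss = F.cross_entropy(logits, labels) + 0.01 * sum(bn_losses)
    opt.zero_grad()
    loss.backward()
    opt.step()
```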