On the diversity and realism of distilled dataset: An efficient dataset distillation paradigm

P Sun, B Shi, D Yu, T Lin - … of the IEEE/CVF Conference on …, 2024 - openaccess.thecvf.com
Contemporary machine learning, which involves training large neural networks on massive
datasets, faces significant computational challenges. Dataset distillation as a recent …
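
The snippet cuts off before describing any mechanism, so as generic background the following is a minimal PyTorch sketch of one classic formulation of dataset distillation, gradient matching: learn synthetic images whose induced gradients resemble those of real batches. This is not the diversity/realism paradigm the cited paper proposes; the model, sizes, and random stand-in batches are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def gradient_match_loss(model, real_x, real_y, syn_x, syn_y):
    """Distance between gradients induced by a real and a synthetic batch."""
    g_real = torch.autograd.grad(
        F.cross_entropy(model(real_x), real_y), model.parameters())
    g_syn = torch.autograd.grad(
        F.cross_entropy(model(syn_x), syn_y), model.parameters(),
        create_graph=True)  # keep the graph so the synthetic pixels get gradients
    return sum(1 - F.cosine_similarity(a.flatten(), b.flatten(), dim=0)
               for a, b in zip(g_real, g_syn))

# Toy setup: 10 classes, one learnable synthetic image per class.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
syn_x = torch.randn(10, 3, 32, 32, requires_grad=True)  # learnable pixels
syn_y = torch.arange(10)                                # fixed labels
opt = torch.optim.SGD([syn_x], lr=0.1)

for step in range(100):
    # stand-in for a sampled real batch
    real_x, real_y = torch.randn(64, 3, 32, 32), torch.randint(0, 10, (64,))
    opt.zero_grad()
    gradient_match_loss(model, real_x, real_y, syn_x, syn_y).backward()
    opt.step()
```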

Does graph distillation see like vision dataset counterpart?

B Yang, K Wang, Q Sun, C Ji, X Fu… - Advances in …, 2023 - proceedings.neurips.cc
Training on large-scale graphs has achieved remarkable results in graph representation
learning, but its computational and storage costs have attracted increasing concern. Existing graph …

Dataset regeneration for sequential recommendation

M Yin, H Wang, W Guo, Y Liu, S Zhang… - Proceedings of the 30th …, 2024 - dl.acm.org
The sequential recommender (SR) system is a crucial component of modern recommender
systems, as it aims to capture the evolving preferences of users. Significant efforts have …

Dataset distillation by automatic training trajectories

D Liu, J Gu, H Cao, C Trinitis, M Schulz - European Conference on …, 2024 - Springer
Dataset Distillation is used to create a concise, yet informative, synthetic dataset that can
replace the original dataset for training purposes. Some leading methods in this domain …
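
The title suggests a trajectory-matching approach; as background, here is a toy sketch of that family of objectives: unroll a few SGD steps on the synthetic data from an expert checkpoint, then penalize the distance to where the expert actually ended up. The automatic trajectory selection that gives the paper its name is not reproduced; the linear probe, checkpoints, and step counts are stand-ins.

```python
import torch
import torch.nn.functional as F

def trajectory_match_loss(expert_start, expert_end, syn_x, syn_y,
                          inner_steps=5, inner_lr=0.01):
    """Unroll SGD on synthetic data from the expert's start checkpoint, then
    compare the reached parameters with the expert's end checkpoint
    (normalized squared distance)."""
    params = [p.clone().requires_grad_(True) for p in expert_start]
    for _ in range(inner_steps):
        out = F.linear(syn_x.flatten(1), params[0], params[1])
        grads = torch.autograd.grad(F.cross_entropy(out, syn_y), params,
                                    create_graph=True)
        params = [p - inner_lr * g for p, g in zip(params, grads)]
    reached = torch.cat([p.reshape(-1) for p in params])
    start = torch.cat([p.reshape(-1) for p in expert_start])
    end = torch.cat([p.reshape(-1) for p in expert_end])
    return ((reached - end) ** 2).sum() / ((start - end) ** 2).sum()

# Stand-in expert checkpoints for a linear probe (start/end of one segment).
W0, b0 = 0.01 * torch.randn(10, 3 * 32 * 32), torch.zeros(10)
expert_start = [W0, b0]
expert_end = [W0 + 0.01 * torch.randn_like(W0), b0 + 0.01 * torch.randn_like(b0)]

syn_x = torch.randn(10, 3, 32, 32, requires_grad=True)
syn_y = torch.arange(10)
opt = torch.optim.SGD([syn_x], lr=0.1)
for _ in range(50):
    opt.zero_grad()
    trajectory_match_loss(expert_start, expert_end, syn_x, syn_y).backward()
    opt.step()
```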

Navigating complexity: Toward lossless graph condensation via expanding window matching

Y Zhang, T Zhang, K Wang, Z Guo, Y Liang… - arXiv preprint arXiv …, 2024 - arxiv.org
Graph condensation aims to reduce the size of a large-scale graph dataset by synthesizing
a compact counterpart without sacrificing the performance of Graph Neural Networks …
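
For readers unfamiliar with graph condensation, the sketch below shows the basic setup the snippet alludes to: learn a small synthetic graph (node features plus a soft adjacency) whose GNN gradients match those on the original graph. The expanding-window matching of the cited paper is not reproduced; the random "original" graph, the one-layer dense GCN, and all sizes are illustrative.

```python
import torch
import torch.nn.functional as F

def gcn_logits(adj, x, w):
    """One-layer GCN with self-loops and symmetric normalization."""
    a = adj + torch.eye(adj.size(0))
    d = a.sum(dim=1).clamp(min=1e-6).rsqrt()
    return (d[:, None] * a * d[None, :]) @ x @ w

def gcn_grad(adj, x, y, w, create_graph=False):
    loss = F.cross_entropy(gcn_logits(adj, x, w), y)
    return torch.autograd.grad(loss, w, create_graph=create_graph)[0]

# "Original" graph: 200 nodes, 16 features, 4 classes (random stand-in).
X = torch.randn(200, 16)
A = (torch.rand(200, 200) < 0.05).float()
A = ((A + A.t()) > 0).float()
Y = torch.randint(0, 4, (200,))

# Condensed graph: 20 learnable nodes; adjacency parameterized via a sigmoid.
Xs = torch.randn(20, 16, requires_grad=True)
As_logits = torch.zeros(20, 20, requires_grad=True)
Ys = torch.arange(4).repeat(5)
opt = torch.optim.Adam([Xs, As_logits], lr=0.01)

for _ in range(200):
    w = torch.randn(16, 4, requires_grad=True)  # re-sampled GCN weights
    g_real = gcn_grad(A, X, Y, w)
    g_syn = gcn_grad(torch.sigmoid((As_logits + As_logits.t()) / 2), Xs, Ys, w,
                     create_graph=True)
    opt.zero_grad()
    ((g_real - g_syn) ** 2).sum().backward()
    opt.step()
```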

ATOM: attention mixer for efficient dataset distillation

S Khaki, A Sajedi, K Wang, LZ Liu… - Proceedings of the …, 2024 - openaccess.thecvf.com
Recent works in dataset distillation seek to minimize training expenses by generating a
condensed synthetic dataset that encapsulates the information present in a larger real …
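
The following sketch illustrates the general idea of attention-based matching that the title points to: align the attention maps a frozen feature extractor produces on real versus synthetic batches. ATOM's particular mixing of spatial and channel attention is not reproduced; the extractor and sizes are stand-ins.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def spatial_attention(feat):
    """(B, C, H, W) -> L2-normalized channel-pooled map, one vector per image."""
    a = feat.pow(2).mean(dim=1).flatten(1)
    return F.normalize(a, dim=1)

feature_net = nn.Sequential(                  # frozen stand-in extractor
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
)
for p in feature_net.parameters():
    p.requires_grad_(False)

syn_x = torch.randn(10, 3, 32, 32, requires_grad=True)
opt = torch.optim.SGD([syn_x], lr=0.1)

for _ in range(100):
    real_x = torch.randn(64, 3, 32, 32)       # stand-in for a real batch
    # match the batch-mean attention maps of the two batches
    a_real = spatial_attention(feature_net(real_x)).mean(dim=0)
    a_syn = spatial_attention(feature_net(syn_x)).mean(dim=0)
    opt.zero_grad()
    ((a_real - a_syn) ** 2).sum().backward()
    opt.step()
```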

Exploring the impact of dataset bias on dataset distillation

Y Lu, J Gu, X Chen, S Vahidian… - Proceedings of the …, 2024 - openaccess.thecvf.com
Dataset Distillation (DD) is a promising technique to synthesize a smaller dataset that
preserves essential information from the original dataset. This synthetic dataset can serve as …

Generative dataset distillation: Balancing global structure and local details

L Li, G Li, R Togo, K Maeda… - Proceedings of the …, 2024 - openaccess.thecvf.com
In this paper, we propose a new dataset distillation method that balances global
structure and local details when distilling the information from a large dataset into a …

Data distillation can be like vodka: Distilling more times for better quality

X Chen, Y Yang, Z Wang, B Mirzasoleiman - arXiv preprint arXiv …, 2023 - arxiv.org
Dataset distillation aims to minimize the time and memory needed for training deep networks
on large datasets by creating a small set of synthetic images that has a similar …
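
The title's "distilling more times" suggests a multi-stage scheme; the runnable toy below sketches one plausible reading: distill one small synthetic subset against each phase of an expert's training, then train a fresh student on the subsets in order. This illustrates staged distillation only, not the authors' exact procedure; the flattened stand-in data, linear models, and step counts are illustrative.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

def grad_distance(model, x1, y1, x2, y2):
    g1 = torch.autograd.grad(F.cross_entropy(model(x1), y1), model.parameters())
    g2 = torch.autograd.grad(F.cross_entropy(model(x2), y2), model.parameters(),
                             create_graph=True)
    return sum(((a - b) ** 2).sum() for a, b in zip(g1, g2))

real_x, real_y = torch.randn(256, 3072), torch.randint(0, 10, (256,))  # stand-in
expert = nn.Linear(3072, 10)
exp_opt = torch.optim.SGD(expert.parameters(), lr=0.05)

syn_sets = []
for phase in range(3):                  # three training phases = three subsets
    syn_x = torch.randn(10, 3072, requires_grad=True)
    syn_y = torch.arange(10)
    syn_opt = torch.optim.SGD([syn_x], lr=0.1)
    frozen = copy.deepcopy(expert)      # checkpoint for this phase
    for _ in range(50):                 # distill against the checkpoint
        syn_opt.zero_grad()
        grad_distance(frozen, real_x, real_y, syn_x, syn_y).backward()
        syn_opt.step()
    syn_sets.append((syn_x.detach(), syn_y))
    for _ in range(20):                 # advance the expert to the next phase
        exp_opt.zero_grad()
        F.cross_entropy(expert(real_x), real_y).backward()
        exp_opt.step()

student = nn.Linear(3072, 10)           # fresh student, trained stage by stage
stu_opt = torch.optim.SGD(student.parameters(), lr=0.05)
for sx, sy in syn_sets:
    for _ in range(20):
        stu_opt.zero_grad()
        F.cross_entropy(student(sx), sy).backward()
        stu_opt.step()
```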

Color-oriented redundancy reduction in dataset distillation

B Yuan, Z Wang, M Baktashmotlagh… - Advances in Neural …, 2025 - proceedings.neurips.cc
Dataset Distillation (DD) is designed to generate condensed representations of extensive
image datasets, enhancing training efficiency. Despite recent advances, there remains …
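
One way to make the color-redundancy idea concrete is to constrain each synthetic image to a small learnable palette, so distilled pixels share few distinct colors. The sketch below does this with soft pixel-to-color assignments; it illustrates the general idea only, not the cited paper's procedure, and the MSE-to-target loss is a placeholder for a real distillation objective (e.g., the gradient-matching sketch earlier).

```python
import torch
import torch.nn.functional as F

K = 8                                          # colors allowed per image
palette = torch.randn(10, K, 3, requires_grad=True)              # per-image palettes
assign_logits = torch.randn(10, K, 32 * 32, requires_grad=True)  # pixel -> color
opt = torch.optim.Adam([palette, assign_logits], lr=0.01)

def render(palette, assign_logits):
    """Compose images as soft mixtures over K palette colors: (10, 3, 32, 32)."""
    weights = F.softmax(assign_logits, dim=1)  # (10, K, HW), sums to 1 per pixel
    imgs = torch.einsum('nkc,nkp->ncp', palette, weights)
    return imgs.view(10, 3, 32, 32)

target = torch.randn(10, 3, 32, 32)            # stand-in distillation target
for _ in range(200):
    opt.zero_grad()
    F.mse_loss(render(palette, assign_logits), target).backward()
    opt.step()
```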