Does graph distillation see like vision dataset counterpart?

B Yang, K Wang, Q Sun, C Ji, X Fu… - Advances in …, 2023 - proceedings.neurips.cc
Training on large-scale graphs has achieved remarkable results in graph representation
learning, but its computational and storage costs have drawn increasing concern. Existing graph …

On the diversity and realism of distilled dataset: An efficient dataset distillation paradigm

P Sun, B Shi, D Yu, T Lin - … of the IEEE/CVF Conference on …, 2024 - openaccess.thecvf.com
Contemporary machine learning, which involves training large neural networks on massive
datasets, faces significant computational challenges. Dataset distillation as a recent …

Dataset regeneration for sequential recommendation

M Yin, H Wang, W Guo, Y Liu, S Zhang… - Proceedings of the 30th …, 2024 - dl.acm.org
The sequential recommender (SR) system is a crucial component of modern recommender
systems, as it aims to capture the evolving preferences of users. Significant efforts have …

Dataset distillation by automatic training trajectories

D Liu, J Gu, H Cao, C Trinitis, M Schulz - European Conference on …, 2024 - Springer
Dataset Distillation is used to create a concise, yet informative, synthetic dataset that can
replace the original dataset for training purposes. Some leading methods in this domain …

Dataset quantization with active learning based adaptive sampling

Z Zhao, Y Shang, J Wu, Y Yan - European Conference on Computer …, 2024 - Springer
Deep learning has made remarkable progress recently, largely due to the availability of
large, well-labeled datasets. However, training on such datasets elevates costs and …

Navigating complexity: Toward lossless graph condensation via expanding window matching

Y Zhang, T Zhang, K Wang, Z Guo, Y Liang… - arXiv preprint arXiv …, 2024 - arxiv.org
Graph condensation aims to reduce the size of a large-scale graph dataset by synthesizing
a compact counterpart without sacrificing the performance of Graph Neural Networks …

Data distillation can be like vodka: Distilling more times for better quality

X Chen, Y Yang, Z Wang, B Mirzasoleiman - arXiv preprint arXiv …, 2023 - arxiv.org
Dataset distillation aims to minimize the time and memory needed for training deep networks
on large datasets, by creating a small set of synthetic images that has a similar …

SelMatch: Effectively scaling up dataset distillation via selection-based initialization and partial updates by trajectory matching

Y Lee, HW Chung - Forty-first International Conference on Machine …, 2024 - openreview.net
Dataset distillation aims to synthesize a small number of images per class (IPC) from a large
dataset to approximate full dataset training with minimal performance loss. While effective in …

ATOM: Attention Mixer for Efficient Dataset Distillation

S Khaki, A Sajedi, K Wang, LZ Liu… - Proceedings of the …, 2024 - openaccess.thecvf.com
Recent works in dataset distillation seek to minimize training expenses by generating a
condensed synthetic dataset that encapsulates the information present in a larger real …

Exploring the Impact of Dataset Bias on Dataset Distillation

Y Lu, J Gu, X Chen, S Vahidian… - Proceedings of the …, 2024 - openaccess.thecvf.com
Dataset Distillation (DD) is a promising technique to synthesize a smaller dataset that
preserves essential information from the original dataset. This synthetic dataset can serve as …