Does graph distillation see like vision dataset counterpart?
Training on large-scale graphs has achieved remarkable results in graph representation
learning, but its cost and storage have attracted increasing concerns. Existing graph …
On the diversity and realism of distilled dataset: An efficient dataset distillation paradigm
Contemporary machine learning, which involves training large neural networks on massive
datasets, faces significant computational challenges. Dataset distillation as a recent …
Dataset regeneration for sequential recommendation
The sequential recommender (SR) system is a crucial component of modern recommender
systems, as it aims to capture the evolving preferences of users. Significant efforts have …
Dataset distillation by automatic training trajectories
Dataset Distillation is used to create a concise, yet informative, synthetic dataset that can
replace the original dataset for training purposes. Some leading methods in this domain …
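Trajectory matching is one of the leading approaches alluded to in the snippet above: the synthetic set is optimised so that a network trained on it for a few steps lands close to a later checkpoint from training on the real data. Below is a minimal, hypothetical PyTorch sketch of that generic idea, not the specific algorithm of the paper above; the small network, the toy "expert" checkpoints, and all hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torch.func import functional_call

class SmallConvNet(nn.Module):
    """Placeholder student network used only for this sketch."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
        )
        self.head = nn.Linear(32 * 4 * 4, num_classes)

    def forward(self, x):
        return self.head(self.features(x))

def flat(params):
    return torch.cat([p.reshape(-1) for p in params.values()])

def trajectory_matching_loss(net, syn_x, syn_y, expert_start, expert_target,
                             inner_steps=5, inner_lr=0.01):
    """Unroll a few SGD steps on the synthetic data from an expert checkpoint,
    then penalise the normalised distance to a later expert checkpoint."""
    params = {k: v.clone().requires_grad_(True) for k, v in expert_start.items()}
    ce = nn.CrossEntropyLoss()
    for _ in range(inner_steps):
        logits = functional_call(net, params, (syn_x,))
        grads = torch.autograd.grad(ce(logits, syn_y), tuple(params.values()),
                                    create_graph=True)
        # manual SGD step so gradients can flow back into the synthetic images
        params = {k: p - inner_lr * g for (k, p), g in zip(params.items(), grads)}
    num = (flat(params) - flat(expert_target)).pow(2).sum()
    den = (flat(expert_start) - flat(expert_target)).pow(2).sum() + 1e-8
    return num / den

# Toy usage: in practice expert_start/expert_target are checkpoints saved while
# training on the real data; here they are perturbed copies so the sketch runs.
net = SmallConvNet()
expert_start = {k: v.detach().clone() for k, v in net.named_parameters()}
expert_target = {k: v + 0.01 * torch.randn_like(v) for k, v in expert_start.items()}

syn_x = torch.randn(10, 3, 32, 32, requires_grad=True)   # IPC = 1 for 10 classes
syn_y = torch.arange(10)
opt = torch.optim.Adam([syn_x], lr=0.1)
for step in range(20):
    opt.zero_grad()
    loss = trajectory_matching_loss(net, syn_x, syn_y, expert_start, expert_target)
    loss.backward()
    opt.step()
```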
Dataset quantization with active learning based adaptive sampling
Deep learning has made remarkable progress recently, largely due to the availability of
large, well-labeled datasets. However, the training on such datasets elevates costs and …
Navigating complexity: Toward lossless graph condensation via expanding window matching
Graph condensation aims to reduce the size of a large-scale graph dataset by synthesizing
a compact counterpart without sacrificing the performance of Graph Neural Networks …
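For context, the usual formulation behind graph condensation methods such as the one above is gradient matching: a small learnable graph (features, labels, dense adjacency) is optimised so that GNN gradients on it track the gradients on the real graph. The sketch below illustrates that generic formulation, not the expanding-window matching of this paper; the tiny dense GCN and the toy data are assumptions made for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DenseGCN(nn.Module):
    """Two-layer GCN over a dense, symmetrically normalised adjacency matrix."""
    def __init__(self, in_dim, hid_dim, num_classes):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hid_dim, bias=False)
        self.w2 = nn.Linear(hid_dim, num_classes, bias=False)

    def forward(self, x, adj):
        a = adj + torch.eye(adj.size(0), device=adj.device)   # add self-loops
        d = a.sum(1).clamp(min=1e-6).rsqrt()
        a_norm = d[:, None] * a * d[None, :]
        h = F.relu(a_norm @ self.w1(x))
        return a_norm @ self.w2(h)

def grad_match_loss(model, real, syn):
    """Cosine distance between GNN gradients on the real and synthetic graphs."""
    ce = nn.CrossEntropyLoss()
    x_r, adj_r, y_r = real
    x_s, adj_s, y_s = syn
    g_real = torch.autograd.grad(ce(model(x_r, adj_r), y_r), model.parameters())
    g_syn = torch.autograd.grad(ce(model(x_s, adj_s), y_s), model.parameters(),
                                create_graph=True)
    loss = 0.0
    for gr, gs in zip(g_real, g_syn):
        loss = loss + (1 - F.cosine_similarity(gr.flatten(), gs.flatten(), dim=0))
    return loss

# Toy "real" graph: 100 nodes, 16-dim features, 4 classes.
x_real = torch.randn(100, 16)
adj_real = (torch.rand(100, 100) < 0.05).float()
adj_real = ((adj_real + adj_real.T) > 0).float()
y_real = torch.randint(0, 4, (100,))

# Learnable synthetic graph with only 10 nodes; adjacency via a sigmoid.
x_syn = torch.randn(10, 16, requires_grad=True)
adj_logits = torch.zeros(10, 10, requires_grad=True)
y_syn = torch.arange(10) % 4

model = DenseGCN(16, 32, 4)
opt = torch.optim.Adam([x_syn, adj_logits], lr=0.01)
for step in range(100):
    opt.zero_grad()
    adj_syn = torch.sigmoid(adj_logits)
    loss = grad_match_loss(model, (x_real, adj_real, y_real), (x_syn, adj_syn, y_syn))
    loss.backward()
    opt.step()
```

In full methods the GNN is periodically re-initialised and trained between matching steps; this sketch keeps a single fixed initialisation for brevity.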
Data distillation can be like vodka: Distilling more times for better quality
Dataset distillation aims to minimize the time and memory needed for training deep networks
on large datasets, by creating a small set of synthetic images that has a similar …
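One simple way to obtain such a synthetic set is distribution matching, where synthetic images are optimised so that their feature embeddings match the per-class mean embeddings of the real data under randomly initialised encoders. The sketch below shows that generic baseline, not the progressive "distilling more times" scheme of the paper above; the encoder and the toy data are placeholders.

```python
import torch
import torch.nn as nn

def random_encoder(channels=3, feat_dim=128):
    """A randomly initialised (frozen) convolutional feature extractor."""
    return nn.Sequential(
        nn.Conv2d(channels, 64, 3, padding=1), nn.ReLU(),
        nn.AvgPool2d(2),
        nn.Conv2d(64, feat_dim, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    )

def distribution_matching_loss(encoder, real_x, real_y, syn_x, syn_y, num_classes):
    """Squared distance between per-class mean embeddings of real and synthetic data."""
    with torch.no_grad():
        f_real = encoder(real_x)
    f_syn = encoder(syn_x)
    loss = 0.0
    for c in range(num_classes):
        loss = loss + (f_real[real_y == c].mean(0) - f_syn[syn_y == c].mean(0)).pow(2).sum()
    return loss

# Toy usage with random tensors standing in for an actual labelled dataset.
num_classes = 10
real_x = torch.randn(256, 3, 32, 32)
real_y = torch.randint(0, num_classes, (256,))
syn_x = torch.randn(num_classes * 10, 3, 32, 32, requires_grad=True)  # IPC = 10
syn_y = torch.arange(num_classes).repeat_interleave(10)

opt = torch.optim.Adam([syn_x], lr=0.1)
for step in range(50):
    enc = random_encoder()            # fresh random encoder each step
    for p in enc.parameters():
        p.requires_grad_(False)
    opt.zero_grad()
    loss = distribution_matching_loss(enc, real_x, real_y, syn_x, syn_y, num_classes)
    loss.backward()
    opt.step()
```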
SelMatch: Effectively scaling up dataset distillation via selection-based initialization and partial updates by trajectory matching
Y Lee, HW Chung - Forty-first International Conference on Machine …, 2024 - openreview.net
Dataset distillation aims to synthesize a small number of images per class (IPC) from a large
dataset to approximate full dataset training with minimal performance loss. While effective in …
ATOM: Attention Mixer for Efficient Dataset Distillation
Recent works in dataset distillation seek to minimize training expenses by generating a
condensed synthetic dataset that encapsulates the information present in a larger real …
Exploring the Impact of Dataset Bias on Dataset Distillation
Dataset Distillation (DD) is a promising technique to synthesize a smaller dataset that
preserves essential information from the original dataset. This synthetic dataset can serve as …