Graph condensation: A survey

X Gao, J Yu, T Chen, G Ye, W Zhang… - IEEE Transactions on …, 2025 - ieeexplore.ieee.org
The rapid growth of graph data poses significant challenges in storage, transmission, and
particularly the training of graph neural networks (GNNs). To address these challenges …

Dataset condensation for recommendation

J Wu, W Fan, J Chen, S Liu, Q Liu, R He, Q Li… - arXiv preprint arXiv …, 2023 - arxiv.org
Training recommendation models on large datasets requires significant time and resources.
It is therefore desirable to construct concise yet informative datasets for efficient training. Recent …

TinyGraph: joint feature and node condensation for graph neural networks

Y Liu, Y Shen - arXiv preprint arXiv:2407.08064, 2024 - arxiv.org
Training graph neural networks (GNNs) on large-scale graphs can be challenging due to the
high computational expense caused by the massive number of nodes and high-dimensional …

Condensing Pre-Augmented Recommendation Data via Lightweight Policy Gradient Estimation

J Wu, W Fan, J Chen, S Liu, Q Liu, R He… - … on Knowledge and …, 2024 - ieeexplore.ieee.org
Training recommendation models on large datasets requires significant time and resources.
It is therefore desirable to construct concise yet informative datasets for efficient training. Recent …

Random Walk Guided Hyperbolic Graph Distillation

Y Long, L Xu, S Schoepf, A Brintrup - arXiv preprint arXiv:2501.15696, 2025 - arxiv.org
Graph distillation (GD) is an effective approach to extract useful information from large-scale
network structures. However, existing methods, which operate in Euclidean space to …