DMGAE: An interpretable representation learning method for directed scale-free networks based on autoencoder and masking

QC Yang, K Yang, ZL Hu, M Li - Information Processing & Management, 2025 - Elsevier
Although existing graph self-supervised learning approaches have paid attention to the
directed nature of networks, they have often overlooked the ubiquitous scale-free attributes …

A Survey on Self-Supervised Pre-Training of Graph Foundation Models: A Knowledge-Based Perspective

Z Zhao, Y Li, Y Zou, R Li, R Zhang - arXiv preprint arXiv:2403.16137, 2024 - arxiv.org
Graph self-supervised learning is now a go-to method for pre-training graph foundation
models, including graph neural networks, graph transformers, and more recent large …

Hi-GMAE: Hierarchical Graph Masked Autoencoders

C Liu, Z Yao, Y Zhan, X Ma, D Tao, J Wu, W Hu… - arXiv preprint arXiv …, 2024 - arxiv.org
Graph Masked Autoencoders (GMAEs) have emerged as a notable self-supervised learning
approach for graph-structured data. Existing GMAE models primarily focus on reconstructing …
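The snippet above only names the masked-reconstruction idea; as a rough illustration of that general principle (not the Hi-GMAE method itself), the sketch below masks a fraction of node features, encodes the partially masked graph with a simple graph convolution, and reconstructs only the masked nodes. The toy graph, layer sizes, and all helper names are assumptions made for the example.

```python
# Illustrative sketch of a generic graph masked autoencoder (not Hi-GMAE):
# mask some node features, encode the masked graph, reconstruct the masked part.
import torch
import torch.nn as nn


class GCNLayer(nn.Module):
    """One graph convolution: normalized adjacency times linearly projected features."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, adj_norm, x):
        return torch.relu(self.linear(adj_norm @ x))


class GraphMaskedAutoencoder(nn.Module):
    def __init__(self, feat_dim, hid_dim, mask_ratio=0.5):
        super().__init__()
        self.mask_ratio = mask_ratio
        self.mask_token = nn.Parameter(torch.zeros(feat_dim))  # learnable [MASK] feature
        self.encoder = GCNLayer(feat_dim, hid_dim)
        self.decoder = GCNLayer(hid_dim, feat_dim)

    def forward(self, adj_norm, x):
        n = x.size(0)
        num_mask = max(1, int(n * self.mask_ratio))
        mask_idx = torch.randperm(n, device=x.device)[:num_mask]
        x_masked = x.clone()
        x_masked[mask_idx] = self.mask_token          # hide selected node features
        z = self.encoder(adj_norm, x_masked)          # embeddings of the masked graph
        x_rec = self.decoder(adj_norm, z)             # reconstructed node features
        # Reconstruction loss is computed only on the nodes the encoder never saw.
        loss = ((x_rec[mask_idx] - x[mask_idx]) ** 2).mean()
        return loss, z


def normalize_adj(adj):
    """Symmetric normalization D^{-1/2} (A + I) D^{-1/2}."""
    adj = adj + torch.eye(adj.size(0))
    deg_inv_sqrt = adj.sum(dim=1).pow(-0.5)
    return deg_inv_sqrt[:, None] * adj * deg_inv_sqrt[None, :]


if __name__ == "__main__":
    # Toy 4-node path graph with 8-dimensional random node features.
    adj = torch.tensor([[0, 1, 0, 0],
                        [1, 0, 1, 0],
                        [0, 1, 0, 1],
                        [0, 0, 1, 0]], dtype=torch.float)
    x = torch.randn(4, 8)
    model = GraphMaskedAutoencoder(feat_dim=8, hid_dim=16)
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(50):
        loss, _ = model(normalize_adj(adj), x)
        opt.zero_grad()
        loss.backward()
        opt.step()
    print(f"final reconstruction loss: {loss.item():.4f}")
```

Hierarchical variants such as the one described above additionally coarsen the graph across scales before masking, but the masking-and-reconstruction objective shown here is the shared core.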

Disentangled Generative Graph Representation Learning

X Hu, Z Duan, X Liu, Y Li, B Chen, M Zhou - arXiv preprint arXiv …, 2024 - arxiv.org
Recently, generative graph models have shown promising results in learning graph
representations through self-supervised methods. However, most existing generative graph …

Hierarchical Vector Quantized Graph Autoencoder with Annealing-Based Code Selection

L Zeng, J Yu, J Zhu, Q Zhong, X Li - The Web Conference 2025 - openreview.net
Graph self-supervised learning has gained significant attention recently. However, many
existing approaches heavily depend on perturbations, and inappropriate perturbations may …