Graph neural networks for temporal graphs: State of the art, open challenges, and opportunities

A Longa, V Lachi, G Santin, M Bianchini, B Lepri… - arXiv preprint arXiv …, 2023 - arxiv.org
Graph Neural Networks (GNNs) have become the leading paradigm for learning on (static)
graph-structured data. However, many real-world systems are dynamic in nature, since the …

Advances in variational inference

C Zhang, J Bütepage, H Kjellström… - IEEE transactions on …, 2018 - ieeexplore.ieee.org
Many modern unsupervised or semi-supervised machine learning algorithms rely on
Bayesian probabilistic models. These models are usually intractable and thus require …

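The snippet's point is that exact posterior inference in these models is intractable, so variational inference optimizes a tractable surrogate instead. As a reminder of the standard objective underlying this literature (a textbook formulation, not one quoted from the survey), for data x, latent variables z, and an approximating family q_phi(z), the evidence lower bound (ELBO) is

\log p(x) \;\ge\; \mathbb{E}_{q_\phi(z)}\!\left[\log p(x, z) - \log q_\phi(z)\right] \;=\; \mathcal{L}(\phi),
\qquad
\log p(x) - \mathcal{L}(\phi) \;=\; \mathrm{KL}\!\left(q_\phi(z) \,\|\, p(z \mid x)\right),

so maximizing \mathcal{L}(\phi) over \phi is equivalent to driving q_\phi toward the intractable posterior.
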
Foundations and modeling of dynamic networks using dynamic graph neural networks: A survey

J Skarding, B Gabrys, K Musial - IEEE Access, 2021 - ieeexplore.ieee.org
Dynamic networks are used in a wide range of fields, including social network analysis,
recommender systems and epidemiology. Representing complex networks as structures …

Virtual adversarial training: a regularization method for supervised and semi-supervised learning

T Miyato, S Maeda, M Koyama… - IEEE transactions on …, 2018 - ieeexplore.ieee.org
We propose a new regularization method based on virtual adversarial loss: a new measure
of local smoothness of the conditional label distribution given input. Virtual adversarial loss …

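The snippet states the core idea: penalize how much the model's predictive distribution p(y|x) can change under a small worst-case perturbation of the input, a smoothness measure that needs no labels. Below is a hedged minimal sketch of such a loss in PyTorch; the function name vat_loss, the hyperparameters xi and epsilon, and the single power-iteration step are illustrative assumptions, not the authors' reference implementation.

```python
# Minimal sketch of a virtual-adversarial-style smoothness loss (assumptions:
# `model` maps inputs of shape (B, ...) to logits; `xi`, `epsilon`, `n_power`
# are illustrative defaults, not the paper's reference values).
import torch
import torch.nn.functional as F

def vat_loss(model, x, xi=1e-6, epsilon=2.5, n_power=1):
    """KL divergence between p(y|x) and p(y|x + r_adv), where r_adv is the
    direction (within an epsilon-ball) that most increases that divergence."""
    with torch.no_grad():
        p = F.softmax(model(x), dim=1)                 # reference distribution p(y|x)

    # Power iteration: estimate the most sensitive perturbation direction.
    d = torch.randn_like(x)
    for _ in range(n_power):
        d = xi * F.normalize(d.flatten(1), dim=1).view_as(x)
        d.requires_grad_(True)
        logq = F.log_softmax(model(x + d), dim=1)
        adv_div = F.kl_div(logq, p, reduction="batchmean")
        d = torch.autograd.grad(adv_div, d)[0].detach()

    r_adv = epsilon * F.normalize(d.flatten(1), dim=1).view_as(x)
    logq = F.log_softmax(model(x + r_adv), dim=1)
    # No labels are used, so this term can also be evaluated on unlabeled data.
    return F.kl_div(logq, p, reduction="batchmean")
```

In the semi-supervised setting this penalty is simply added to the usual cross-entropy on the labeled examples and evaluated on the unlabeled examples as well.
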
Variational graph recurrent neural networks

E Hajiramezanali, A Hasanzadeh… - Advances in neural …, 2019 - proceedings.neurips.cc
Representation learning over graph-structured data has been mostly studied in
static graph settings, while efforts for modeling dynamic graphs are still scant. In this paper …

Truncated diffusion probabilistic models

H Zheng, P He, W Chen, M Zhou - arXiv preprint arXiv:2202.09671, 2022 - academia.edu
Employing a forward Markov diffusion chain to gradually map the data to a noise distribution,
diffusion probabilistic models learn how to generate the data by inferring a reverse Markov …

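For context, the forward Markov chain mentioned in the snippet has a well-known closed form, sketched below for the standard (untruncated) process; the linear betas schedule and T = 1000 are illustrative assumptions, and the paper's actual contribution (truncating this chain) is only summarized in a comment.

```python
# Sketch of the standard DDPM forward diffusion chain (illustrative schedule).
import torch

T = 1000
betas = torch.linspace(1e-4, 2e-2, T)       # noise schedule beta_1 .. beta_T
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)   # abar_t = prod_{s <= t} alpha_s

def q_sample(x0, t, noise=None):
    """Draw x_t ~ q(x_t | x_0) = N(sqrt(abar_t) * x0, (1 - abar_t) * I) in closed
    form; `t` is a batch of integer timesteps indexing `alpha_bars`."""
    if noise is None:
        noise = torch.randn_like(x0)
    abar = alpha_bars[t].view(-1, *([1] * (x0.dim() - 1)))
    return abar.sqrt() * x0 + (1.0 - abar).sqrt() * noise

# The reverse Markov chain is learned by training a denoiser to predict `noise`
# from (x_t, t). Truncating the chain means stopping the forward process at some
# t < T and starting generation from a learned implicit prior rather than from
# pure Gaussian noise, which shortens sampling.
```
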
Score identity distillation: Exponentially fast distillation of pretrained diffusion models for one-step generation

M Zhou, H Zheng, Z Wang, M Yin… - Forty-first International …, 2024 - openreview.net
We introduce Score identity Distillation (SiD), an innovative data-free method that distills the
generative capabilities of pretrained diffusion models into a single-step generator. SiD not …

Semi-implicit graph variational auto-encoders

A Hasanzadeh, E Hajiramezanali… - Advances in neural …, 2019 - proceedings.neurips.cc
Semi-implicit graph variational auto-encoder (SIG-VAE) is proposed to expand the flexibility
of variational graph auto-encoders (VGAE) to model graph data. SIG-VAE employs a …

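To make the VGAE baseline referenced in the snippet concrete, here is a hedged sketch of a plain VGAE: a GCN-style encoder producing per-node Gaussian parameters and an inner-product decoder over the latent codes. SIG-VAE's actual contribution, replacing this Gaussian posterior with a more flexible semi-implicit one, is only noted in a comment; the class name, layer sizes, and dense-adjacency assumption are illustrative.

```python
# Hedged sketch of the plain VGAE that SIG-VAE generalizes. `A_norm` is assumed
# to be a dense, symmetrically normalized adjacency matrix with self-loops.
import torch
import torch.nn as nn

class VGAE(nn.Module):
    def __init__(self, in_dim, hid_dim=32, lat_dim=16):
        super().__init__()
        self.lin_hid = nn.Linear(in_dim, hid_dim)
        self.lin_mu = nn.Linear(hid_dim, lat_dim)
        self.lin_logvar = nn.Linear(hid_dim, lat_dim)

    def forward(self, A_norm, X):
        # GCN-style propagation: normalized adjacency times a linear transform.
        h = torch.relu(A_norm @ self.lin_hid(X))
        mu = A_norm @ self.lin_mu(h)
        logvar = A_norm @ self.lin_logvar(h)
        # Gaussian posterior via the reparameterization trick; SIG-VAE instead
        # draws these posterior parameters through an implicit hierarchical layer.
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        A_hat = torch.sigmoid(z @ z.t())        # inner-product edge decoder
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        return A_hat, kl
```

Training maximizes a graph ELBO: a (weighted) binary cross-entropy between A_hat and the observed adjacency, minus the KL term above.
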
Learning on attribute-missing graphs

X Chen, S Chen, J Yao, H Zheng… - IEEE transactions on …, 2020 - ieeexplore.ieee.org
Graphs with complete node attributes have been widely explored recently. In practice,
however, there are graphs where attributes are available for only a subset of nodes, while those of the …

SeeGera: Self-supervised semi-implicit graph variational auto-encoders with masking

X Li, T Ye, C Shan, D Li, M Gao - … of the ACM web conference 2023, 2023 - dl.acm.org
Generative graph self-supervised learning (SSL) aims to learn node representations by
reconstructing the input graph data. However, most existing methods focus on unsupervised …