Knowledge graph embedding: A survey from the perspective of representation spaces

J Cao, J Fang, Z Meng, S Liang - ACM Computing Surveys, 2024 - dl.acm.org
Knowledge graph embedding (KGE) is an increasingly popular technique that aims to
represent entities and relations of knowledge graphs into low-dimensional semantic spaces …
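
As a toy illustration of what "embedding entities and relations into a low-dimensional space" means here, the sketch below scores a triple with a translational (TransE-style) model; TransE is only one of the many representation choices the survey covers, and the entity/relation names and dimensions are made up.

```python
import numpy as np

# Minimal sketch of one classical KGE scoring function (TransE-style), purely
# to illustrate "entities and relations as low-dimensional vectors"; the survey
# itself discusses many other representation spaces.
rng = np.random.default_rng(0)
dim = 32
entity_emb = {e: rng.normal(size=dim) for e in ["paris", "france", "berlin", "germany"]}
relation_emb = {"capital_of": rng.normal(size=dim)}

def transe_score(head: str, relation: str, tail: str) -> float:
    """Plausibility of (head, relation, tail): larger (less negative) = more plausible."""
    h, r, t = entity_emb[head], relation_emb[relation], entity_emb[tail]
    return -np.linalg.norm(h + r - t)

# Untrained embeddings, so this score is meaningless; training would pull
# h + r close to t for true triples and push it away for corrupted ones.
print(transe_score("paris", "capital_of", "france"))
```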

DIFFormer: Scalable (graph) transformers induced by energy constrained diffusion

Q Wu, C Yang, W Zhao, Y He, D Wipf, J Yan - arXiv preprint arXiv …, 2023 - arxiv.org
Real-world data generation often involves complex inter-dependencies among instances,
violating the IID-data hypothesis of standard learning paradigms and posing a challenge for …

Adversarial robustness in graph neural networks: A Hamiltonian approach

K Zhao, Q Kang, Y Song, R She… - Advances in Neural …, 2023 - proceedings.neurips.cc
Graph neural networks (GNNs) are vulnerable to adversarial perturbations, including those
that affect both node features and graph topology. This paper investigates GNNs derived …

On the unreasonable effectiveness of feature propagation in learning on graphs with missing node features

E Rossi, H Kenlay, MI Gorinova… - Learning on graphs …, 2022 - proceedings.mlr.press
While Graph Neural Networks (GNNs) have recently become the de facto standard
for modeling relational data, they impose a strong assumption on the availability of the node …
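
The technique named in the title can be sketched roughly as below, assuming the common formulation in which missing features are filled in by repeated neighbor averaging while observed features are clamped back after every step; the normalization and iteration count here are illustrative.

```python
import numpy as np

# A minimal sketch of the feature-propagation idea: diffuse features over the
# normalized adjacency D^{-1/2} A D^{-1/2} and reset observed entries to their
# known values after each iteration, so information flows into the missing slots.
def propagate_features(adj: np.ndarray, x: np.ndarray, observed_mask: np.ndarray,
                       num_iters: int = 40) -> np.ndarray:
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.divide(1.0, np.sqrt(deg), out=np.zeros_like(deg), where=deg > 0)
    a_norm = d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    out = np.where(observed_mask, x, 0.0)          # start with unknowns at zero
    for _ in range(num_iters):
        out = a_norm @ out                         # diffuse to neighbors
        out = np.where(observed_mask, x, out)      # clamp observed features
    return out

# Toy graph: a 3-node path where the middle node's feature is missing.
adj = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
x = np.array([[1.0], [0.0], [3.0]])
mask = np.array([[True], [False], [True]])
print(propagate_features(adj, x, mask))  # the missing entry converges between 1 and 3
```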

GREAD: Graph neural reaction-diffusion networks

J Choi, S Hong, N Park, SB Cho - … Conference on Machine …, 2023 - proceedings.mlr.press
Graph neural networks (GNNs) are one of the most popular research topics for deep
learning. GNN methods typically have been designed on top of the graph signal processing …

Equivariant hypergraph diffusion neural operators

P Wang, S Yang, Y Liu, Z Wang, P Li - arXiv preprint arXiv:2207.06680, 2022 - arxiv.org
Hypergraph neural networks (HNNs) using neural networks to encode hypergraphs provide
a promising way to model higher-order relations in data and further solve relevant prediction …

MLPInit: Embarrassingly simple GNN training acceleration with MLP initialization

X Han, T Zhao, Y Liu, X Hu, N Shah - arXiv preprint arXiv:2210.00102, 2022 - arxiv.org
Training graph neural networks (GNNs) on large graphs is complex and extremely time
consuming. This is attributed to overheads caused by sparse matrix multiplication, which are …
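
A rough sketch of the MLP-initialization idea, under the assumption that the GNN's per-layer weight matrices have the same shapes as those of a feature-only MLP, so the cheaply trained MLP weights can simply be copied over as a starting point; the class names and sizes below are illustrative, not the paper's code.

```python
import torch
import torch.nn as nn

# A GCN layer H' = A_norm @ (H W) shares its weight shapes with an MLP layer
# H' = H W, so an MLP trained on node features alone (fast, no sparse matmul)
# can provide the initial weights for the GNN before graph-based fine-tuning.
class MLP(nn.Module):
    def __init__(self, d_in, d_hid, d_out):
        super().__init__()
        self.lin1, self.lin2 = nn.Linear(d_in, d_hid), nn.Linear(d_hid, d_out)
    def forward(self, x):
        return self.lin2(torch.relu(self.lin1(x)))

class GCN(nn.Module):
    def __init__(self, d_in, d_hid, d_out):
        super().__init__()
        self.lin1, self.lin2 = nn.Linear(d_in, d_hid), nn.Linear(d_hid, d_out)
    def forward(self, x, a_norm):
        x = torch.relu(a_norm @ self.lin1(x))   # linear map, then neighbor aggregation
        return a_norm @ self.lin2(x)

mlp, gcn = MLP(16, 32, 4), GCN(16, 32, 4)
# ... train `mlp` on node features only (standard, fast MLP training) ...
gcn.load_state_dict(mlp.state_dict())  # same parameter names and shapes, so this works
# ... then train or fine-tune `gcn` using the graph structure ...
```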

Graph neural networks as gradient flows: understanding graph convolutions via energy

F Di Giovanni, J Rowbottom… - arXiv preprint arXiv …, 2022 - academia.edu
Gradient flows are differential equations that minimize an energy functional and constitute
the main descriptors of physical systems. We apply this formalism to Graph Neural Networks …
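
As a worked example of the formalism described in the snippet (standard notation, not necessarily the paper's), taking the energy to be the graph Dirichlet energy gives heat diffusion as the gradient flow, and one explicit-Euler step of that flow has the form of a residual graph-convolution update:

```latex
% Dirichlet energy with the symmetrically normalized Laplacian, its gradient
% flow, and a forward-Euler discretization of that flow.
\[
  E(X) \;=\; \tfrac{1}{2}\,\mathrm{tr}\!\left(X^{\top} \Delta X\right),
  \qquad \Delta \;=\; I - D^{-1/2} A D^{-1/2},
\]
\[
  \dot{X}(t) \;=\; -\nabla_X E\big(X(t)\big) \;=\; -\Delta X(t),
\]
\[
  X^{(t+1)} \;=\; X^{(t)} - \tau\,\Delta X^{(t)}
            \;=\; (1-\tau)\,X^{(t)} + \tau\, D^{-1/2} A D^{-1/2} X^{(t)}.
\]
% One Euler step of the flow is thus a residual graph-convolution-style update;
% choosing richer energies yields other (including sharpening) dynamics.
```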

A generalized neural diffusion framework on graphs

Y Li, X Wang, H Liu, C Shi - Proceedings of the AAAI Conference on …, 2024 - ojs.aaai.org
Recent studies reveal the connection between GNNs and the diffusion process, which
has motivated many diffusion-based GNNs. However, since these two …

Improving graph neural networks with learnable propagation operators

M Eliasof, L Ruthotto, E Treister - … Conference on Machine …, 2023 - proceedings.mlr.press
Graph Neural Networks (GNNs) are limited in their propagation operators. In many
cases, these operators contain non-negative elements only and are shared across …
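
To make the limitation in the snippet concrete, here is a hedged sketch of a propagation operator whose per-channel mixing coefficients are learned and may be negative (so some channels smooth while others sharpen), rather than fixed, non-negative, and shared across channels; the parameterization is illustrative, not the paper's.

```python
import torch
import torch.nn as nn

# A learnable propagation operator: each channel mixes the node's own signal
# and the neighbor average with trainable, possibly signed, coefficients,
# instead of applying one fixed non-negative operator such as D^{-1/2} A D^{-1/2}.
class LearnablePropagation(nn.Module):
    def __init__(self, num_channels: int):
        super().__init__()
        self.alpha = nn.Parameter(torch.ones(num_channels))   # weight on the node itself
        self.beta = nn.Parameter(torch.zeros(num_channels))   # signed weight on neighbors

    def forward(self, x: torch.Tensor, a_norm: torch.Tensor) -> torch.Tensor:
        # x: (num_nodes, num_channels); a_norm: normalized adjacency (num_nodes, num_nodes)
        return self.alpha * x + self.beta * (a_norm @ x)

# beta > 0 behaves like diffusion/smoothing, beta < 0 like sharpening
# (a Laplacian-style high-pass response), and both are learned per channel.
```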