Everything is connected: Graph neural networks

P Veličković - Current Opinion in Structural Biology, 2023 - Elsevier
In many ways, graphs are the main modality of data we receive from nature, because
most of the patterns we see, both in natural and artificial systems, are elegantly …

Graph neural networks for materials science and chemistry

P Reiser, M Neubert, A Eberhard, L Torresi… - Communications Materials, 2022 - nature.com
Machine learning plays an increasingly important role in many areas of chemistry
and materials science, being used to predict materials properties, accelerate simulations …

Recipe for a general, powerful, scalable graph transformer

L Rampášek, M Galkin, VP Dwivedi… - Advances in Neural Information Processing Systems, 2022 - proceedings.neurips.cc
We propose a recipe for building a general, powerful, scalable (GPS) graph Transformer
with linear complexity and state-of-the-art results on a diverse set of benchmarks. Graph …
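
The recipe pairs a local message-passing block with a global attention block in every layer. As a rough illustrative sketch only (not the authors' implementation; the class and parameter names below are made up, and full quadratic attention stands in for the linear-attention module the paper uses to reach linear complexity), such a hybrid layer might look like:

```python
import torch
import torch.nn as nn

class HybridGraphLayer(nn.Module):
    """Local message passing plus global self-attention, GPS-style (sketch)."""

    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.local = nn.Linear(dim, dim)
        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: [n, dim] node features; adj: [n, n] dense adjacency (toy setting).
        deg = adj.sum(-1, keepdim=True).clamp(min=1.0)
        local = self.local(adj @ x / deg)               # 1-hop mean aggregation
        glob, _ = self.attn(x[None], x[None], x[None])  # all-pairs attention
        # Sum the two branches and mix through an MLP, with a residual.
        return x + self.mlp(local + glob[0])

# Usage on a random 5-node graph:
layer = HybridGraphLayer(dim=16)
x = torch.randn(5, 16)
adj = (torch.rand(5, 5) > 0.5).float()
print(layer(x, adj).shape)  # torch.Size([5, 16])
```

Summing the two branches lets the layer keep the local structural bias of message passing while still allowing any pair of nodes to exchange information in a single hop.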

NodeFormer: A scalable graph structure learning transformer for node classification

Q Wu, W Zhao, Z Li, DP Wipf… - Advances in Neural Information Processing Systems, 2022 - proceedings.neurips.cc
Graph neural networks have been extensively studied for learning with interconnected data.
Despite this, recent evidence has revealed GNNs' deficiencies related to over-squashing …

On over-squashing in message passing neural networks: The impact of width, depth, and topology

F Di Giovanni, L Giusti, F Barbero… - International Conference on Machine Learning, 2023 - proceedings.mlr.press
Message Passing Neural Networks (MPNNs) are instances of Graph Neural
Networks that leverage the graph to send messages over the edges. This inductive bias …
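
For reference, a standard way to write the message-passing scheme this snippet refers to (the common MPNN formulation, restated here) is

```latex
h_v^{(t+1)} = \phi\!\left( h_v^{(t)},\; \bigoplus_{u \in \mathcal{N}(v)} \psi\!\left( h_v^{(t)}, h_u^{(t)} \right) \right)
```

where \psi builds messages along edges, \bigoplus is a permutation-invariant aggregator (sum, mean, or max) over the neighbourhood \mathcal{N}(v), and \phi updates the node state. Over-squashing arises because, as depth grows, messages from an exponentially growing receptive field must be compressed through these fixed-size vectors.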

Long range graph benchmark

VP Dwivedi, L Rampášek, M Galkin… - Advances in Neural Information Processing Systems, 2022 - proceedings.neurips.cc
Graph Neural Networks (GNNs) that are based on the message passing (MP)
paradigm generally exchange information between 1-hop neighbors to build node …
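
The 1-hop exchange mentioned here means a message-passing GNN needs at least k layers before two nodes k hops apart can influence one another, which is what makes long-range tasks hard. A toy NumPy illustration of this locality (assumed for this note, not code from the benchmark):

```python
import numpy as np

# Path graph on 6 nodes; a one-hot signal on node 0.
n = 6
adj = np.zeros((n, n))
for i in range(n - 1):
    adj[i, i + 1] = adj[i + 1, i] = 1.0
x = np.zeros(n)
x[0] = 1.0

# Each round of 1-hop message passing widens the receptive field by one hop:
# after t rounds, only nodes within distance t of node 0 carry nonzero signal.
for t in range(1, 4):
    x = x + adj @ x   # self + neighbour aggregation (one toy MP round)
    print(t, np.flatnonzero(x))
# 1 [0 1]
# 2 [0 1 2]
# 3 [0 1 2 3]
```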

Structure-aware transformer for graph representation learning

D Chen, L O'Bray, K Borgwardt - International Conference on Machine Learning, 2022 - proceedings.mlr.press
The Transformer architecture has gained growing attention in graph representation learning
recently, as it naturally overcomes several limitations of graph neural networks (GNNs) by …

A survey on oversmoothing in graph neural networks

TK Rusch, MM Bronstein, S Mishra - arXiv preprint arXiv:2303.10993, 2023 - arxiv.org
Node features of graph neural networks (GNNs) tend to become more similar with the
increase of the network depth. This effect is known as over-smoothing, which we …
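
The effect described here is easy to reproduce: on a connected graph, repeatedly applying a mean-aggregation operator (a linear caricature of a deep GNN, used here purely as an assumed toy model) collapses node features toward a common vector. A minimal sketch:

```python
import numpy as np

# Ring graph with self-loops: node i is connected to i-1, i+1, and itself.
n = 8
adj = np.eye(n)
for i in range(n):
    adj[i, (i + 1) % n] = adj[(i + 1) % n, i] = 1.0
P = adj / adj.sum(1, keepdims=True)   # row-stochastic mean aggregator

rng = np.random.default_rng(0)
x = rng.standard_normal((n, 4))       # random initial node features
for depth in (1, 4, 16, 64):
    h = np.linalg.matrix_power(P, depth) @ x
    # Peak-to-peak spread of each feature shrinks toward 0 as depth grows:
    print(depth, float(np.ptp(h, axis=0).max()))
```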

Graph inductive biases in transformers without message passing

L Ma, C Lin, D Lim, A Romero-Soriano… - International …, 2023 - proceedings.mlr.press
Transformers for graph data are increasingly studied and successful in numerous
learning tasks. Graph inductive biases are crucial for Graph Transformers, and previous …

How attentive are graph attention networks?

S Brody, U Alon, E Yahav - arXiv preprint arXiv:2105.14491, 2021 - arxiv.org
Graph Attention Networks (GATs) are one of the most popular GNN architectures and are
considered the state-of-the-art architecture for representation learning with graphs. In …
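
For context, the two attention scoring functions at issue can be written as follows (restated from the GAT and GATv2 papers, with notation lightly simplified):

```latex
% GAT (Velickovic et al., 2018): the learned vector a is applied outside the
% nonlinearity, which makes the resulting neighbour ranking "static".
e_{\mathrm{GAT}}(h_i, h_j)   = \mathrm{LeakyReLU}\!\left( \mathbf{a}^{\top} \left[ W h_i \,\|\, W h_j \right] \right)

% GATv2 (Brody et al., 2021): applying a after the nonlinearity instead
% yields genuinely "dynamic" attention.
e_{\mathrm{GATv2}}(h_i, h_j) = \mathbf{a}^{\top} \mathrm{LeakyReLU}\!\left( W \left[ h_i \,\|\, h_j \right] \right)
```

In both cases the scores are normalized with a softmax over j ∈ N(i); the paper's central observation is that GAT's ordering of operations makes the ranking of attended neighbours independent of the query node.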