The heterophilic graph learning handbook: Benchmarks, models, theoretical analysis, applications and challenges

S Luan, C Hua, Q Lu, L Ma, L Wu, X Wang… - arXiv preprint, 2024 - arxiv.org

Graph convolutional recurrent networks for reward shaping in reinforcement learning
H Sami, J Bentahar, A Mourad, H Otrok, E Damiani - Information Sciences, 2022 - Elsevier
In this paper, we consider the problem of low-speed convergence in Reinforcement
Learning (RL). As a solution, various potential-based reward shaping techniques were …
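
The snippet breaks off at the shaping techniques themselves; for context, the classical potential-based formulation (Ng, Harada, and Russell, 1999) adds a shaping term F(s, s') = γΦ(s') − Φ(s) to the environment reward, which provably preserves optimal policies. A minimal sketch of that mechanism, with `phi` as a hypothetical hand-crafted potential (the paper above learns such potentials rather than hand-crafting them):

```python
# Minimal sketch of potential-based reward shaping (Ng et al., 1999).
# `phi` is a hypothetical user-supplied potential function; the paper
# above learns such potentials instead of hand-crafting them.

def shaped_reward(reward, state, next_state, phi, gamma=0.99):
    """Return r + F(s, s') with F(s, s') = gamma * phi(s') - phi(s)."""
    return reward + gamma * phi(next_state) - phi(state)

# Example: a hand-crafted potential for a goal-reaching gridworld,
# here the negative Manhattan distance to a (hypothetical) goal cell.
def phi(state):
    goal = (10, 10)
    return -abs(state[0] - goal[0]) - abs(state[1] - goal[1])

r_shaped = shaped_reward(0.0, state=(0, 0), next_state=(1, 0), phi=phi)
```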

Provably efficient offline reinforcement learning with trajectory-wise reward

T Xu, Y Wang, S Zou, Y Liang - IEEE Transactions on …, 2024 - ieeexplore.ieee.org
The remarkable success of reinforcement learning (RL) heavily relies on observing the
reward of every visited state-action pair. In many real world applications, however, an agent …
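
In the trajectory-wise setting the agent observes only the total return of an episode, not the reward of each step. The paper's algorithm is not described in the snippet; as a purely illustrative baseline, the simplest way to recover a per-step signal is to spread the episode return uniformly over its steps:

```python
# Illustrative only: the naive uniform redistribution baseline for
# trajectory-wise feedback. This is NOT the paper's method, which the
# snippet does not describe.

def uniform_redistribution(trajectory_return: float, length: int) -> list[float]:
    """Assign an equal share of the episode return to every step."""
    return [trajectory_return / length] * length

per_step = uniform_redistribution(trajectory_return=10.0, length=5)
# -> [2.0, 2.0, 2.0, 2.0, 2.0]
```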

Neural algorithmic reasoners are implicit planners

AI Deac, P Veličković, O Milinkovic… - Advances in …, 2021 - proceedings.neurips.cc
Implicit planning has emerged as an elegant technique for combining learned models of the
world with end-to-end model-free reinforcement learning. We study the class of implicit …

Snowflake: Scaling GNNs to high-dimensional continuous control via parameter freezing

C Blake, V Kurin, M Igl… - Advances in Neural …, 2021 - proceedings.neurips.cc
Recent research has shown that graph neural networks (GNNs) can learn policies for
locomotion control that are as effective as a typical multi-layer perceptron (MLP), with …
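
The snippet attributes the scaling fix to parameter freezing. Snowflake's exact freezing schedule is not given here; below is a minimal PyTorch sketch of the generic mechanism, freezing a chosen sub-module so its weights stop receiving gradient updates. The module names are hypothetical stand-ins, not Snowflake's code:

```python
import torch.nn as nn

# Generic parameter freezing in PyTorch: gradients are no longer computed
# for the frozen sub-module, so the optimizer leaves its weights untouched.

policy = nn.ModuleDict({
    "message_passing": nn.Linear(64, 64),  # stand-in for a GNN block
    "action_head": nn.Linear(64, 8),       # stand-in for the policy head
})

for param in policy["message_passing"].parameters():
    param.requires_grad = False  # freeze the message-passing block

trainable = [p for p in policy.parameters() if p.requires_grad]
# Pass only `trainable` to the optimizer, e.g. torch.optim.Adam(trainable).
```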

On addressing the limitations of graph neural networks

S Luan - arXiv preprint arXiv:2306.12640, 2023 - arxiv.org
This report gives a …

Reward shaping with hierarchical graph topology

J Sang, Y Wang, W Ding, Z Ahmadkhan, L Xu - Pattern Recognition, 2023 - Elsevier
Reward shaping using GCNs is a popular research area in reinforcement learning.
However, it is difficult to shape potential functions for complicated tasks. In this paper, we …
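
The snippet notes that potentials are hard to shape by hand for complicated tasks, which is why this line of work derives them from graph structure instead. The paper's hierarchical construction is not shown here; as a hedged sketch of the general idea, a scalar potential Φ(s) can be read out from a single hand-rolled GCN layer over a state graph (all shapes, weights, and the mean readout are illustrative assumptions):

```python
import numpy as np

# Sketch: one hand-rolled GCN layer whose mean readout serves as a scalar
# potential phi(s) for reward shaping. Shapes, weights, and the readout
# are illustrative assumptions, not the paper's architecture.

def gcn_potential(adj: np.ndarray, feats: np.ndarray, weight: np.ndarray) -> float:
    a_hat = adj + np.eye(adj.shape[0])            # add self-loops
    deg_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    a_norm = deg_inv_sqrt @ a_hat @ deg_inv_sqrt  # symmetric normalization
    hidden = np.tanh(a_norm @ feats @ weight)     # one propagation step
    return float(hidden.mean())                   # scalar potential phi(s)

adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # toy graph
feats = np.random.randn(3, 4)                     # toy node features
weight = np.random.randn(4, 8)                    # toy layer weights
phi_s = gcn_potential(adj, feats, weight)
# phi_s can then be plugged into F(s, s') = gamma * phi(s') - phi(s).
```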