Global self-attention as a replacement for graph convolution
We propose an extension to the transformer neural network architecture for general-purpose
graph learning by adding a dedicated pathway for pairwise structural information, called …
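The "dedicated pathway for pairwise structural information" that this entry describes can be sketched as global scaled dot-product attention whose logits receive an additive per-pair (edge) bias. This is a minimal illustrative sketch, not the paper's exact formulation; all function and variable names here are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def edge_biased_attention(X, E, Wq, Wk, Wv):
    """Global self-attention over all node pairs, with a scalar
    structural bias E[i, j] added to the attention logits.

    X : (n, d)  node features
    E : (n, n)  per-pair structural bias (illustrative; could encode
                edges, shortest-path distances, etc.)
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d = Q.shape[-1]
    logits = Q @ K.T / np.sqrt(d) + E   # pairwise pathway enters here
    A = softmax(logits, axis=-1)        # attends over *all* nodes (global)
    return A @ V

rng = np.random.default_rng(0)
n, d = 5, 8
X = rng.normal(size=(n, d))
E = rng.normal(size=(n, n))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
out = edge_biased_attention(X, E, Wq, Wk, Wv)
print(out.shape)  # (5, 8)
```

Because attention is computed over every node pair, no explicit graph convolution or neighborhood restriction is needed; the structure of the graph enters only through the bias term.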
[PDF] Edge-augmented graph transformers: Global self-attention is enough for graphs
Transformer neural networks have achieved state-of-the-art results for unstructured data such as text and images, but their adoption for graph-structured data has been limited. This is …
Depth estimation using feature pyramid U-net and polarized self-attention for road scenes
B Tao, Y Shen, X Tong, D Jiang, B Chen - Photonics, 2022 - mdpi.com
Studies have shown that observed image texture details and semantic information are of great significance for depth estimation in road scenes. However, there are …
Triplet Interaction Improves Graph Transformers: Accurate Molecular Graph Learning with Triplet Graph Transformers
Graph transformers typically lack direct pair-to-pair communication, instead forcing
neighboring pairs to exchange information via a common node. We propose the Triplet …
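The direct pair-to-pair communication this entry contrasts with node-mediated message passing can be sketched as a "triplet" update: the feature of pair (i, j) aggregates the features of pairs (i, k) and (k, j) over all intermediate nodes k. This is an illustrative sketch under assumed shapes, not the paper's actual Triplet Graph Transformer layer.

```python
import numpy as np

def triplet_update(E):
    """One pair-to-pair 'triplet' update: pair (i, j) aggregates
    information from pairs (i, k) and (k, j) over all intermediate
    nodes k, so pairs communicate directly rather than only through
    a shared node's feature. Illustrative sketch only.

    E : (n, n, d)  pair (edge) features
    """
    n = E.shape[0]
    # einsum over the shared node k: agg[i, j] = sum_k E[i, k] * E[k, j]
    agg = np.einsum('ikd,kjd->ijd', E, E) / n
    return E + agg  # residual update of the pair features

rng = np.random.default_rng(1)
E = rng.normal(size=(4, 4, 8))
U = triplet_update(E)
print(U.shape)  # (4, 4, 8)
```

The key point is that the update reads and writes pair features directly, instead of routing all pairwise information through individual node embeddings.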
[PDF] Skeleton action recognition via graph convolutional network with self-attention module
M Li, K Chen, Y Bai, J Pei - Electronic Research Archive, 2024 - aimspress.com
Skeleton-based action recognition is an important but challenging task in the study of video
understanding and human-computer interaction. However, existing methods suffer from two …