Inventor(s)

Abstract

Graph Neural Networks (GNNs) have shown great promise in various graph-based tasks. However, their performance often suffers when applied to heterophilous graphs, where connected nodes exhibit dissimilar features. In this paper, we introduce the Heterophily-Aware Graph Transformer (HAGT), a novel GNN architecture that combines an adaptive subgraph sampling technique with graph Transformer layers to learn effectively from heterophilous graphs. HAGT employs relational attention to capture complex node interactions and to aggregate information effectively. We present a detailed algorithm and implementation of HAGT using PyTorch. Experimental results on several benchmark heterophilous datasets show that HAGT outperforms standard GNNs and state-of-the-art heterophily-focused methods.
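As a rough illustration of the kind of heterophily-aware aggregation the abstract describes, the following is a minimal, dependency-free sketch of relational attention over a node's neighborhood. All names (`relational_attention`, `w_sim`, `w_dis`, the similarity threshold) are hypothetical simplifications, not the paper's actual PyTorch implementation: here neighbors are scored by cosine similarity to the center node, weighted differently depending on whether they appear homophilous or heterophilous, and combined via a softmax-normalized weighted sum.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def relational_attention(h_u, neighbors, w_sim, w_dis, threshold=0.0):
    """Hypothetical sketch of relation-dependent neighbor aggregation.

    Neighbors whose cosine similarity to the center feature `h_u` exceeds
    `threshold` are treated as homophilous (scaled by `w_sim`); the rest
    are treated as heterophilous (scaled by `w_dis`). The scaled scores
    are softmax-normalized and used to form a convex combination of
    neighbor features.
    """
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def norm(a):
        return math.sqrt(dot(a, a)) or 1.0  # guard against zero vectors

    scores, msgs = [], []
    for h_v in neighbors:
        sim = dot(h_u, h_v) / (norm(h_u) * norm(h_v))
        w = w_sim if sim > threshold else w_dis
        scores.append(w * sim)
        msgs.append(h_v)

    alphas = softmax(scores)
    dim = len(h_u)
    return [sum(a * m[i] for a, m in zip(alphas, msgs)) for i in range(dim)]
```

A real implementation would learn the relation weights and use multi-head attention over sampled subgraphs; this sketch only shows the shape of the computation.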

Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 License.
