FastGAT: Simple and Efficient Graph Attention Neural Network with Global-aware Adaptive Computational Node Attention
- DOI:
- 10.60864/znj0-tz35
- Submitted by:
- Shenzhi Yang
- Last updated:
- 6 June 2024 - 10:28am
- Document Type:
- Poster
- Document Year:
- 2024
- Presenters:
- Shenzhi Yang
The graph attention network (GAT) is a fundamental model within graph neural networks, extensively employed across various applications. It assigns different weights to different nodes during feature aggregation by comparing the similarity of features between node pairs. However, as the size and density of graph data increase, GAT's computational cost rises steeply. In response, we present FastGAT, a simpler and more efficient graph attention neural network with global-aware adaptive computational node attention. Instead of computing pairwise attention scores, FastGAT assigns a trainable attention weight to each node and updates it adaptively during training. Experiments on eight public datasets show that FastGAT reduces training time by 6.22% to 19.50% while maintaining the same performance. The code is available at https://github.com/szYang2000/FastGAT.
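The per-node attention idea described above can be sketched as follows. This is a minimal, hypothetical NumPy illustration of the contrast with standard GAT (which scores every edge from a feature pair): here each node carries a single trainable score, which is softmax-normalized over each node's neighborhood before aggregation. The function name and shapes are illustrative assumptions, not the authors' actual implementation (see the linked repository for that).

```python
import numpy as np

def nodewise_attention_layer(x, adj, node_scores):
    """Aggregate neighbor features using one attention score per node.

    x           : (N, F) node feature matrix
    adj         : (N, N) binary adjacency matrix, assumed to include self-loops
    node_scores : (N,) per-node attention logits (trainable in a real model)

    Unlike GAT, no pairwise score is computed: the logit for edge (i, j)
    is simply node j's score, masked to i's neighborhood and normalized.
    """
    # Broadcast each source node's score across all destination rows,
    # masking non-edges with -inf so they vanish under softmax.
    logits = np.where(adj > 0, node_scores[None, :], -np.inf)
    # Numerically stable softmax over each node's neighborhood (rows).
    logits = logits - logits.max(axis=1, keepdims=True)
    alpha = np.exp(logits)
    alpha = alpha / alpha.sum(axis=1, keepdims=True)
    # Weighted aggregation of neighbor features.
    return alpha @ x
```

With all node scores equal, the layer reduces to mean aggregation over each neighborhood, which makes the behavior easy to sanity-check.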