Poster Session 4 · Thursday, December 4, 2025 4:30 PM → 7:30 PM
#3810
Sketch-Augmented Features Improve Learning Long-Range Dependencies in Graph Neural Networks
Abstract
Graph Neural Networks (GNNs) learn on graph-structured data by iteratively aggregating local neighborhood information.
While this local message passing paradigm imparts a powerful inductive bias and exploits graph sparsity, it also yields three key challenges:
- oversquashing of long-range information,
- oversmoothing of node representations, and
- limited expressive power.
In this work, we inject randomized global embeddings of node features, which we term Sketched Random Features, into standard GNNs, enabling them to capture long-range dependencies efficiently. The embeddings are unique, distance-sensitive, and topology-agnostic: properties that, as we show both analytically and empirically, alleviate the aforementioned limitations when injected into GNNs.
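To make the idea concrete, the following is a minimal, hypothetical sketch of feature augmentation in this spirit, using random Fourier features as a stand-in for the paper's actual Sketched Random Features construction (the function name, sketch dimension, and kernel choice here are illustrative assumptions, not the authors' method). Inner products of such sketches approximate a Gaussian kernel on the raw node features, making them distance-sensitive while remaining independent of graph topology.

```python
import numpy as np

def sketched_random_features(X, dim=64, sigma=1.0, seed=0):
    """Hypothetical sketch: random Fourier features of node features.
    Dot products of the outputs approximate a Gaussian kernel on X,
    so the sketch is distance-sensitive yet topology-agnostic."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = rng.normal(scale=1.0 / sigma, size=(d, dim))   # random projection
    b = rng.uniform(0.0, 2.0 * np.pi, size=dim)        # random phases
    return np.sqrt(2.0 / dim) * np.cos(X @ W + b)

# Augment GNN inputs by concatenating the global sketch to each
# node's local features before message passing.
X = np.random.default_rng(1).normal(size=(5, 3))   # 5 nodes, 3 features
Z = sketched_random_features(X, dim=8)
X_aug = np.concatenate([X, Z], axis=1)             # shape (5, 3 + 8)
```

The augmented matrix `X_aug` would then be fed to any standard GNN in place of the raw features, leaving the message-passing architecture itself unchanged.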
Experimental results on real-world graph learning tasks confirm that this strategy consistently improves performance over baseline GNNs, offering both a standalone solution and a complementary enhancement to existing techniques such as graph positional encodings.