5 papers across 3 sessions
Graph transformers fall short of surpassing the performance ceiling of GNNs; instead, introducing appropriate constraints can effectively enhance the generalization of GNNs.
We propose a new graph transformer with a novel token-swapping operation that generates diverse token sequences, further enhancing model performance.
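The paper does not spell out the swapping mechanism here, but the core idea of perturbing a token sequence by exchanging positions can be sketched as follows; the function name and the fixed number of swaps are illustrative assumptions, not the paper's actual algorithm:

```python
import random

def swap_tokens(tokens, num_swaps=2, seed=None):
    """Illustrative sketch: randomly swap pairs of tokens to produce a
    diverse reordering of the same token set (no tokens added or lost)."""
    rng = random.Random(seed)
    out = list(tokens)
    for _ in range(num_swaps):
        i, j = rng.sample(range(len(out)), 2)  # pick two distinct positions
        out[i], out[j] = out[j], out[i]
    return out

# Each call with a different seed yields a different ordering of the
# same tokens, which is the source of sequence diversity.
```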
We propose the first label memorization framework for node classification in GNNs and investigate the relationship between memorization and graph/node properties.
We propose a Graph Transformer on pseudo-Riemannian manifolds.
A personalized subgraph federated learning framework that learns inter‑client similarity on the fly, enabling adaptive, client‑specific aggregation at the server.