

Poster

Enhancing Size Generalization in Graph Neural Networks through Disentangled Representation Learning

Zheng Huang · Qihui Yang · Dawei Zhou · Yujun Yan


Abstract:

Although most graph neural networks (GNNs) can operate on graphs of any size, their classification performance often declines on graphs larger than those encountered during training. Existing methods do not adequately remove size information from graph representations, resulting in suboptimal performance and a dependence on backbone models. In response, we propose DISGEN, a novel, model-agnostic framework designed to disentangle size factors from graph representations. DISGEN employs an augmentation strategy and introduces a decoupling loss that minimizes shared information in hidden representations, with theoretical guarantees for its efficacy. Our empirical results show that DISGEN outperforms state-of-the-art models by up to 7% on real-world datasets, underscoring its effectiveness in enhancing the size generalizability of GNNs.
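To make the decoupling idea concrete, below is a minimal PyTorch sketch: it splits a shared graph-level embedding into a size-related and a task-related part, then penalizes their batch cross-correlation as one generic way to "minimize shared information in hidden representations." This is an illustration under assumed details, not the paper's implementation; the names SizeTaskHeads and decoupling_loss, the two-head architecture, and the specific correlation penalty are all hypothetical stand-ins for DISGEN's actual loss and augmentation strategy.

import torch
import torch.nn as nn

class SizeTaskHeads(nn.Module):
    """Projects a shared graph embedding into size- and task-related parts
    (hypothetical heads, not the paper's architecture)."""
    def __init__(self, dim_in: int, dim_out: int):
        super().__init__()
        self.size_head = nn.Linear(dim_in, dim_out)
        self.task_head = nn.Linear(dim_in, dim_out)

    def forward(self, h: torch.Tensor):
        return self.size_head(h), self.task_head(h)

def decoupling_loss(z_size: torch.Tensor, z_task: torch.Tensor) -> torch.Tensor:
    """Penalize shared information via the batch cross-correlation between
    standardized size and task representations (a generic stand-in for the
    paper's shared-information objective)."""
    z_size = (z_size - z_size.mean(0)) / (z_size.std(0) + 1e-6)
    z_task = (z_task - z_task.mean(0)) / (z_task.std(0) + 1e-6)
    corr = (z_size.T @ z_task) / z_size.shape[0]  # (dim, dim) cross-correlation
    return corr.pow(2).mean()                     # drive all correlations to zero

# Usage with a batch of pooled graph-level embeddings from any GNN backbone:
heads = SizeTaskHeads(dim_in=64, dim_out=32)
h = torch.randn(128, 64)                # placeholder for backbone outputs
z_size, z_task = heads(h)
loss = decoupling_loss(z_size, z_task)
loss.backward()

The cross-correlation penalty is only one option; any differentiable proxy for shared information (e.g., a contrastive or mutual-information bound) could take its place in this sketch, which is what makes the framework model-agnostic with respect to the backbone.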
