

Poster

An Empirical Study of Realized GNN Expressiveness

Yanbo Wang · Muhan Zhang


Abstract: Research on the theoretical expressiveness of Graph Neural Networks (GNNs) has developed rapidly, and many methods have been proposed to enhance expressiveness. However, most methods lack a uniform expressiveness measure, except for a few that strictly follow the $k$-dimensional Weisfeiler-Lehman ($k$-WL) test hierarchy. Their theoretical analyses are often limited to distinguishing certain families of non-isomorphic graphs, making it difficult to compare their expressiveness quantitatively. In contrast to theoretical analysis, another way to measure expressiveness is to evaluate model performance empirically on 1-WL-indistinguishable graphs. Previous datasets suffer from problems of difficulty (any model surpassing 1-WL reaches nearly 100% accuracy), granularity (models tend to be either 100% correct or close to random guessing), and scale (only a handful of essentially different graphs are involved). To address these limitations, we re-study the realized expressive power of different expressive GNN models on a new expressiveness dataset, BREC, which offers greater difficulty (with up to 4-WL-indistinguishable graphs), finer granularity (it can compare models between 1-WL and 3-WL), and a larger scale (800 1-WL-indistinguishable graphs that are non-isomorphic to each other). We systematically test 23 models with higher-than-1-WL expressiveness on BREC. Our experiments give the first thorough measurement of the realized expressiveness of these state-of-the-art beyond-1-WL GNN models and reveal the gap between theoretical and realized expressiveness. The dataset and evaluation code are released at: https://github.com/brec-icml2024/brec-icml2024
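
The notion of "1-WL-indistinguishable" pairs is central to the benchmark: two non-isomorphic graphs that 1-dimensional Weisfeiler-Lehman color refinement cannot tell apart. The following is a minimal, self-contained sketch of how such a pair can be certified with NetworkX; it is not the authors' released evaluation code (see the repository above), and the graph pair used is a textbook example rather than one drawn from BREC.

```python
# A minimal sketch of the 1-WL (color refinement) test, used here only to
# illustrate what "1-WL-indistinguishable" means. Not BREC's official
# evaluation procedure.
from collections import Counter
import networkx as nx

def wl_histogram(G: nx.Graph, rounds: int) -> Counter:
    """Histogram of node colors after `rounds` of 1-WL refinement."""
    colors = {v: 0 for v in G.nodes}  # uniform initial coloring
    for _ in range(rounds):
        # A node's new color encodes its old color together with the
        # multiset of its neighbors' old colors.
        colors = {
            v: hash((colors[v], tuple(sorted(colors[u] for u in G[v]))))
            for v in G.nodes
        }
    return Counter(colors.values())

def one_wl_indistinguishable(G: nx.Graph, H: nx.Graph) -> bool:
    """True iff 1-WL assigns both graphs the same color histogram."""
    # Running the same fixed number of rounds on both graphs keeps the
    # hash-based color labels comparable across the two graphs.
    rounds = max(G.number_of_nodes(), H.number_of_nodes())
    return wl_histogram(G, rounds) == wl_histogram(H, rounds)

# Classic pair: a 6-cycle vs. two disjoint triangles are non-isomorphic,
# yet 1-WL cannot tell them apart (both are 2-regular on 6 nodes).
G = nx.cycle_graph(6)
H = nx.disjoint_union(nx.cycle_graph(3), nx.cycle_graph(3))
assert not nx.is_isomorphic(G, H)
assert one_wl_indistinguishable(G, H)
```

A model with expressiveness strictly beyond 1-WL should map such a pair to distinct representations; empirical benchmarks like BREC measure how often a trained model actually does so.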
