You can diagnose what a graph dataset requires, and why GNNs work on it, by replacing learned message passing with interpretable signal components. This white-box approach remains competitive with black-box models while revealing which graph properties (smoothing, raw features, class geometry) matter most.
This paper introduces WG-SRC, a transparent method for understanding what graph neural networks learn on node classification tasks.
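The paper does not spell out WG-SRC's internals here, but the general white-box idea — replacing learned message passing with fixed, interpretable signal components — can be sketched as follows. This is a hypothetical illustration, not the paper's method: the function name `interpretable_node_classifier`, the mixing weight `alpha`, and the propagation depth `k` are all assumptions. It combines a parameter-free smoothing signal (repeated normalized-adjacency propagation), a raw-feature signal, and a class-geometry signal (nearest training-class centroid), each of which can be inspected or ablated independently.

```python
import numpy as np

def normalized_adjacency(A):
    """Symmetric normalization D^{-1/2} (A + I) D^{-1/2} with self-loops."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def interpretable_node_classifier(A, X, y_train, train_mask, k=2, alpha=0.5):
    """Hypothetical white-box node classifier (illustrative, not WG-SRC itself).

    Mixes a k-step smoothed copy of the features with the raw features,
    then assigns every node to the nearest training-class centroid.
    `alpha` trades off the smoothing signal against the raw-feature signal.
    """
    S = normalized_adjacency(A)
    X_smooth = X.copy()
    for _ in range(k):                       # parameter-free propagation: the smoothing signal
        X_smooth = S @ X_smooth
    Z = alpha * X_smooth + (1 - alpha) * X   # combine smoothing and raw-feature signals
    classes = np.unique(y_train)
    centroids = np.stack(
        [Z[train_mask][y_train == c].mean(axis=0) for c in classes]
    )                                        # class-geometry signal: per-class centroids
    dists = np.linalg.norm(Z[:, None, :] - centroids[None, :, :], axis=2)
    return classes[dists.argmin(axis=1)]     # nearest-centroid prediction
```

Because every component is fixed rather than learned, varying `alpha` or `k` directly shows whether a dataset's classes are separable from raw features alone, from neighborhood smoothing, or only from their combination — the kind of diagnosis the white-box framing enables.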