Embedding dimension significantly affects the stability of node embeddings, but the relationship is method-specific, and greater stability does not always translate into better downstream task performance, so careful hyperparameter tuning is required.
This paper investigates how embedding dimension affects the stability of node embeddings across five popular graph embedding methods. The researchers find that stability varies with dimension in method-specific ways (some methods become more stable at higher dimensions while others do not), and that the most stable embeddings do not always perform best on downstream tasks.
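To make the notion of embedding stability concrete, here is a minimal sketch of one common way to quantify it: comparing the k-nearest-neighbor sets of each node across two embeddings trained on the same graph with different random seeds. The placeholder arrays, the function name `knn_overlap`, and the choice of k-NN Jaccard overlap as the metric are illustrative assumptions, not necessarily the paper's exact procedure.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_overlap(emb_a, emb_b, k=20):
    """Mean Jaccard overlap of each node's k nearest neighbors across two
    embeddings of the same graph (a simple geometric stability measure)."""
    nbrs_a = NearestNeighbors(n_neighbors=k + 1).fit(emb_a)
    nbrs_b = NearestNeighbors(n_neighbors=k + 1).fit(emb_b)
    # Drop the first neighbor of each query point (the node itself).
    idx_a = nbrs_a.kneighbors(emb_a, return_distance=False)[:, 1:]
    idx_b = nbrs_b.kneighbors(emb_b, return_distance=False)[:, 1:]
    overlaps = [
        len(set(a) & set(b)) / len(set(a) | set(b))
        for a, b in zip(idx_a, idx_b)
    ]
    return float(np.mean(overlaps))

# Hypothetical usage: emb_run1 / emb_run2 would come from two training runs
# of the same embedding method (same graph, same dimension, different seeds).
rng = np.random.default_rng(0)
emb_run1 = rng.normal(size=(1000, 128))   # placeholder for run-1 embeddings
emb_run2 = rng.normal(size=(1000, 128))   # placeholder for run-2 embeddings
print(f"mean 20-NN Jaccard overlap: {knn_overlap(emb_run1, emb_run2):.3f}")
```

In practice, the two placeholder matrices would be replaced by embeddings from repeated runs of a single method at a fixed dimension; sweeping the dimension and re-running this comparison gives a stability-versus-dimension curve of the kind the paper studies.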