Instead of clustering users during training (where the clustering itself is vulnerable to noisy labels), group them upfront using feature covariance structure, then fix label errors by checking whether each example aligns with its class's learned feature subspace.
FB-NLL tackles noisy labels in federated learning by clustering users based on feature geometry rather than training dynamics, then correcting mislabeled data using feature alignment. This one-shot approach avoids the communication overhead of iterative methods while handling low-quality data that typically corrupts personalized federated learning.
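A minimal sketch of this two-step pipeline on synthetic data, using only numpy. The data generation, helper names (`user_data`, `correct_labels`), the two-cluster setup, and the 20% label-flip rate are all illustrative assumptions, not the paper's actual benchmark, API, or hyperparameters; grouping here is plain 2-means on flattened per-user covariance matrices, and correction relabels each example to the class whose top singular subspace explains the most of its energy.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 10  # feature dimension (illustrative)

# Two user groups; each group's features live in a different subspace
# spanned by the first 4 columns of a random orthonormal basis.
def random_basis():
    q, _ = np.linalg.qr(rng.standard_normal((d, d)))
    return q

bases = [random_basis(), random_basis()]

def user_data(g, n_per_class=100, noise=0.05):
    """Features for one user in group g: two classes, each concentrated
    in a disjoint 2-D slice of the group's basis."""
    xs, ys = [], []
    for c in (0, 1):
        cols = bases[g][:, 2 * c : 2 * c + 2]
        z = rng.standard_normal((n_per_class, 2))
        xs.append(z @ cols.T + noise * rng.standard_normal((n_per_class, d)))
        ys.append(np.full(n_per_class, c))
    return np.vstack(xs), np.concatenate(ys)

users = [user_data(u % 2) for u in range(8)]  # users alternate between groups

# Step 1: one-shot grouping by feature covariance structure.
# Cluster flattened per-user covariances (2-means, farthest-pair init).
covs = np.stack([np.cov(x, rowvar=False).ravel() for x, _ in users])
pair = ((covs[:, None] - covs[None]) ** 2).sum(-1)
i, j = np.unravel_index(np.argmax(pair), pair.shape)
cent = covs[[i, j]]
for _ in range(10):
    assign = np.argmin(((covs[:, None] - cent[None]) ** 2).sum(-1), axis=1)
    cent = np.stack([covs[assign == k].mean(0) for k in (0, 1)])

# Step 2: label correction via feature-subspace alignment.
# Within each cluster, learn a per-class subspace from the (noisy) labels
# and relabel each example to the best-aligned class.
def correct_labels(X, y_noisy, r=2):
    subs = {}
    for c in np.unique(y_noisy):
        _, _, vt = np.linalg.svd(X[y_noisy == c], full_matrices=False)
        subs[c] = vt[:r]  # orthonormal rows spanning the class subspace

    def align(x, V):
        p = V @ x
        return (p @ p) / (x @ x)  # fraction of x's energy in the subspace

    return np.array([max(subs, key=lambda c: align(x, subs[c])) for x in X])

acc_noisy, acc_corr = [], []
for k in (0, 1):
    X = np.vstack([users[u][0] for u in range(8) if assign[u] == k])
    y = np.concatenate([users[u][1] for u in range(8) if assign[u] == k])
    flip = rng.random(len(y)) < 0.2          # simulate 20% label noise
    y_noisy = np.where(flip, 1 - y, y)
    y_fixed = correct_labels(X, y_noisy)
    acc_noisy.append((y_noisy == y).mean())
    acc_corr.append((y_fixed == y).mean())
```

Because the class subspaces dominate the singular spectrum even with 20% flipped labels, the SVD recovers them from noisy data, which is why the one-shot correction works without iterating with training.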