This paper introduces HRGrad, a method for training neural networks on physics problems that span multiple scales, from microscopic to macroscopic behavior. The key challenge is that gradients from the different scales pull the network in conflicting directions, which can cause training to fail; HRGrad addresses this by explicitly managing gradient directions so that all objectives stay aligned during optimization.
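HRGrad's exact update rule is not given here. As a minimal illustration of what "explicitly managing gradient directions" can mean, the sketch below uses PCGrad-style projection (a related but distinct technique, Yu et al. 2020): whenever two per-objective gradients conflict (negative dot product), the conflicting component of one is projected away before the update. The function name and the two-scale toy gradients are hypothetical, not from the paper.

```python
import numpy as np

def project_conflicts(grads):
    """PCGrad-style conflict resolution: an illustrative stand-in for an
    explicit gradient-alignment step, NOT HRGrad's actual algorithm.
    For each gradient, remove its component along any other raw gradient
    it conflicts with (i.e., has a negative dot product with)."""
    adjusted = [g.copy() for g in grads]
    for i, g_i in enumerate(adjusted):
        for j, g_j in enumerate(grads):
            if i == j:
                continue
            dot = float(g_i @ g_j)
            if dot < 0.0:  # conflict: project g_i onto the normal plane of g_j
                g_i -= dot / float(g_j @ g_j) * g_j
    return adjusted

# Two toy per-scale gradients that conflict (hypothetical values)
g_micro = np.array([1.0, 0.0])
g_macro = np.array([-1.0, 1.0])
adj = project_conflicts([g_micro, g_macro])
```

After projection, each adjusted gradient no longer opposes the other scale's raw gradient, so a combined update cannot increase either objective to first order. This is the sense in which conflicting scales can be kept "aligned" during optimization.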