Energy-conserving optimization can escape local minima exponentially faster than gradient descent, and a quantum variant provides further speedups. This is relevant for training models on non-convex loss landscapes.
This paper analyzes Energy Conserving Descent (ECD), an optimization algorithm that escapes local minima by conserving energy during its descent dynamics: because total energy is preserved, the trajectory cannot come to rest in any basin whose surrounding barrier lies below its energy level, so it keeps moving until it finds a sufficiently deep minimum. The authors prove that both a stochastic version and a quantum version achieve exponential speedups over gradient descent on certain non-convex problems, with quantum ECD fastest on objectives with high barriers.
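To make the mechanism concrete, below is a minimal sketch in Python contrasting plain gradient descent with frictionless Hamiltonian dynamics on a hypothetical double-well objective. The objective `f`, its gradient, the step sizes, and the leapfrog integrator are all illustrative assumptions, not the paper's actual ECD algorithm (whose precise dynamics, stochastic perturbations, and quantum variant are defined in the paper); the sketch only demonstrates why conserving energy prevents a trajectory from getting stuck.

```python
def f(x):
    """Toy double-well objective (illustrative, not from the paper):
    local minimum near x = -0.95, global minimum near x = +1.05,
    separated by a barrier of height ~0.005 near x = -0.1."""
    return 0.25 * x**4 - 0.5 * x**2 - 0.1 * x

def grad_f(x):
    return x**3 - x - 0.1

def gradient_descent(x0, lr=0.01, steps=5000):
    """Plain gradient descent: dissipative, so it halts in whichever
    basin it starts in."""
    x = x0
    for _ in range(steps):
        x -= lr * grad_f(x)
    return x

def energy_conserving_sketch(x0, dt=0.01, steps=5000):
    """Frictionless Hamiltonian dynamics for H(x, p) = p**2 / 2 + f(x),
    integrated with a leapfrog scheme so total energy is (approximately)
    conserved. A conserved energy E lets the trajectory traverse any
    barrier lower than E instead of stopping at the first local minimum.
    Returns the lowest-loss point visited."""
    x, p = x0, 0.0  # starting at rest, so E = f(x0)
    best_x, best_f = x, f(x)
    for _ in range(steps):
        p -= 0.5 * dt * grad_f(x)  # half momentum kick
        x += dt * p                # position drift
        p -= 0.5 * dt * grad_f(x)  # half momentum kick
        if f(x) < best_f:
            best_x, best_f = x, f(x)
    return best_x

# Start high on the wall of the shallow (left) basin, so the conserved
# energy f(-1.5) ~ 0.29 exceeds the ~0.005 barrier between the wells.
x0 = -1.5
print(gradient_descent(x0))          # stuck near the local minimum, x ~ -0.95
print(energy_conserving_sketch(x0))  # reaches the global basin, x ~ +1.05
```

In the sketch, gradient descent dissipates its energy and halts at the nearest stationary point, while the energy-conserving trajectory retains enough energy to cross the barrier and oscillate through the deeper basin; returning the best point visited stands in for ECD's more principled stopping behavior.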