Authors: Jeremy M. Cohen, Alex Damian, Ameet Talwalkar, Zico Kolter, Jason D. Lee
Abstract: Optimization in deep learning remains poorly understood, even in the simple
setting of deterministic (i.e. full-batch) training. A key difficulty is that
much of an optimizer’s behavior is implicitly determined by complex oscillatory
dynamics, referred to as the “edge of stability.” The main contribution of this
paper is to show that an optimizer’s implicit behavior can be explicitly
captured by a “central flow”: a differential equation that models the
time-averaged optimization trajectory. We show that these flows can empirically
predict long-term optimization trajectories of generic neural networks with a
high degree of numerical accuracy. By interpreting these flows, we reveal for
the first time 1) the precise sense in which RMSProp adapts to the local loss
landscape, and 2) an “acceleration via regularization” mechanism, wherein
adaptive optimizers implicitly navigate towards low-curvature regions in which
they can take larger steps. This mechanism is key to the efficacy of these
adaptive optimizers. Overall, we believe that central flows constitute a
promising tool for reasoning about optimization in deep learning.
Source: http://arxiv.org/abs/2410.24206v1
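
The central-flow construction itself is developed in the body of the paper; as a purely illustrative stand-in, the following NumPy toy (hypothetical, not taken from the paper) shows the basic phenomenon the abstract refers to: on a quadratic, gradient descent oscillates along the sharp eigendirection while its motion along flat directions, and the time average of the oscillating coordinate, are well described by a smooth flow. The step size, eigenvalues, and the use of plain gradient flow (rather than the paper's corrected central flow) are all assumptions made for this sketch.

import numpy as np

# Toy quadratic loss L(w) = 0.5 * w^T H w with one sharp and one flat direction.
# All constants below are illustrative choices, not values from the paper.
eta = 0.9                          # gradient-descent step size (assumed)
H = np.diag([2.2, 0.05])           # eta * 2.2 = 1.98, just inside the stability limit
grad = lambda w: H @ w

# Discrete gradient descent: the sharp coordinate flips sign every step
# (edge-of-stability-style oscillation); the flat coordinate moves smoothly.
steps = 200
w = np.array([1.0, 1.0])
gd = [w.copy()]
for _ in range(steps):
    w = w - eta * grad(w)
    gd.append(w.copy())
gd = np.array(gd)

# A smooth flow dw/dt = -grad L(w), Euler-integrated with many substeps so that
# one unit of flow time corresponds to one GD step of size eta.  (The paper's
# central flows add curvature-dependent corrections on top of such a flow.)
sub = 100
v = np.array([1.0, 1.0])
flow = [v.copy()]
for _ in range(steps):
    for _ in range(sub):
        v = v - (eta / sub) * grad(v)
    flow.append(v.copy())
flow = np.array(flow)

# The flat coordinate of GD is tracked closely by the flow ...
print("max |GD - flow| along the flat direction:",
      np.abs(gd[:, 1] - flow[:, 1]).max())

# ... while the sharp coordinate of GD keeps oscillating, and only its two-step
# time average is small and smooth.
print("last few sharp-coordinate GD iterates:", np.round(gd[-4:, 0], 4))
print("their pairwise time averages:", np.round((gd[-4:-1, 0] + gd[-3:, 0]) / 2, 4))

In this toy, the plain flow suffices because the loss is exactly quadratic; the point of the paper's central flows is that for real neural-network losses the oscillations feed back into the time-averaged dynamics, which is what the additional terms in those flows capture.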