Competition-Winning Learning Rates: It is well known that the learning rate is the most important hyper-parameter to tune when training deep neural networks. Surprisingly, training with dynamic learning rates can lead to an order-of-magnitude speedup in training time. This talk will discuss my path from static learning rates to dynamic cyclical learning rates, and finally to fast training with very large learning rates (I named this technique "super-convergence"). In particular, I will show that very large learning rates are a preferred method for regularizing training because they provide the twin benefits of training speed and good generalization. The super-convergence method was integrated into the fastai library, and the fast.ai team used it to win the DAWNBench and Kaggle's iMaterialist challenges.
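The cyclical schedule mentioned above can be sketched as a simple triangular wave that ramps the learning rate up from a base value to a peak and back down each cycle. This is a minimal illustration of the triangular policy, not the fast.ai implementation; the function and parameter names here are my own:

```python
def triangular_lr(step, step_size, base_lr, max_lr):
    """Triangular cyclical learning rate.

    step      -- current training iteration
    step_size -- iterations in half a cycle (base -> peak)
    base_lr   -- learning rate at the bottom of the cycle
    max_lr    -- learning rate at the peak of the cycle
    """
    # Position within the current cycle: 0 at the base learning
    # rate, 1 at the peak, and back to 0 at the end of the cycle.
    cycle_pos = step % (2 * step_size)
    x = 1.0 - abs(cycle_pos / step_size - 1.0)
    return base_lr + (max_lr - base_lr) * x

# The schedule starts at base_lr, peaks at max_lr mid-cycle,
# and returns to base_lr as each cycle completes.
print(triangular_lr(0, 100, 0.001, 0.1))    # start of cycle: base_lr
print(triangular_lr(100, 100, 0.001, 0.1))  # mid-cycle: max_lr
print(triangular_lr(200, 100, 0.001, 0.1))  # end of cycle: base_lr
```

In the large-learning-rate ("super-convergence") regime, the same shape is used with a much higher peak over a single long cycle, which is what delivers the speed and regularization benefits described in the talk.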