Beating Perils of Non-convexity: Guaranteed Training of Neural Networks using Tensor Methods

Neural networks have revolutionized performance across multiple domains such as computer vision and speech recognition. However, training a neural network is a highly non-convex problem, and conventional stochastic gradient descent can get stuck in spurious local optima. We propose a computationally efficient method for training neural networks that also comes with guaranteed risk bounds. It is based on tensor decomposition, which is guaranteed to converge to the globally optimal solution under mild conditions. We explain how this framework can be leveraged to train feedforward and recurrent neural networks.
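The guarantee in such tensor-based approaches rests on decomposing a symmetric moment tensor into rank-1 components, typically via tensor power iteration with deflation. Below is a minimal, illustrative sketch of that core subroutine only; the function name and the toy tensor are assumptions for demonstration, and the full pipeline from the talk (estimating cross-moment tensors from data, whitening, and mapping recovered components back to network weights) is not shown.

```python
import numpy as np

def tensor_power_iteration(T, n_components, n_iters=100, seed=0):
    """Recover rank-1 components of a symmetric 3rd-order tensor T
    (shape d x d x d) by power iteration with deflation.
    Illustrative sketch, not the talk's full training procedure."""
    rng = np.random.default_rng(seed)
    d = T.shape[0]
    weights, vectors = [], []
    for _ in range(n_components):
        v = rng.standard_normal(d)
        v /= np.linalg.norm(v)
        for _ in range(n_iters):
            # Power update: contract the tensor along two modes, T(I, v, v).
            v = np.einsum('ijk,j,k->i', T, v, v)
            v /= np.linalg.norm(v)
        # Component weight: full contraction T(v, v, v).
        lam = np.einsum('ijk,i,j,k->', T, v, v, v)
        weights.append(lam)
        vectors.append(v)
        # Deflate: subtract the recovered rank-1 component before continuing.
        T = T - lam * np.einsum('i,j,k->ijk', v, v, v)
    return np.array(weights), np.array(vectors)

# Toy usage: build a symmetric rank-2 tensor with orthonormal components
# and recover them. (Hypothetical data, for illustration only.)
d = 5
comps = np.eye(d)[:2]
T = sum(w * np.einsum('i,j,k->ijk', u, u, u)
        for w, u in zip([3.0, 1.5], comps))
w_hat, v_hat = tensor_power_iteration(T, n_components=2)
print(w_hat)  # approximately {3.0, 1.5}, order may vary with the seed
```

For orthogonally decomposable tensors, each power iteration converges to one component and deflation exposes the next; this is the sense in which the decomposition reaches the globally optimal solution under mild conditions.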
Session Summary
Beating Perils of Non-convexity
MLconf 2017 Seattle
Hanie Sedghi
Google Brain
Senior Research Scientist