Representation learning in neural nets continues to play a fundamental role in advancing our understanding of deep learning algorithms and our ability to extend successful applications. In this session we will explore how the information bottleneck analysis of deep learning algorithms sheds light on how these algorithms learn and on the patterns that emerge across layers of learned representations. We conclude with a discussion of how this analysis casts a more practical light on theoretical concepts in deep learning research such as nuisance insensitivity and disentanglement.
Session Summary
Understanding Information Flow in Deep Learning Representation
MLconf Online 2021
Mike Tamir, Chief ML Scientist
Susquehanna International Group / UC Berkeley