Published: April 2, 2021
Andrey Zhmoginov, Research Software Engineer, Google AI

Image understanding and image-to-image translation through the lens of information loss

The computation performed by a deep neural network is typically composed of multiple processing stages, during which the information contained in the model input gradually “dissipates” as different regions of the input space end up being mapped to the same output values. This seemingly simple observation provides a useful perspective for designing and understanding the computation performed by a variety of deep learning models, from convolutional networks used in image classification and segmentation to recurrent neural networks and generative models. In this talk, we review three such examples. First, we discuss the design of the MobileNetV2 model and the properties of the expansion layer that plays a central role in this architecture. Next, we look at the CycleGAN model and discuss the unexpected properties that emerge as a result of training it with a “cycle consistency loss”. Finally, we discuss the information bottleneck approach and show how this formalism can be used to identify salient regions in images.
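For the third topic, the standard information bottleneck objective (as introduced by Tishby et al.; stated here for context, not as the talk's exact formulation) seeks a representation T of the input X that is maximally compressed while staying predictive of the target Y:

```latex
\min_{p(t \mid x)} \; I(X; T) \;-\; \beta \, I(T; Y)
```

Here I(·;·) denotes mutual information and β trades off compression against prediction; applied to images, regions whose information survives the bottleneck can be read as salient.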
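To make the first topic concrete: the expansion layer mentioned above appears in MobileNetV2's inverted residual block, which expands a narrow tensor to many channels, applies a depthwise spatial convolution, and then linearly projects back to the narrow width. The sketch below (a minimal NumPy stand-in, not the production implementation; stride 1, "same" padding, and all weights are random placeholders) shows that structure:

```python
import numpy as np

def conv1x1(x, w):
    """Pointwise (1x1) convolution: a per-pixel linear map across channels.
    x: (H, W, C_in), w: (C_in, C_out) -> (H, W, C_out)."""
    return x @ w

def relu6(x):
    """The clipped activation used throughout MobileNetV2."""
    return np.clip(x, 0.0, 6.0)

def depthwise3x3(x, w):
    """Depthwise 3x3 conv with zero padding (each channel filtered independently).
    x: (H, W, C), w: (3, 3, C) -> (H, W, C)."""
    H, W, C = x.shape
    xp = np.pad(x, ((1, 1), (1, 1), (0, 0)))
    out = np.zeros_like(x)
    for i in range(H):
        for j in range(W):
            out[i, j] = np.einsum('klc,klc->c', xp[i:i+3, j:j+3], w)
    return out

def inverted_residual(x, w_expand, w_dw, w_project):
    """MobileNetV2-style inverted residual block (stride-1 case):
    expand -> depthwise 3x3 -> linear projection, with a skip connection
    joining the two *narrow* tensors."""
    h = relu6(conv1x1(x, w_expand))   # expand: C -> t*C channels
    h = relu6(depthwise3x3(h, w_dw))  # spatial mixing, per channel
    h = conv1x1(h, w_project)         # linear bottleneck: t*C -> C (no nonlinearity)
    return x + h

rng = np.random.default_rng(0)
C, t = 8, 6                                    # 8 input channels, expansion factor 6
x = rng.standard_normal((16, 16, C))
y = inverted_residual(
    x,
    rng.standard_normal((C, t * C)) * 0.1,     # expansion weights (placeholder)
    rng.standard_normal((3, 3, t * C)) * 0.1,  # depthwise weights (placeholder)
    rng.standard_normal((t * C, C)) * 0.1,     # projection weights (placeholder)
)
print(y.shape)  # (16, 16, 8)
```

The information-loss perspective is visible in the final projection: the wide `t*C`-channel representation is compressed back to `C` channels, and keeping that projection linear avoids destroying additional information with a nonlinearity.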
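For the second topic, the cycle consistency loss penalizes a pair of translators G: X → Y and F: Y → X whenever a round trip fails to reproduce the input. A minimal NumPy sketch (the affine maps below are hypothetical stand-ins for the CycleGAN generator networks):

```python
import numpy as np

# Hypothetical stand-ins for the two generator networks; chosen to be
# exact inverses so the cycle loss comes out to zero.
def G(x):                      # "forward" translator X -> Y
    return 2.0 * x + 1.0

def F(y):                      # "backward" translator Y -> X
    return (y - 1.0) / 2.0

def cycle_consistency_loss(x, y):
    """L1 cycle loss: ||F(G(x)) - x||_1 + ||G(F(y)) - y||_1, batch-averaged."""
    return np.abs(F(G(x)) - x).mean() + np.abs(G(F(y)) - y).mean()

x = np.linspace(-1.0, 1.0, 5)   # a tiny batch from domain X
y = np.linspace(0.0, 3.0, 5)    # a tiny batch from domain Y
print(cycle_consistency_loss(x, y))  # 0.0 for perfectly inverse translators
```

Because the loss demands that G(x) retain everything needed to reconstruct x, it pressures the generators to preserve information, which is the source of the unexpected behavior discussed in the talk.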


BIO: PhD in Astrophysics from Princeton University (2012). Postdoctoral researcher in the Physics department at UC Berkeley from 2012 to 2015. At Google AI since 2015.