Beyond sparsity: compressed sensing with deep generative priors
Over the last several decades, sparsity and compressed sensing have been dominant themes in signal recovery, leading to significant improvements in MRI and to new algorithms across a wide range of fields. We will begin by looking at some exciting recent algorithms for computer vision inspired by the ideas of compressed sensing. In very recent developments, deep neural network-based generative image priors have been empirically shown to require 10X fewer measurements than traditional compressed sensing in certain scenarios. As deep generative priors improve, analogous improvements in the performance of compressed sensing and other inverse problems may be realized across the imaging sciences. We will discuss a theoretical framework for studying inverse problems subject to deep generative priors. In particular, we prove that, with high probability, the nonconvex empirical risk objective for enforcing a random deep generative prior subject to compressive random linear observations of the last layer of the generator has no spurious local minima, and that for a fixed network depth these guarantees hold at order-optimal sample complexity. This result provides a theoretical explanation of the 10X improvement observed empirically.
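The setting in the abstract can be illustrated with a minimal numerical sketch: a random (untrained) two-layer ReLU generator G(z), compressive random linear observations y = A G(z*), and gradient descent on the empirical risk f(z) = ½‖A G(z) − y‖². All dimensions, weight scalings, the step size, and the two-layer architecture below are illustrative assumptions for the sketch, not the construction analyzed in the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: latent k, hidden h, signal n, measurements m << n.
k, h, n, m = 5, 50, 200, 40

# Random two-layer ReLU generator G(z) = relu(W2 @ relu(W1 @ z)),
# with weights scaled so each layer is roughly norm-preserving (an assumption).
W1 = rng.normal(size=(h, k)) / np.sqrt(h)
W2 = rng.normal(size=(n, h)) / np.sqrt(n)
A = rng.normal(size=(m, n)) / np.sqrt(m)  # compressive measurement matrix

def G(z):
    return np.maximum(W2 @ np.maximum(W1 @ z, 0.0), 0.0)

def loss(z):
    r = A @ G(z) - y
    return 0.5 * float(r @ r)

# Ground-truth latent code and compressive observations of the generator output.
z_star = rng.normal(size=k)
y = A @ G(z_star)

# Gradient descent on the nonconvex empirical risk, from a random start.
z = rng.normal(size=k)
f0 = loss(z)
lr = 0.01
for _ in range(3000):
    h1 = W1 @ z
    a1 = np.maximum(h1, 0.0)
    h2 = W2 @ a1
    x = np.maximum(h2, 0.0)
    r = A @ x - y
    # Backpropagate the residual through both ReLU layers (subgradient).
    g = W1.T @ ((W2.T @ ((A.T @ r) * (h2 > 0))) * (h1 > 0))
    z -= lr * g

f_final = loss(z)
```

The point of the sketch is only the shape of the problem: the objective is nonconvex in z, yet plain gradient descent typically drives the risk down from a random start, consistent with the no-spurious-local-minima landscape result described above.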
Thursday, December 7, 2017 at 1:30pm to 2:30pm
Engineering Center, ECCR 257
1111 Engineering Drive, Boulder, CO 80309