Published: Oct. 19, 2021

Gongguo Tang; Department of Electrical, Computer, and Energy Engineering; University of Colorado Boulder

Geometry and algorithm for some nonconvex optimizations

Great progress has been made in the past few years in our understanding of nonconvex optimization. In this talk, I will share with you three of our works in this direction. In the first, we study low-rank matrix optimization with a general objective function, solved by factoring the matrix variable into the product of two smaller matrices. We characterize the global optimization geometry of the nonconvex factored problem and show that its objective function satisfies the robust strict saddle property as long as the original objective function satisfies restricted strong convexity and smoothness properties. In the second, we recognize that many machine learning problems involve minimizing empirical risk functions whose population risks are well behaved. Instead of analyzing the nonconvex empirical risk directly, we first study the landscape of the corresponding population risk, which is usually easier to characterize, and then build a connection between the landscape of the empirical risk and that of its population risk. In the third, we study the convergence behavior of alternating minimization, a class of algorithms widely used to solve the aforementioned nonconvex problems. We show that, under mild assumptions on the (nonconvex) objective functions, these algorithms avoid strict saddles and converge to second-order optimal solutions almost surely from random initialization.
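For concreteness, the factorization step in the first work trades a rank constraint for a product of two thin factors; the notation below (f, g, X, U, V, n, m, r) is a generic placeholder, not the talk's own:

$$\min_{X \in \mathbb{R}^{n \times m},\; \operatorname{rank}(X) \le r} f(X) \quad\longrightarrow\quad \min_{U \in \mathbb{R}^{n \times r},\; V \in \mathbb{R}^{m \times r}} g(U, V) = f(UV^\top).$$

Roughly, the robust strict saddle property says that every point of g falls into one of three regimes: it is close to a global minimizer, its gradient has non-negligible norm, or its Hessian has a direction of strictly negative curvature; this structure is what lets saddle-escaping algorithms reach global minima despite nonconvexity.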
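The second work contrasts the empirical risk with its population counterpart; again the symbols (ℓ, θ, x_i, n, D) are generic:

$$\widehat{R}_n(\theta) = \frac{1}{n} \sum_{i=1}^{n} \ell(\theta; x_i), \qquad R(\theta) = \mathbb{E}_{x \sim \mathcal{D}}\!\left[\ell(\theta; x)\right].$$

The guiding idea, stated loosely, is that once the gradients and Hessians of the empirical risk concentrate uniformly around those of the population risk, landscape conclusions established for the population risk carry over to the empirical risk with high probability.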
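To make the alternating-minimization pattern from the third work concrete, here is a minimal sketch in Python for the illustrative factored least-squares objective min_{U,V} ||UV^T - M||_F^2; the choice of objective, the function names, and the parameters are assumptions for illustration, not taken from the talk. With one factor held fixed, the subproblem is an exactly solvable least-squares problem, and the iteration starts from the kind of random initialization under which the almost-sure convergence result is stated.

```python
import numpy as np

def alternating_minimization(M, r, iters=100, seed=0):
    """Alternating minimization for min_{U, V} ||U V^T - M||_F^2.

    With one factor fixed, the objective is an ordinary least-squares
    problem in the other factor, so each half-step is solved exactly.
    """
    rng = np.random.default_rng(seed)
    n, m = M.shape
    # Random initialization, as in the almost-sure convergence setting.
    U = rng.standard_normal((n, r))
    V = rng.standard_normal((m, r))
    for _ in range(iters):
        # Fix V: min_U ||V U^T - M^T||_F^2 is least squares in U^T.
        U = np.linalg.lstsq(V, M.T, rcond=None)[0].T
        # Fix U: min_V ||U V^T - M||_F^2 is least squares in V^T.
        V = np.linalg.lstsq(U, M, rcond=None)[0].T
    return U, V

# Usage: recover a random rank-3 matrix from full observations.
rng = np.random.default_rng(1)
M = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 40))
U, V = alternating_minimization(M, r=3)
print(np.linalg.norm(U @ V.T - M) / np.linalg.norm(M))  # near zero
```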