Published: Sept. 25, 2018

Online Optimization with Feedback

The talk focuses on the design and analysis of running (i.e., online) algorithmic solutions for control systems and networked systems whose performance objectives and engineering constraints may evolve over time. The time-varying convex optimization formalism is leveraged to model optimal operational trajectories of the systems, as well as explicit local and network-level operational constraints. Departing from existing batch and feed-forward optimization approaches, the design of the algorithms capitalizes on a running implementation of primal-dual projected-gradient methods; the gradient steps are, however, suitably modified to accommodate actionable feedback from the system, hence the term “online optimization with feedback.” By virtue of this approach, the resultant running algorithms can cope with model mismatches in the algebraic representation of the system states and outputs, avoid pervasive measurements of exogenous inputs, and naturally lend themselves to a distributed implementation. Under suitable assumptions, analytical convergence claims are presented in terms of dynamic regret. Furthermore, when the synthesis of the feedback-based online algorithms is based on a regularized Lagrangian function, Q-linear convergence to solutions of the time-varying optimization problem is shown.
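To make the idea concrete, the following is a minimal sketch, in Python with NumPy, of what a feedback-based primal-dual projected-gradient loop on a regularized Lagrangian can look like. It is an illustration under assumed data, not the talk's exact formulation: the output map y = A x + w_t, the box constraint set, the tracking cost, and the parameters alpha (step size) and eps (dual regularization) are all placeholders, and the sensor measurement is simulated in code.

import numpy as np

# Illustrative sketch: time-varying problem
#   min_{x in X} f_t(x)   s.t.   g_t(y) <= 0,   with output y = A x + w_t.
# The gradient steps use a *measurement* y_hat of the output in place of the
# model A x + w_t, and the Lagrangian is regularized in the dual variable.

rng = np.random.default_rng(0)
n, m, T = 4, 2, 200          # primal dimension, number of constraints, horizon
A = rng.standard_normal((m, n))
alpha, eps = 0.05, 0.1       # step size and dual regularization weight (assumed values)
b = 0.5 * np.ones(m)         # constraint bound: y <= b, i.e., g_t(y) = y - b

def grad_f(x, t):
    # time-varying quadratic cost: track a slowly moving reference r_t
    r_t = np.sin(0.05 * t) * np.ones(n)
    return x - r_t

def proj_box(x, lo=-1.0, hi=1.0):
    # projection onto the (assumed) box constraint set X
    return np.clip(x, lo, hi)

x = np.zeros(n)
lam = np.zeros(m)

for t in range(T):
    # Feedback step: use a measured output instead of evaluating the model,
    # so the unmeasured exogenous input w_t never has to be known explicitly.
    w_t = 0.01 * rng.standard_normal(m)     # exogenous input (unknown to the algorithm)
    y_hat = A @ x + w_t                     # stands in for a sensor reading
    g_val = y_hat - b                       # measured constraint violation g_t(y_hat)

    # Primal step: projected gradient descent on the Lagrangian in x.
    x = proj_box(x - alpha * (grad_f(x, t) + A.T @ lam))
    # Dual step: projected ascent; the -eps*lam term comes from regularizing
    # the Lagrangian in the dual variable, which keeps the multipliers bounded.
    lam = np.maximum(0.0, lam + alpha * (g_val - eps * lam))

print("final x:", x, "final lambda:", lam)

In this sketch the measurement y_hat replaces the algebraic model of the output inside the gradient steps, which is the sense in which the running algorithm uses feedback; the dual regularization is what the abstract refers to when it connects the regularized Lagrangian to Q-linear convergence guarantees.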