My general research interests are centered on optimization, control, and learning in cyber-physical and network systems. My research combines tools from optimization, control theory, dynamical systems, statistical learning, and network science to develop foundational theories and algorithms that enable the deployment of efficient, safe, and autonomous decision-making systems. My current theoretical and algorithmic endeavors focus on:
 Optimization algorithms as feedback controllers
 Data-driven optimization and control of physical and dynamical systems
 Online optimization and learning.
A key feature of my research portfolio is its interdisciplinary character and its strong ties between foundational theory, computational aspects, and real-world applications. Theory and algorithms are primarily motivated by applications in:
 power systems and the smart grid
 electrified transportation.
Additional applications are in:
 healthcare.
Optimization algorithms as feedback controllers
Optimization algorithms are traditionally seen as numerical methods for solving a mathematical optimization problem. My research departs from this classical view and explores how principled optimization algorithms can be converted into feedback controllers that dynamically regulate the inputs and outputs of a physical or dynamical system. The synthesis of these closed-loop systems constitutes a departure from classical control approaches: it opens opportunities for new classes of feedback controllers designed to steer the inputs and outputs of a system toward optimal solutions of a well-posed optimization problem. One key distinguishing feature of my research is that we pose the problem of tracking solution trajectories of time-varying optimization problems: that is, scenarios where cost and constraints may change over time to reflect dynamic performance and safety objectives, or to take into account time-varying unknown disturbances entering the system. This setting emerges in engineering applications such as power systems, transportation, and robotics; it also leads to new optimization and control approaches in RF power engineering, epidemics, and neuroscience.
The algorithmic synthesis leverages principled first-order optimization methods (e.g., the gradient method, projected gradient, accelerated methods, saddle-point methods, etc.), suitably modified and implemented in closed loop with the physical (dynamical) system. While the idea is intuitive and simple, key technical questions pertain to closed-loop stability, transient behavior, and tracking performance, as well as the enforcement of physical and operational constraints in closed-loop implementations.
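As a minimal sketch of this idea (with hypothetical plant parameters, and the plant reduced to its scalar steady-state map y = G·u + w), a gradient method can be run in closed loop by evaluating the gradient with the measured output y, so the unknown disturbance w never needs to be known explicitly:

```python
# Hypothetical scalar plant with steady-state map y = G*u + w,
# where w models an unknown disturbance entering the system.
G, w, rho, eta = 2.0, 0.5, 0.1, 0.05

def cost_gradient(u, y, r):
    # Gradient of f(u) = (y - r)^2 + rho*u^2, with the measured
    # output y standing in for the model prediction G*u + w.
    return 2.0 * G * (y - r) + 2.0 * rho * u

u, r = 0.0, 4.0  # initial input and output reference
for _ in range(200):
    y = G * u + w                     # measurement from the physical system
    u -= eta * cost_gradient(u, y, r)

u_star = G * (r - w) / (G**2 + rho)   # closed-form minimizer, ~1.707
assert abs(u - u_star) < 1e-3
```

Because the measurement y replaces the model prediction in the gradient, the controller drives the system to the optimizer without ever estimating w; this substitution is what turns the numerical method into a feedback law.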
Data-driven optimization and control of physical and dynamical systems
One of the goals of my research is to bring together optimization, control theory, and learning. Toward this end, I have been working on data-enabled optimization and control architectures in which closed-loop systems are augmented with supervised learning methods that supply sensing information or acquire information on the performance objectives and the plant. We consider architectures where: (i) the cost of the optimization problem associated with the system is unknown and must be estimated from data; (ii) the steady-state map of the plant is unknown; and (iii) the output of the system is not directly observable. The latter setup is aligned with the emerging concept of perception-based control; in fact, one of our contributions is perception-based optimization of dynamical systems, where perception maps are utilized to regulate the dynamical system to the solution of an optimization problem. This setting finds ample applications in autonomous driving and robotics; it also leads to new control paradigms in power systems and transportation networks, where the state of the system cannot be pervasively measured.
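As a hedged sketch of case (ii), the example below first fits an unknown scalar steady-state map from noisy input-output samples via least squares, and then uses the learned sensitivity inside a gradient feedback loop; all numbers are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Unknown steady-state map y = G_true*u + w; only noisy samples are seen.
G_true, w = 2.0, 0.5
U = rng.uniform(-1.0, 1.0, size=50)
Y = G_true * U + w + 0.01 * rng.standard_normal(50)

# Supervised learning step: least-squares fit of y ~ G_hat*u + b_hat.
A = np.column_stack([U, np.ones_like(U)])
G_hat, b_hat = np.linalg.lstsq(A, Y, rcond=None)[0]

# Use the learned sensitivity G_hat in a gradient feedback loop; the
# plant itself is still accessed only through measurements of y.
rho, eta, r, u = 0.1, 0.05, 4.0, 0.0
for _ in range(200):
    y = G_true * u + w                        # measurement from the plant
    u -= eta * (2.0 * G_hat * (y - r) + 2.0 * rho * u)
```

The learning error in G_hat perturbs the closed-loop equilibrium only mildly, which is one reason such learning-in-the-loop architectures can be analyzed with perturbation arguments.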
Online optimization and learning
Optimization algorithms have predominantly been implemented and studied in a “batch” setting, where the algorithmic steps are executed until a given convergence criterion is met. We have considered setups where the cost and constraints of a problem may evolve over time, and only one iteration (or a few iterations) of an algorithm can be performed before the cost and constraints change; this leads to an “online” setting. This setting is relevant across many engineering applications where batch optimization techniques cannot produce solutions at time scales that match the inter-arrival times of the data points, due to computational and/or communication bottlenecks. My research efforts have centered on the foundational theory, synthesis of algorithms, and computational aspects of online optimization and learning. We consider various algorithms, including online stochastic methods.
Application: Power systems
We have been working toward leveraging and expanding our theoretical developments to engineer solutions that can overcome current technological barriers associated with the large-scale integration of Distributed Energy Resources (DERs) in power transmission and distribution systems. In particular, we are interested in establishing system-theoretic foundations for real-time control and optimization that bring frequency control, voltage control, and economic optimization under a unified framework. We seek new methods for voltage regulation and economic optimization of distribution systems, including solutions that enable aggregations of DERs to emulate a virtual power plant, as well as state estimation, demand response, and building control. We are also interested in optimization and control approaches for electrified transportation systems and coupled power-transportation systems (where the coupling emerges from electric vehicle charging).
Application: Epidemic control
Motivated by the recent public health challenges related to the SARS-CoV-2 pandemic, my group started a collaboration with the University of Colorado Anschutz Medical Campus to apply our optimization and control methods to the problem of containing the spread of emerging pathogens. We seek to develop frameworks that identify system parameters and compute optimal levels of interventions across regions, in order to guarantee that hospitalizations will not exceed a given risk tolerance.
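As a toy sketch of the intervention problem (not the actual framework; the epidemic model, its rates, and the risk cap below are all hypothetical), one can search for the smallest intervention level keeping the peak infected fraction below a tolerance:

```python
import numpy as np

# Toy discrete-time SIR model: an intervention level a in [0, 1]
# scales the contact rate; all parameters here are hypothetical.
beta, gamma, cap = 0.3, 0.1, 0.05   # contact rate, recovery rate, tolerance

def peak_infected(a, days=500):
    s, i = 0.99, 0.01               # initial susceptible/infected fractions
    peak = i
    for _ in range(days):
        new_inf = (1.0 - a) * beta * s * i
        s, i = s - new_inf, i + new_inf - gamma * i
        peak = max(peak, i)
    return peak

# Grid search for the smallest intervention keeping the peak below cap,
# a stand-in for "hospitalizations below a given risk tolerance".
a_min = next(a for a in np.linspace(0.0, 1.0, 101) if peak_infected(a) <= cap)
```

In practice the model parameters would be identified from data and the guarantee established analytically, but the sketch conveys the structure of the design problem: a constraint on a trajectory functional, optimized over intervention levels.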
Application: Autonomous RF circuit design
In collaboration with Prof. Taylor W. Barton, we are exploring the development of optimization-based controllers for autonomous RF circuit design.
Current projects
Closed-loop Optimization and Control of Physical Networks Subject to Dynamic Costs, Constraints, and Disturbances
 Funded by the National Science Foundation, CMMI DCDS program.
 Principal Investigator. Co-PI: Jorge Cortes (University of California San Diego)
 Period of performance: January 2021 – December 2023.
Control-theoretic design of data-driven policies for containing transmission of infectious diseases
 Funded by the AB Nexus seed grant.
 Principal Investigator. Co-PIs: Andrea Buchwald (University of Colorado Anschutz), Jorge I. Poveda (University of Colorado Boulder)
 Period of performance: December 2020 – December 2021.
NSF CAREER: Synthesis of Feedback-based Online Algorithms for Power Grids
 Funded by the National Science Foundation, Energy, Power, Control, and Networks (EPCN) program.
 Principal Investigator
 Period of performance: February 2020 – January 2025.
NSF ERC: Advancing Sustainability through Powered Infrastructure for Roadway Electrification
 NSF Engineering Research Center. Lead: Utah State University; team members: University of Colorado Boulder, Purdue University, University of Texas at El Paso.
 Principal Investigator for University of Colorado Boulder: Qin Lv. Co-PIs: Dragan Maksimovic, Emiliano Dall'Anese, Bri-Mathias Hodge, Jana Milford, Jacquelyn Sullivan.
 Period of performance: February 2020 – January 2025.
NSF AMPS: Online and Model-free Optimization of Power and Energy Systems
 Funded by the National Science Foundation, Division of Mathematical Sciences (DMS), Algorithms for Modern Power Systems (AMPS) program.
 Principal Investigator: Stephen Becker (University of Colorado Boulder). Co-PI: Emiliano Dall'Anese (University of Colorado Boulder)
 Period of performance: August 2019 – July 2022.
Multi-objective Deep Reinforcement Learning for Grid-Interactive Energy-Efficient Buildings
 Funded by the U.S. Department of Energy (DOE), Buildings Technology Office
 Principal Investigator: Andrey Bernstein (NREL). Co-PIs: Emiliano Dall'Anese, Gregor Henze (University of Colorado Boulder)
 Period of performance: July 2019 – June 2022.
Past projects
Synthesis of Real-time Optimization Algorithms for Autonomous Urban Mobility
 Funded by the National Renewable Energy Laboratory
 Principal Investigator
 Period of performance: April 2020 – September 2020.
Design and Analysis of Online Algorithms for Next-generation Energy Systems
 Funded by the National Renewable Energy Laboratory
 Principal Investigator
 Period of performance: September 2018 – December 2019.
Research Support for Autonomous Energy Systems Program
 Funded by the National Renewable Energy Laboratory
 Principal Investigator
 Period of performance: September 2018 – August 2020.
Learning to Control Safety-Critical Systems: Providing Formal Correctness Guarantees for Learning-based Control of Safety-critical Systems
 Funded by the Research & Innovation Office of the University of Colorado Boulder.
 Principal Investigator: Ashutosh Trivedi (University of Colorado Boulder). Co-PIs: Emiliano Dall'Anese, Fabio Somenzi (University of Colorado Boulder)
 Period of performance: August 2019 – July 2020.
Real-time optimization and control of next-generation distribution infrastructure
 Funded by the U.S. Department of Energy (DOE), Advanced Research Projects Agency–Energy (ARPA-E), Network Optimized Distributed Energy Systems (NODES) program.
 Principal Investigator. Co-PIs: Steven Low (Caltech), Na Li (Harvard University), Sairaj Dhople (University of Minnesota), and Christopher Clarke (Southern California Edison).
 Period of performance: July 2016 – July 2019.