Research
The research in my group strives to develop theories that make machine learning applicable to real-world, large-scale engineering systems. Our research is interdisciplinary in nature: we develop new mathematical tools in machine/reinforcement learning, control theory, optimization, and network science, and apply these tools to cyber-physical systems, power systems, transportation systems, robotics, and beyond, with provable performance and resilience guarantees.
Some of our research projects are listed below.
Learn to Stabilize
Machine learning has been applied to control systems to learn to control an unknown system with provable performance guarantees (e.g., regret, competitive ratio). However, in addition to performance, an equally important property of control systems is stability, without which there is no performance to speak of. In this project, we investigate the "learn to stabilize" problem for an unknown system and study fundamental questions such as sample complexity; a toy sketch of the setting follows the references below.
- Yang Hu, Adam Wierman, Guannan Qu, On the Sample Complexity of Stabilizing LTI Systems on a Single Trajectory, NeurIPS 2022 (link)
- Songyuan Zhang, Yumeng Xiu, Guannan Qu, Chuchu Fan, Compositional Neural Certificates for Networked Dynamical Systems, 5th Learning for Dynamics and Control Conference, 2023 (oral presentation).
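To make the setting concrete, here is a minimal sketch (not the method of the papers above) of the classical certainty-equivalence route to stabilization: excite the unknown system along a single trajectory, fit (A, B) by least squares, and stabilize the estimated model with an LQR gain. The system matrices, noise level, and trajectory length are all illustrative.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

rng = np.random.default_rng(0)

# Unknown (to the learner) open-loop-unstable LTI system x_{t+1} = A x_t + B u_t + w_t.
A_true = np.array([[1.2, 0.5], [0.0, 1.1]])
B_true = np.array([[0.0], [1.0]])
n, m = B_true.shape

# Collect a single trajectory driven by exploratory random inputs.
T = 50
X = np.zeros((T + 1, n))
U = rng.normal(size=(T, m))
for t in range(T):
    X[t + 1] = A_true @ X[t] + B_true @ U[t] + 0.01 * rng.normal(size=n)

# Least-squares estimate of [A B] from that one trajectory.
Z = np.hstack([X[:-1], U])                        # regressors (x_t, u_t)
Theta, *_ = np.linalg.lstsq(Z, X[1:], rcond=None)
A_hat, B_hat = Theta.T[:, :n], Theta.T[:, n:]

# Certainty equivalence: stabilize the estimated model with an LQR gain.
P = solve_discrete_are(A_hat, B_hat, np.eye(n), np.eye(m))
K = np.linalg.solve(np.eye(m) + B_hat.T @ P @ B_hat, B_hat.T @ P @ A_hat)

# Check whether the learned gain stabilizes the *true* system.
rho = max(abs(np.linalg.eigvals(A_true - B_true @ K)))
print(f"closed-loop spectral radius: {rho:.3f} (< 1 means stabilized)")
```

The sample-complexity question studied above asks, in essence, how short such a trajectory can be while still guaranteeing a stabilizing gain, especially since the open-loop instability makes long exploratory rollouts blow up.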
Learning and Control for Networked Systems
Reinforcement Learning (RL) has achieved many successes in single-agent systems, but its application to large-scale networked systems faces a major obstacle: scalability. Concretely, the state and action spaces of such networked systems can be exponentially large in the number of nodes; further, each agent has only a local observation of the state of the network. In this project, we investigate how to use the network structure to make RL scalable for networked systems; see the toy illustration after the references below.
- Guannan Qu, Adam Wierman, Na Li, Scalable Reinforcement Learning for Multi-Agent Networked Systems, Operations Research 2021 (link)
- Guannan Qu, Yiheng Lin, Adam Wierman, Na Li, Scalable Multi-Agent Reinforcement Learning for Networked Systems with Average Reward, NeurIPS 2020 (link)
- Yiheng Lin, Guannan Qu, Longbo Huang, Adam Wierman, Multi-Agent Reinforcement Learning in Stochastic Networked Systems, NeurIPS 2021. (link)
- Yizhou Zhang*, Guannan Qu*, Pan Xu*, Yiheng Lin, Zaiwei Chen, Adam Wierman, Global Convergence of Localized Policy Iteration in Networked Multi-Agent Reinforcement Learning, ACM SIGMETRICS 2023. (* equal contribution)
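The toy computation below illustrates the scalability obstacle and the truncation idea underlying this line of work: an exact joint Q-table is exponentially large in the number of agents, while Q-functions truncated to each agent's κ-hop neighborhood stay small. The line graph, local state-space size, and κ are illustrative.

```python
# Toy illustration: n agents on a line graph, each with s local states.
# An exact joint Q-table needs s**n entries, while a Q-function truncated to
# each agent's kappa-hop neighborhood needs only s**(2 * kappa + 1) entries
# per agent (interior agents; boundary agents need fewer).
s, kappa = 5, 1
for n_agents in [4, 8, 16, 32]:
    joint = s ** n_agents                     # intractable: exponential in n
    local = n_agents * s ** (2 * kappa + 1)   # total size of truncated tables
    print(f"n={n_agents:2d}: joint {joint:.1e} entries vs truncated {local}")
```

The theoretical results above show that, when interactions are local, such truncated objects approximate the exact ones with an error that decays exponentially in κ.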
As an application, we have also used RL in power systems:
- Scalable RL for microgrid inverter control (link), where we demonstrated the scalability of the proposed RL framework.
- Review paper on RL for power systems (link)
- Stable RL for voltage control (link)
Even without learning, the control of networked systems is already a challenging problem. To this end, I have developed fundamental theories on how to design distributed algorithms for control and optimization of networked systems using only local information and local communication; a sketch of one representative algorithm, gradient tracking, follows the references below.
- Eric Xu, Guannan Qu, Stability and Regret Bounds on Distributed Truncated Predictive Control for Networked Dynamical Systems, arXiv preprint arXiv:2310.06194.
- Eric Xu and Guannan Qu, Natural Policy Gradient Preserves Spatial Decay Properties for Control of Networked Dynamical Systems, IEEE Conference on Decision and Control, 2023.
- Sungho Shin, Yiheng Lin, Guannan Qu, Adam Wierman, Mihai Anitescu, Near-Optimal Distributed Linear-Quadratic Regulator for Networked Systems, accepted to SIAM Journal on Control and Optimization.
- Yiheng Lin, Judy Gan, Guannan Qu, Yash Kanoria, Adam Wierman, Decentralized Online Convex Optimization in Networked Systems, ICML 2022.
- Guannan Qu and Na Li, Accelerated Distributed Nesterov Gradient Descent, IEEE Transactions on Automatic Control, vol. 65, no. 6, pp. 2566-2581, June 2020.
- Guannan Qu and Na Li, Harnessing Smoothness to Accelerate Distributed Optimization, IEEE Transactions on Control of Network Systems, vol. 5, no. 3, pp. 1245-1260, Sept. 2018.
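As a concrete example from this line of work, here is a minimal sketch of gradient tracking in the spirit of "Harnessing Smoothness to Accelerate Distributed Optimization": each agent mixes its iterate with its neighbors' and descends along a local variable that tracks the network-average gradient. The least-squares data, ring topology, mixing weights, and step size below are illustrative choices, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# n agents on a ring, each holding a private quadratic f_i(x) = 0.5*||A_i x - b_i||^2;
# together they minimize the sum using only neighbor-to-neighbor communication.
n, d = 6, 3
A = [rng.normal(size=(4, d)) for _ in range(n)]
b = [rng.normal(size=4) for _ in range(n)]

def grad(i, x):
    return A[i].T @ (A[i] @ x - b[i])   # gradient of agent i's local objective

# Doubly stochastic mixing matrix for the ring (self plus two neighbors).
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.25

eta = 0.02                                        # step size (illustrative, not tuned)
x = np.zeros((n, d))                              # row i is agent i's iterate
s = np.array([grad(i, x[i]) for i in range(n)])   # gradient-tracking variables

for _ in range(1000):
    x_new = W @ x - eta * s                       # mix with neighbors, then descend
    # Update the tracker so that s remains an estimate of the average gradient.
    s = W @ s + np.array([grad(i, x_new[i]) - grad(i, x[i]) for i in range(n)])
    x = x_new

x_star = np.linalg.lstsq(np.vstack(A), np.concatenate(b), rcond=None)[0]
print("max deviation from centralized optimum:", np.abs(x - x_star).max())
```

The key design choice is the tracking variable s: a plain distributed gradient step with a constant step size converges only to a neighborhood of the optimum, whereas tracking the average gradient removes this bias.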
Model Predictive Control
Model Predictive Control (MPC) is one of the most popular and flexible controller design approaches, yet its performance guarantees have long been poorly understood, particularly for time-varying systems and systems with constraints. In this project, we propose a general perturbation analysis framework that bounds the regret of MPC; a minimal receding-horizon example follows the references below.
- Yiheng Lin, Yang Hu, Guanya Shi, Haoyuan Sun, Guannan Qu, Adam Wierman, Perturbation-based regret analysis of predictive control in linear time varying systems, NeurIPS 2021 (link)
- Yiheng Lin, Yang Hu, Guannan Qu, Tongxin Li, Adam Wierman, Bounded-regret MPC via perturbation analysis: prediction error, constraints, and nonlinearity, NeurIPS 2022.
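For concreteness, the sketch below shows the basic receding-horizon loop that the regret analyses above study, for a toy LTI system with quadratic cost: at each step, solve a k-step finite-horizon LQR problem by a backward Riccati recursion and apply only the first control. The dynamics, horizon, and terminal cost are illustrative assumptions; the papers handle far more general time-varying, constrained, and nonlinear settings.

```python
import numpy as np

# Toy double-integrator-like system x_{t+1} = A x_t + B u_t with quadratic cost.
A = np.array([[1.0, 0.2], [0.0, 1.0]])
B = np.array([[0.0], [0.2]])
Q, R = np.eye(2), 0.1 * np.eye(1)
k = 10                                    # prediction horizon

def mpc_gain(A, B, Q, R, k):
    """First-step feedback gain of the k-step finite-horizon LQR."""
    P = Q                                 # terminal cost (an assumed choice)
    for _ in range(k):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)     # backward Riccati recursion
    return K

x = np.array([5.0, 0.0])
for t in range(30):
    u = -mpc_gain(A, B, Q, R, k) @ x      # re-plan over the horizon at every step
    x = A @ x + B @ u
print("state after 30 MPC steps:", x)
```

For this time-invariant system the re-planning is redundant (the gain never changes); the perturbation-based analysis becomes interesting precisely when dynamics, costs, or predictions vary over time.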
Bridging Model-based and Model-Free Methods
Traditional controller synthesis typically starts with a first-principles model and designs a controller with provable stability and robustness guarantees. In contrast, recent RL approaches do not assume a model and instead learn a controller (often neural-network based) in a data-driven manner, which can perform well experimentally even for complex dynamical systems. However, the RL approach is often data- and computation-heavy, requires extensive tuning, and lacks provable guarantees. In this project, we seek to combine both approaches and achieve the best of both worlds; a sketch of the warm-start idea follows the references below.
- Guannan Qu, Chenkai Yu, Steven Low, Adam Wierman, Exploiting Linear Models for Model-Free Nonlinear Control: A Provably Convergent Policy Gradient Approach (link)
- Tongxin Li, Ruixiao Yang, Guannan Qu, Guanya Shi, Chenkai Yu, Adam Wierman, Steven Low, Robustness and Consistency in Linear Quadratic Control with Predictions, ACM SIGMETRICS 2022 (link).
- Tongxin Li, Ruixiao Yang, Guannan Qu, Yiheng Lin, Adam Wierman, Steven H. Low, Certifying Black-Box Policies With Stability for Nonlinear Control, IEEE Open Journal of Control Systems, vol. 2, 2023.
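The sketch below illustrates the warm-start idea in the spirit of the first paper above: initialize a linear feedback gain from the LQR solution of a crude linear model, then refine it with model-free (zeroth-order) policy gradient on the true nonlinear system. The pendulum-like dynamics, step sizes, and horizons are all illustrative, and this is a caricature of the approach rather than the paper's algorithm.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

rng = np.random.default_rng(2)
dt = 0.1

def step(x, u):                            # "true" nonlinear (pendulum-like) system
    th, om = x
    return np.array([th + dt * om, om + dt * (np.sin(th) + u[0])])

def rollout_cost(K, T=60):                 # finite-horizon quadratic cost under u = -Kx
    x, c = np.array([0.5, 0.0]), 0.0
    for _ in range(T):
        u = -K @ x
        c += x @ x + 0.1 * (u @ u)
        x = step(x, u)
    return c

# Model-based warm start: LQR for the linearization sin(th) ~ th.
A = np.array([[1.0, dt], [dt, 1.0]])
B = np.array([[0.0], [dt]])
P = solve_discrete_are(A, B, np.eye(2), 0.1 * np.eye(1))
K = np.linalg.solve(0.1 * np.eye(1) + B.T @ P @ B, B.T @ P @ A)
print("cost at LQR warm start:", rollout_cost(K))

# Model-free refinement: two-point zeroth-order gradient estimates.
sigma, lr = 0.05, 1e-3
for _ in range(200):
    D = rng.normal(size=K.shape)           # random perturbation direction
    g = (rollout_cost(K + sigma * D) - rollout_cost(K - sigma * D)) / (2 * sigma) * D
    K -= lr * g
print("cost after refinement:", rollout_cost(K))
```

Starting from a stabilizing model-based gain keeps every rollout bounded, which is the kind of structure the combined approach exploits.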
Application: Power Systems
Much of our research is inspired by applications in power systems, particularly the distributed control and coordination of distributed energy resources. A list of relevant power system publications follows, with a toy voltage-control sketch at the end.
- Han Xu, Jialin Zheng, Guannan Qu, A Scalable Network-Aware Multi-Agent Reinforcement Learning Framework for Decentralized Inverter-based Voltage Control, arXiv preprint arXiv:2312.04371 (2023).
- Xin Chen, Guannan Qu, Yujie Tang, Steven Low, Na Li, Reinforcement Learning for Selective Key Applications in Power Systems: Recent Advances and Future Challenges, accepted to IEEE Transactions on Smart Grid. (link)
- Yuanyuan Shi, Guannan Qu, Steven Low, Anima Anandkumar, Adam Wierman, Stability Constrained Reinforcement Learning for Real-Time Voltage Control, American Control Conference 2022. (link)
- Guannan Qu and Na Li, Optimal Distributed Feedback Voltage Control under Limited Reactive Power, IEEE Transactions on Power Systems, vol. 35, no. 1, pp. 315-331, January 2020.
- Sindri Magnússon, Guannan Qu and Na Li, Distributed Optimal Voltage Control with Asynchronous and Delayed Communication, IEEE Transactions on Smart Grid, vol. 11, no. 4, pp. 3469-3482, July 2020.
- Sindri Magnússon, Guannan Qu, Carlo Fischione, Na Li, Voltage Control Using Limited Communication, IEEE Transactions on Control of Network Systems, vol. 6, no. 3, pp. 993-1003, Sept. 2019.
- Niloy Patari, Anurag K. Srivastava, Guannan Qu, Na Li, Distributed Voltage Control for Three-Phase Unbalanced Distribution Systems with DERs and Practical Constraints, accepted to IEEE Transactions on Industry Applications.
- Xiaoqi Tan, Guannan Qu, Bo Sun, Na Li, and Danny H.K. Tsang, Optimal Scheduling of Battery Charging Station Serving Electric Vehicles Based on Battery Swapping, IEEE Transactions on Smart Grid, vol. 10, no. 2, pp. 1372-1384, March 2019.
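As a flavor of the voltage-control work above, here is a toy sketch of decentralized incremental voltage control under a linearized (LinDistFlow-style) model v = X q + v_env, where each bus updates its own reactive power from its own voltage measurement subject to capacity limits. The sensitivity matrix, voltage profile, limits, and step size are all illustrative, not from the papers.

```python
import numpy as np

rng = np.random.default_rng(3)

n = 8
X = np.eye(n) + 0.3 * np.ones((n, n)) / n   # assumed PSD voltage/reactive-power sensitivity
v_env = 1.0 + 0.05 * rng.normal(size=n)     # voltage profile with no control (per unit)
v_ref, q_max = 1.0, 0.2                     # target voltage and reactive power limit

q = np.zeros(n)                             # reactive power injections
alpha = 0.5                                 # step size (illustrative, not tuned)
for _ in range(100):
    v = X @ q + v_env                       # each bus measures its own voltage v[i]
    # Purely local feedback: bus i uses only v[i], clipped to its capacity.
    q = np.clip(q - alpha * (v - v_ref), -q_max, q_max)

print("max voltage deviation:", np.abs(X @ q + v_env - v_ref).max())
```

The appeal of such feedback rules is that each bus needs no network model and no communication; establishing when such iterates converge, and to what, under realistic limits is the type of guarantee pursued in the papers above.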
Optimization Theory