Learning for Optimization & Control I – Invited Special Session

Session Type: Lecture
Session Code: A1L-F
Location: Room 6
Date & Time: Wednesday March 22, 2023 (09:00 - 10:00)
Chair: Enrique Mallada, Mahyar Fazlyab
Track: 12

Papers (Paper ID, Paper Name, Authors, Abstract):
Paper ID: 3016
Title: Learning for Robust Optimization
Authors: Irina Wang, Cole Becker, Bart Van Parys, Bartolomeo Stellato
Abstract: We propose a data-driven method to automatically learn the uncertainty sets in decision-making problems affected by uncertain data. Our technique relies on differentiating the solution of robust optimization problems with respect to the parameters defining the uncertainty sets. By applying gradient-based techniques, we resize and reshape the uncertainty sets to conform to the underlying distribution while minimizing a specified loss function. Our approach is flexible and can learn a wide variety of uncertainty sets while preserving tractability. We also derive generalization guarantees for unseen uncertainty realizations by exploiting the structure of the underlying robust optimization problem. We implemented our work in LRO, a software package that allows the user to naturally express decision-making problems where the objective and parameters are uncertain, and to automatically learn the corresponding robust optimization formulations from data. Numerical experiments in portfolio optimization, optimal control, and inventory management show that our method outperforms traditional approaches in robust optimization in terms of out-of-sample performance while preserving the same constraint satisfaction guarantees.
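To make the idea concrete, here is a minimal sketch of tuning a single uncertainty-set parameter by gradient descent. It is not the paper's LRO package: the uncertainty set is assumed to be a Euclidean ball of radius `r` around the empirical mean return (so the worst-case return has the closed form `mu@x - r*||x||`), the inner robust problem is solved by simple projected gradient ascent over the simplex, and the gradient with respect to `r` is taken by finite differences rather than the paper's implicit differentiation. All function names are illustrative.

```python
import numpy as np

def project_simplex(v):
    # Euclidean projection onto the probability simplex {x >= 0, sum x = 1}
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.max(np.nonzero(u * np.arange(1, len(v) + 1) > (css - 1))[0])
    theta = (css[rho] - 1) / (rho + 1)
    return np.maximum(v - theta, 0.0)

def robust_portfolio(mu, r, steps=500, lr=0.05):
    # Worst case of u@x over the ball ||u - mu||_2 <= r is mu@x - r*||x||_2,
    # so the robust problem is max_x mu@x - r*||x||_2 over the simplex.
    x = np.ones(len(mu)) / len(mu)
    for _ in range(steps):
        g = mu - r * x / max(np.linalg.norm(x), 1e-12)
        x = project_simplex(x + lr * g)
    return x

def learn_radius(returns_train, returns_val, r0=0.5, steps=30, lr=0.1, eps=1e-2):
    # Tune the set radius r to minimize average validation loss,
    # using a finite-difference gradient as a stand-in for implicit
    # differentiation through the robust solution.
    mu = returns_train.mean(axis=0)
    val_loss = lambda r: -np.mean(returns_val @ robust_portfolio(mu, max(r, 0.0)))
    r = r0
    for _ in range(steps):
        g = (val_loss(r + eps) - val_loss(r - eps)) / (2 * eps)
        r = max(r - lr * g, 0.0)
    return r
```

A larger radius hedges more (the decision spreads across assets); the outer loop shrinks or grows it to match what the validation data actually rewards.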
Paper ID: 3074
Title: Safe Online Convex Optimization with Time-Varying Constraints: Algorithms with Bounded Dynamic Regret, and Applications to Safe Control
Authors: Hongyu Zhou, Vasileios Tzoumas
Abstract: We introduce the problem of Safe Online Convex Optimization with Time-Varying Constraints (Safe-OCO), providing the first algorithm with bounded dynamic regret, i.e., with bounded suboptimality against any sequence of safe time-varying decisions. Safe-OCO is a sequential game between an optimizer and an adversary: At each step t = 1,...,T, first the optimizer chooses a decision x_t such that x_t \in X_t, where X_t is a known safe set that is convex; then, the adversary reveals a convex loss function f_t such that the optimizer suffers the loss f_t(x_t). The optimizer aims to minimize its cumulative loss, despite knowing f_t only after x_t has been chosen, and despite X_{t−1} being possibly disjoint from X_t. We provide the first algorithm for Safe-OCO by generalizing the seminal Online Gradient Descent algorithm to the Safe-OCO setting. We apply our algorithm to enable, for the first time, the safe online control of linear time-varying systems in the presence of unpredictable process noise. We thus develop the first online controller with safety guarantees and with bounded dynamic regret against any safe optimal time-varying linear feedback control policy.
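The protocol described in the abstract can be sketched with projected online gradient descent. This is a simple stand-in for the paper's Safe-OCO algorithm, under illustrative assumptions: the safe sets X_t are axis-aligned boxes (so projection is a clip), and the adversary's losses are quadratics f_t(x) = ||x - theta_t||^2. The key detail matching the problem statement is that x_t is projected onto the current X_t before it is played, so every decision is safe even when X_{t-1} and X_t are disjoint.

```python
import numpy as np

def clip_box(x, lo, hi):
    # Euclidean projection onto the box [lo, hi]
    return np.minimum(np.maximum(x, lo), hi)

def safe_ogd(theta_seq, boxes, eta=0.2):
    # Projected online gradient descent over time-varying box safe sets.
    # At each step: project onto X_t, play x_t and pay f_t(x_t), then take
    # a gradient step on the loss f_t revealed AFTER x_t was chosen.
    x = np.zeros_like(theta_seq[0])
    losses = []
    for theta, (lo, hi) in zip(theta_seq, boxes):
        x = clip_box(x, lo, hi)                         # ensure x_t in X_t
        losses.append(float(np.sum((x - theta) ** 2)))  # loss on played x_t
        x = x - eta * 2.0 * (x - theta)                 # step on revealed f_t
    return losses
```

Dynamic regret compares the accumulated losses against the best safe decision inside each X_t individually, which is strictly harder than the static comparator of classical OCO.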
Paper ID: 3161
Title: An Accelerated Asynchronous Distributed Method for Convex Constrained Optimization Problems
Authors: Nazanin Abolfazli, Erfan Yazdandoost Hamedani, Afrooz Jalilzadeh
Abstract: We consider a class of multi-agent cooperative consensus optimization problems with local nonlinear convex constraints, where only those agents connected by an edge can directly communicate; hence, the optimal consensus decision should lie in the intersection of these private sets. We develop an asynchronous distributed accelerated primal-dual algorithm to solve the considered problem. The proposed scheme is, to the best of our knowledge, the first asynchronous method with an optimal convergence guarantee for this class of problems. In particular, we provide an optimal convergence rate of O(1/K) for suboptimality, infeasibility, and consensus violation.
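To illustrate the consensus-optimization setting (not the paper's accelerated asynchronous primal-dual method), here is a sketch of plain synchronous decentralized gradient descent: each agent mixes its iterate with its neighbors' iterates through a doubly stochastic matrix `W` encoding the communication graph, then takes a local gradient step. The graph here is complete for simplicity, losses are scalar quadratics f_i(x) = (x - a_i)^2, and there are no local constraint sets; all names are illustrative.

```python
import numpy as np

def decentralized_gd(grads, W, x0, steps=300, eta=0.05):
    # Synchronous decentralized gradient descent (a basic baseline for the
    # consensus problem; the paper's method is asynchronous and accelerated).
    # Each agent i holds x_i; one round is: mix with neighbors via W, then
    # step along its own local gradient.
    X = x0.copy()
    for _ in range(steps):
        X = W @ X - eta * np.array([g(xi) for g, xi in zip(grads, X)])
    return X

# Three agents with local losses f_i(x) = (x - a_i)^2; the consensus
# optimum of sum_i f_i is the mean of the a_i.
a = np.array([1.0, 2.0, 3.0])
grads = [lambda x, ai=ai: 2.0 * (x - ai) for ai in a]
W = np.full((3, 3), 1.0 / 3.0)  # complete graph, uniform averaging
X = decentralized_gd(grads, W, np.zeros(3))
```

With a constant step size the agents converge to a small neighborhood of the consensus optimum (here the mean of the a_i); the paper's contribution is handling asynchrony and local constraints while achieving the optimal O(1/K) rate.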