A5L-F: Learning for Optimization & Control IV - Invited Special Session
Session Type: Lecture
Session Code: A5L-F
Location: Room 6
Date & Time: Wednesday March 22, 2023 (15:20-16:20)
Chairs: Mahyar Fazlyab, Enrique Mallada
Track: 12
Paper ID | Paper Name | Authors | Abstract |
---|---|---|---|
3107 | Towards Optimal Sample Complexity for Learning Partially Observed Linear Dynamical Systems | Holden Lee | Identification of a linear time-invariant dynamical system from partial observations is a fundamental problem in control theory. In this talk, I'll discuss the following questions: what are the optimal non-asymptotic rates at which we can learn such systems using efficient algorithms? Are they achieved by classical subspace identification algorithms, or can we design more sample-efficient algorithms informed by statistical learning theory? In particular, a big challenge is to obtain rates depending on the inherent dimensionality (order) $d$ of the system, rather than other quantities such as the memory length of the system. We make progress towards this question by giving a new algorithm, based on a multi-scale low-rank approximation of Hankel matrices, which learns the system in $H_2$ error with near-optimal rates under observation noise. Based on https://arxiv.org/abs/2011.10006. |
3137 | Federated Learning for System Identification and Control | Han Wang, Leonardo Toso, James Anderson | We study the problem of learning a linear system model from the observations of M clients. The catch: Each client is observing data from a different dynamical system. This work addresses the question of how multiple clients collaboratively learn dynamical models in the presence of heterogeneity. We pose this problem as a federated learning problem and characterize the tension between achievable performance and system heterogeneity. Furthermore, our federated sample complexity result provides a constant factor improvement over the single agent setting. We describe a meta federated learning algorithm, FedSysID, that leverages existing federated algorithms at the client level. Finally, joint and federated learning approaches to LQR synthesis via policy gradient methods will be described. |
3169 | On the Sample Complexity of Stabilizing LTI Systems on a Single Trajectory | Yang Hu, Adam Wierman, Guannan Qu | Stabilizing an unknown dynamical system is one of the central problems in control theory. In this paper, we study the sample complexity of the learn-to-stabilize problem in Linear Time-Invariant (LTI) systems on a single trajectory. Current state-of-the-art approaches require a sample complexity linear in $n$, the state dimension, which incurs a state norm that blows up exponentially in $n$. We propose a novel algorithm based on spectral decomposition that only needs to learn a small part of the dynamical matrix acting on its unstable subspace. We show that, under proper assumptions, our algorithm stabilizes an LTI system on a single trajectory with $\tilde{O}(k)$ samples, where $k$ is the instability index of the system. This represents the first sub-linear sample complexity result for the stabilization of LTI systems in the regime where $k = o(n)$. |
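
For readers who want a concrete feel for the low-rank Hankel idea behind paper 3107, the following Python sketch builds a block Hankel matrix from (noisy) Markov-parameter estimates and truncates it to rank $d$ via the SVD. This is only the classical Ho-Kalman-style ingredient, not the multi-scale algorithm of the paper; all names, dimensions, and noise levels are illustrative assumptions.

```python
# Minimal sketch: rank-d truncation of a Hankel matrix built from estimated
# Markov parameters. Not the multi-scale algorithm of paper 3107.
import numpy as np

def hankel_from_markov(markov, rows, cols):
    """Stack estimated Markov parameters h_1, ..., h_{rows+cols-1} into a block Hankel matrix."""
    p, m = markov[0].shape  # output and input dimensions
    H = np.zeros((rows * p, cols * m))
    for i in range(rows):
        for j in range(cols):
            H[i * p:(i + 1) * p, j * m:(j + 1) * m] = markov[i + j]
    return H

def rank_d_truncation(H, d):
    """Best rank-d approximation of H via the SVD (Eckart-Young)."""
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    return U[:, :d] @ np.diag(s[:d]) @ Vt[:d, :]

# Toy example: noisy Markov parameters C A^t B of a random order-d system.
rng = np.random.default_rng(0)
d, p, m, T = 3, 2, 2, 20
A = 0.9 * rng.standard_normal((d, d)) / np.sqrt(d)
B, C = rng.standard_normal((d, m)), rng.standard_normal((p, d))
markov = [C @ np.linalg.matrix_power(A, t) @ B + 0.01 * rng.standard_normal((p, m))
          for t in range(T)]
H = hankel_from_markov(markov, rows=10, cols=10)
H_d = rank_d_truncation(H, d)
print(np.linalg.norm(H - H_d) / np.linalg.norm(H))  # relative error left after keeping rank d
```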
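
Similarly, the aggregation step underlying a federated system-identification scheme such as the one in paper 3137 can be sketched as follows: each client fits its own least-squares estimate of $A$ from its trajectory, and the server averages the estimates. This is a generic federated-averaging sketch under assumed dynamics $x_{t+1} = A_i x_t + w_t$, not the FedSysID algorithm itself; the paper's heterogeneity analysis is what governs how far such an average can be trusted.

```python
# Minimal sketch: one federated-averaging round over per-client least-squares
# estimates of the state matrix A. Illustrative only; not FedSysID.
import numpy as np

def client_estimate(states):
    """Ordinary least-squares estimate of A from one client's trajectory."""
    X, Y = states[:-1], states[1:]              # regressors x_t and targets x_{t+1}
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)   # solves X @ W ~ Y
    return W.T                                  # so that x_{t+1} ~ A_hat @ x_t

def federated_average(estimates, weights=None):
    """Server step: weighted average of the client estimates."""
    weights = np.ones(len(estimates)) / len(estimates) if weights is None else weights
    return sum(w * A for w, A in zip(weights, estimates))

rng = np.random.default_rng(1)
n, T, M = 4, 200, 5
A_true = 0.5 * np.eye(n)
estimates = []
for _ in range(M):
    A_i = A_true + 0.05 * rng.standard_normal((n, n))   # heterogeneous client systems
    x = np.zeros((T, n))
    x[0] = rng.standard_normal(n)
    for t in range(T - 1):
        x[t + 1] = A_i @ x[t] + 0.1 * rng.standard_normal(n)
    estimates.append(client_estimate(x))
A_fed = federated_average(estimates)
print(np.linalg.norm(A_fed - A_true))  # error of the averaged estimate
```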
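
Finally, the notion of an unstable subspace central to paper 3169 can be illustrated with an ordered Schur decomposition: for a known $A$, the $k$ eigenvalues outside the unit circle define the only directions a stabilizing controller must act on. The paper's contribution is to learn this subspace from a single trajectory; the sketch below merely extracts it from a given matrix, with all test values assumed for illustration.

```python
# Minimal sketch: isolate the unstable subspace of a known A via an ordered
# real Schur decomposition. Paper 3169 instead learns this subspace from data.
import numpy as np
from scipy.linalg import schur

def unstable_subspace(A):
    """Orthonormal basis of the unstable subspace and the instability index k."""
    # sort='ouc' moves eigenvalues outside the unit circle to the leading block
    T, Z, k = schur(A, output='real', sort='ouc')
    return Z[:, :k], k

rng = np.random.default_rng(2)
n = 8
# Diagonalizable test matrix with two unstable modes (instability index k = 2).
eigs = np.concatenate(([1.5, 1.1], 0.5 * rng.random(n - 2)))
V = rng.standard_normal((n, n))
A = V @ np.diag(eigs) @ np.linalg.inv(V)

basis, k = unstable_subspace(A)
print("instability index k =", k)    # expected: 2
print("basis shape:", basis.shape)   # (n, k): only k directions need to be learned
```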