Anastasios Tsiamis




Organisational unit: 03751 - Lygeros, John


Publications 1 - 10 of 14
  • Ziemann, Ingvar; Tsiamis, Anastasios; Lee, Bruce; et al. (2023)
    2023 62nd IEEE Conference on Decision and Control (CDC)
    This tutorial serves as an introduction to recently developed non-asymptotic methods in the theory of (mainly linear) system identification. We emphasize tools we deem particularly useful for a range of problems in this domain, such as the covering technique, the Hanson-Wright inequality, and the method of self-normalized martingales. We then employ these tools to give streamlined proofs of the performance of various least-squares based estimators for identifying the parameters in autoregressive models. We conclude by sketching out how the ideas presented herein can be extended to certain nonlinear identification problems.
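As a minimal illustration of the kind of estimator this tutorial analyzes, the sketch below fits a scalar AR(1) model by ordinary least squares; the system, noise level, and sample size are illustrative choices, not taken from the paper.

```python
import numpy as np

# Hedged sketch: ordinary least squares for a scalar AR(1) model
# x_{t+1} = a*x_t + w_t, one of the estimators whose finite-sample
# behavior non-asymptotic identification theory studies.
rng = np.random.default_rng(0)
a_true, N = 0.8, 5000
x = np.zeros(N + 1)
for t in range(N):
    x[t + 1] = a_true * x[t] + rng.standard_normal()

# Least-squares estimate: argmin_a sum_t (x_{t+1} - a*x_t)^2
a_hat = (x[1:] @ x[:-1]) / (x[:-1] @ x[:-1])
print(abs(a_hat - a_true))  # error shrinks roughly as O(1/sqrt(N))
```

For this stable system the estimation error decays at the usual square-root rate in the number of samples.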
  • Tsiamis, Anastasios; Abdalmoaty, Mohamed; Smith, Roy; et al. (2024)
    2024 IEEE 63rd Conference on Decision and Control (CDC)
    We study non-parametric frequency-domain system identification from a finite-sample perspective. We assume an open loop scenario where the excitation input is periodic and consider the Empirical Transfer Function Estimate (ETFE), where the goal is to estimate the frequency response at certain desired (evenly-spaced) frequencies, given input-output samples. We show that under sub-Gaussian colored noise (in the time domain) and stability assumptions, the ETFE estimates are concentrated around the true values. The error rate is of the order of $\mathcal{O}((d_{\mathrm{u}}+\sqrt{d_{\mathrm{u}}d_{\mathrm{y}}})\sqrt{M/N_{\mathrm{tot}}})$, where $N_{\mathrm{tot}}$ is the total number of samples, $M$ is the number of desired frequencies, and $d_{\mathrm{u}},\,d_{\mathrm{y}}$ are the dimensions of the input and output signals, respectively. This rate remains valid for general irrational transfer functions and does not require a finite order state-space representation. By tuning $M$, we obtain a $N_{\mathrm{tot}}^{-1/3}$ finite-sample rate for learning the frequency response over all frequencies in the $\mathcal{H}_{\infty}$ norm. Our result draws upon an extension of the Hanson-Wright inequality to semi-infinite matrices. We study the finite-sample behavior of ETFE in simulations.
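The ETFE itself is simple to state: with a periodic input, the ratio of output to input DFTs estimates the frequency response at the excited frequencies. The sketch below demonstrates this under illustrative assumptions (a short FIR system, a random-phase multisine input, and a noise level chosen for the demo), not the paper's exact setup.

```python
import numpy as np

# Hedged sketch of the Empirical Transfer Function Estimate (ETFE):
# average the output over periods, then take the DFT ratio.
rng = np.random.default_rng(1)
M, P = 64, 200                       # period length, number of periods

# Random-phase multisine: |U_k| is constant over the DFT grid, so the
# ratio below is well conditioned; conjugate symmetry keeps u real.
U = np.exp(1j * rng.uniform(0, 2 * np.pi, M))
U[0], U[M // 2] = 1.0, 1.0
U[M // 2 + 1:] = np.conj(U[1:M // 2][::-1])
u = (M * np.fft.ifft(U)).real        # one period; fft(u) == M * U
u_full = np.tile(u, P)

h = np.array([0.5, 0.3])             # illustrative true impulse response
y_full = np.convolve(u_full, h)[:M * P] + 0.1 * rng.standard_normal(M * P)

y_avg = y_full.reshape(P, M).mean(axis=0)   # period averaging kills noise
G_hat = np.fft.fft(y_avg) / np.fft.fft(u)   # ETFE on the DFT grid
G_true = np.fft.fft(h, M)                   # true response on the grid
err = np.max(np.abs(G_hat - G_true))
print(err)
```

Averaging over periods suppresses the noise, so the pointwise error on the DFT grid becomes small, consistent with the concentration behavior the paper quantifies.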
  • Tsiamis, Anastasios; Kalogerias, Dionysios; Ribeiro, Alejandro; et al. (2025)
    Automatica
    We propose a new risk-constrained formulation of the Linear Quadratic (LQ) stochastic control problem for general partially-observed systems. Classical risk-neutral LQ controllers, although optimal in expectation, might be ineffective under infrequent, yet statistically significant extreme events. To effectively trade between average and extreme event performance, we introduce a new risk constraint, which restricts the cumulative expected predictive variance of the state penalty by a user-prescribed level. We show that, under certain conditions on the process noise, the optimal risk-aware controller can be evaluated explicitly and in closed form. In fact, it is affine relative to the minimum mean square error (mmse) state estimate. The affine term pushes the state away from directions where the noise exhibits heavy tails, by exploiting the third-order moment (skewness) of the noise. The linear term regulates the state more strictly in risky directions, where both the prediction error (conditional) covariance and the state penalty are simultaneously large; this is achieved by inflating the state penalty within a new filtered Riccati difference equation. We also prove that the new risk-aware controller is internally stable, regardless of parameter tuning, in the special cases of (i) fully-observed systems, and (ii) partially-observed systems with Gaussian noise. The properties of the proposed risk-aware LQ framework are lastly illustrated via indicative numerical examples.
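To give a feel for the "inflated penalty" mechanism, the sketch below runs a standard finite-horizon Riccati recursion twice: once with a nominal state penalty Q, and once with Q inflated along a "risky" direction. The matrices, the inflation rule Q + mu*Sigma, and the horizon are illustrative stand-ins, not the paper's filtered Riccati difference equation.

```python
import numpy as np

# Hedged sketch: inflating the state penalty in a risky direction
# makes the resulting LQ gain regulate that direction more strictly.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])           # illustrative double integrator
B = np.array([[0.0],
              [0.1]])
Q = np.eye(2)
R = np.array([[1.0]])
Sigma = np.diag([0.0, 4.0])          # "risky" direction: second state
mu = 2.0                             # illustrative risk weight

def riccati_gain(Qe, T=200):
    """Backward Riccati recursion; returns the converged gain."""
    P = Qe.copy()
    for _ in range(T):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Qe + A.T @ P @ (A - B @ K)
    return K

K_nom = riccati_gain(Q)              # risk-neutral gain
K_risk = riccati_gain(Q + mu * Sigma)  # gain with inflated penalty
print(K_nom, K_risk)
```

The risk-aware gain acts more aggressively on the penalized state, mirroring (in a simplified, fully observed setting) the stricter regulation in risky directions that the paper derives.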
  • Lee, Bruce D.; Ziemann, Ingvar; Tsiamis, Anastasios; et al. (2023)
    2023 62nd IEEE Conference on Decision and Control (CDC)
    We present a local minimax lower bound on the excess cost of designing a linear-quadratic controller from offline data. The bound is valid for any offline exploration policy that consists of a stabilizing controller and an energy bounded exploratory input. The derivation leverages a relaxation of the minimax estimation problem to Bayesian estimation, and an application of the van Trees inequality. We show that the bound aligns with system-theoretic intuition. In particular, we demonstrate that the lower bound increases when the optimal control objective value increases. We also show that the lower bound increases when the system is poorly excitable, as characterized by the spectrum of the controllability Gramian of the system mapping the noise to the state and the H-infinity norm of the system mapping the input to the state. We further show that for some classes of systems, the lower bound may be exponential in the state dimension, demonstrating exponential sample complexity for learning the linear-quadratic regulator.
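The controllability Gramian mentioned above is easy to compute for a concrete system; the sketch below uses an illustrative two-state example (not from the paper) and truncates the infinite sum at a finite horizon.

```python
import numpy as np

# Hedged sketch: the controllability Gramian of the noise-to-state
# channel, W = sum_k A^k B B^T (A^T)^k, whose spectrum enters the
# excitability discussion. System matrices are illustrative.
A = np.array([[0.9, 1.0],
              [0.0, 0.9]])           # stable, spectral radius 0.9
B = np.array([[0.0],
              [1.0]])                # noise enters the second state

W = np.zeros((2, 2))
Ak_B = B.copy()
for _ in range(500):                 # truncated sum; A is stable,
    W += Ak_B @ Ak_B.T               # so 500 terms is ample here
    Ak_B = A @ Ak_B

# The smallest eigenvalue of W measures how weakly the least-excited
# direction is reached by the noise.
eigs = np.linalg.eigvalsh(W)
print(eigs)
```

A direction with a small Gramian eigenvalue is hard to excite, which is the regime where the paper's lower bound grows.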
  • Tsiamis, Anastasios; Pappas, George J. (2023)
    IEEE Transactions on Automatic Control
    In this article, we consider the problem of predicting observations generated online by an unknown, partially observable linear system, which is driven by Gaussian noise. In the linear Gaussian setting, the optimal predictor in the mean square error sense is the celebrated Kalman filter, which can be explicitly computed when the system model is known. When the system model is unknown, we have to learn how to predict observations online based on finite data, suffering possibly a nonzero regret with respect to the Kalman filter's prediction. We show that it is possible to achieve a regret of the order of polylog(N) with high probability, where N is the number of observations collected. This is achieved using an online least-squares algorithm, which exploits the approximately linear relation between future observations and past observations. The regret analysis is based on the stability properties of the Kalman filter, recent statistical tools for finite sample analysis of system identification, and classical results for the analysis of least-squares algorithms for time series. Our regret analysis can also be applied to other predictors, e.g., multiple step-ahead prediction, or prediction under exogenous inputs including closed-loop prediction. A fundamental technical contribution is that our bounds hold even for the class of nonexplosive systems (including marginally stable systems), which was not addressed before in the case of online prediction.
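The key structural fact exploited here is that future observations are approximately a linear function of a window of past observations, so a least-squares regression on that window can approach the Kalman predictor. The sketch below demonstrates this with an illustrative scalar state-space system and a batch fit (an online variant would update the coefficients recursively at each step).

```python
import numpy as np

# Hedged sketch: regressing the next observation on the last p
# observations approximates the optimal (Kalman) one-step predictor.
# System parameters, noise levels, and p are illustrative.
rng = np.random.default_rng(2)
N, p = 3000, 4
a, x = 0.9, 0.0                      # latent state x_{t+1} = a x_t + w_t
y = np.zeros(N)
for t in range(N):
    x = a * x + rng.standard_normal()
    y[t] = x + 0.5 * rng.standard_normal()   # noisy observation

# Regression of y_t on [y_{t-1}, ..., y_{t-p}]
Y = y[p:]
Phi = np.column_stack([y[p - 1 - j:N - 1 - j] for j in range(p)])
theta, *_ = np.linalg.lstsq(Phi, Y, rcond=None)
mse = np.mean((Y - Phi @ theta) ** 2)
print(mse)   # close to the steady-state Kalman one-step error
```

For this system the achieved mean square error sits near the steady-state Kalman prediction error, well below the raw variance of the observations.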
  • Ziemann, Ingvar; Tsiamis, Anastasios; Sandberg, Henrik; et al. (2022)
    2022 IEEE 61st Conference on Decision and Control (CDC)
    We study stochastic policy gradient methods from the perspective of control-theoretic limitations. Our main result is that ill-conditioned linear systems in the sense of Doyle inevitably lead to noisy gradient estimates. We also give an example of a class of stable systems in which policy gradient methods suffer from the curse of dimensionality. Finally, we show how our results extend to partially observed systems.
  • Koumpis, Nikolas; Tsiamis, Anastasios; Kalogerias, Dionysios (2022)
    2022 IEEE 61st Conference on Decision and Control (CDC)
    We propose a methodology for risk-averse quadratic regulation of partially observed Linear Time-Invariant (LTI) systems disturbed by process and output noise. To compensate for the variability induced by both types of noise, state regulation is subject to two risk constraints. These constraints render the resulting controller cautious of stochastic disturbances by restricting the statistical variability (a simplified version of the cumulative expected predictive variance) of both the state and the output. Our formulation results in an optimal risk-averse policy that preserves favorable characteristics of classical Linear Quadratic (LQ) control. In particular, the optimal policy has an affine structure with respect to the minimum mean square error (mmse) estimates. The linear component of the policy regulates the state more strictly in riskier directions, where the process and output noise covariance, cross-covariance, and the corresponding penalties are simultaneously large. This is achieved by "inflating" the state penalty in a systematic way. The additional affine terms force the state against pure and cross third-order statistics of the process and output disturbances. Another favorable characteristic of our optimal policy is that it can be precomputed offline, thus avoiding limitations of prior work. Stability analysis shows that the derived controller is always internally stable regardless of parameter tuning. The functionality of the proposed risk-averse policy is illustrated via extensive numerical simulations on a working example.
  • Tsiamis, Anastasios; Ziemann, Ingvar; Matni, Nikolai; et al. (2023)
    IEEE Control Systems Magazine
    Learning algorithms have become an integral component of modern engineering solutions. Examples range from self-driving cars and recommender systems to finance and even critical infrastructure, many of which are typically under the purview of control theory. While these algorithms have already shown tremendous promise in certain applications [1], there are considerable challenges, in particular, with respect to guaranteeing safety and gauging fundamental limits of operation. Thus, as we integrate tools from machine learning into our systems, we also require an integrated theoretical understanding of how they operate in the presence of dynamic and system-theoretic phenomena. Over the past few years, intense efforts have been made toward this goal: an integrated theoretical understanding of learning, dynamics, and control. While much work remains to be done, a relatively clear and complete picture has begun to emerge for (fully observed) linear dynamical systems. These systems already allow for reasoning about concrete failure modes, thus helping to indicate a path forward. Moreover, while simple at a glance, these systems can be challenging to analyze. Recently, a host of methods from learning theory and high-dimensional statistics, not typically in the control-theoretic toolbox, have been introduced to our community. This tutorial survey serves as an introduction to these results for learning in the context of unknown linear dynamical systems (see 'Summary'). We review the current state of the art and emphasize which tools are needed to arrive at these results. Our focus is on characterizing the sample efficiency and fundamental limits of learning algorithms. Along the way, we also delineate a number of open problems. More concretely, this article is structured as follows. We begin by revisiting recent advances in the finite-sample analysis of system identification. Next, we discuss how these finite-sample bounds can be used downstream to give guaranteed performance for learning-based offline control. The final technical section discusses the more challenging online control setting. Finally, in light of the material discussed, we outline a number of future directions.
  • Impicciatore, Anastasia; Tsiamis, Anastasios; Lun, Yuriy Zacchia; et al. (2022)
    2022 IEEE 61st Conference on Decision and Control (CDC)
    This note studies state estimation in wireless networked control systems with secrecy against eavesdropping. Specifically, a sensor transmits system state information to the estimator over a legitimate user link, and an eavesdropper overhears these data over its own link, independent of the user link. Each connection may be affected by packet losses and is modeled by a finite-state Markov channel (FSMC), an abstraction widely used to design wireless communication systems. This paper presents a novel concept of optimal mean square expected secrecy over FSMCs and delineates the design of a secrecy parameter that requires the user mean square estimation error (MSE) to be bounded and the eavesdropper MSE to be unbounded. We illustrate the developed results on an example of an inverted pendulum on a cart whose parameters are estimated remotely over a wireless link exposed to an eavesdropper.
  • Tsiamis, Anastasios; Karapetyan, Aren; Li, Yueshan; et al. (2024)
    Proceedings of Machine Learning Research, Proceedings of the 41st International Conference on Machine Learning
    In this paper, we study the problem of online tracking in linear control systems, where the objective is to follow a moving target. Unlike classical tracking control, the target is unknown, non-stationary, and its state is revealed sequentially, thus fitting the framework of online non-stochastic control. We consider the case of quadratic costs and propose a new algorithm, called predictive linear online tracking (PLOT). The algorithm uses recursive least squares with exponential forgetting to learn a time-varying dynamic model of the target. The learned model is used in the optimal policy under the framework of receding horizon control. We show that the dynamic regret of PLOT scales with $\mathcal{O}(\sqrt{TV_T})$, where $V_T$ is the total variation of the target dynamics and $T$ is the time horizon. Unlike prior work, our theoretical results hold for non-stationary targets. We implement PLOT on a real quadrotor and provide open-source software, thus showcasing one of the first successful applications of online control methods on real hardware.
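The model-learning ingredient, recursive least squares with exponential forgetting, can be sketched in a few lines. The scalar target model, the regime switch, the known excitation input, and the forgetting factor below are all illustrative assumptions for the demo, not the paper's quadrotor setup.

```python
import numpy as np

# Hedged sketch: RLS with exponential forgetting tracking a target
# model z_{t+1} = a_t z_t + u_t + w_t whose coefficient a_t switches
# mid-run. The input u_t is known, so the regression isolates a_t.
rng = np.random.default_rng(3)
lam = 0.99                           # forgetting factor (illustrative)
theta = np.zeros(1)                  # estimate of a_t
P = np.array([[100.0]])              # RLS "covariance" term
z = 1.0

for t in range(2000):
    a_t = 0.5 if t < 1000 else 0.8   # abrupt change in the dynamics
    u = rng.choice([-1.0, 1.0])      # known excitation
    z_next = a_t * z + u + 0.01 * rng.standard_normal()
    phi = np.array([z])
    e = (z_next - u) - phi @ theta   # prediction error on a_t * z
    # Standard RLS-with-forgetting update
    K = P @ phi / (lam + phi @ P @ phi)
    theta = theta + K * e
    P = (P - np.outer(K, phi @ P)) / lam
    z = z_next

print(theta)   # tracks the post-switch coefficient
```

Because old data are discounted by the forgetting factor, the estimate re-converges after the switch instead of averaging the two regimes, which is exactly why forgetting is needed for non-stationary targets.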