Browsing by Subject "Dynamic Games"
Now showing 1 - 2 of 2
Item: First-Order Algorithms for Unconstrained/Constrained Dynamic Games and Identification of Linear Dynamic Systems with Multiplicative Noise (2021-02) Di, Bolei

The thesis consists of two parts. The first focuses on numerical methods for computing the Nash equilibria of unconstrained and constrained dynamic games, and the second studies linear system identification with multiplicative noise. We approach both topics from the perspectives of optimal control and optimization theory.

Dynamic games arise when multiple agents with differing objectives control a dynamic system. They model a wide variety of applications in economics, defense, energy systems, and other fields. However, compared to single-agent control problems, computational methods for dynamic games are relatively limited. We focus on the state-space formulation of dynamic games. What distinguishes our constrained dynamic games from the unconstrained ones is the presence of extra constraints that, at each stage, limit the space of actions based on the current state and the other players' actions. As in the single-agent case, only specific dynamic games can be solved exactly, so approximation algorithms are required. We focus on iterative numerical methods for finding the Nash equilibria of constrained and unconstrained full-information non-zero-sum dynamic games. We make a clear distinction between the open-loop Nash equilibrium (OLNE) and the feedback Nash equilibrium (FNE), which differ substantially in structure and therefore call for different numerical methods. In the unconstrained case, we show how the stagewise Newton method, the approximated Bellman recursion (ABR), and the popular differential dynamic programming (DDP), all of which originated in single-agent optimization or optimal control, can be adapted to multi-player dynamic games. While the Newton-step method targets the OLNE, the other two methods relate to the FNE.
We show that the Newton step can be computed efficiently and inherits its original quadratic convergence rate to open-loop Nash equilibria, and that the ABR and DDP methods are very similar and can be used to find local feedback O(ε²)-Nash equilibria. For the constrained case, we show how to extend the projected gradient and Douglas-Rachford (DR) splitting methods, which originated in constrained optimization and variational inequalities, to compute OLNEs. The resulting algorithms converge locally to open-loop Nash equilibria at linear rates. Furthermore, we extend the approximated Bellman recursion proposed for unconstrained dynamic games to find local FNEs of constrained dynamic games. In the case of linear dynamics and polyhedral constraints, we show that this local feedback policy is a local approximated FNE. All of our methods exploit the temporal structure of a dynamic system and therefore have linear complexity in the number of stages, a major improvement over previously existing methods.

Linear systems with multiplicative noise (LSMN) have been approached from robust control, filtering and optimal control, system identification, and reinforcement learning since their early emergence in the 1960s. We focus on improving system identification methods for LSMN by offering an algorithm based on least-squares estimation, which estimates both the first and second moments of the system parameters and offers a probability bound on the estimates. The proposed method applies to a more general problem formulation than existing ones. We further develop an online estimation scheme and a robust control scheme based on the estimates and their bound. Overall, the thesis focuses on proposing novel numerical algorithms for control and game problems that have not been satisfactorily solved.
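The idea of estimating both first and second moments of a multiplicative-noise system by least squares can be illustrated on a toy scalar case. This is a minimal sketch under illustrative assumptions (the model x_{t+1} = (a + d_t)x_t + b u_t + w_t and all names here are hypothetical), not the thesis's algorithm, and it omits the probability bounds the abstract mentions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scalar model: x_{t+1} = (a + d_t) x_t + b u_t + w_t,
# where d_t is zero-mean multiplicative noise and w_t is additive noise.
a, b, sd, sw = 0.7, 0.5, 0.2, 0.1  # true parameters (illustrative values)
T = 20000
x = np.zeros(T + 1)
u = rng.normal(size=T)
for t in range(T):
    x[t + 1] = (a + sd * rng.normal()) * x[t] + b * u[t] + sw * rng.normal()

# Step 1: ordinary least squares on the mean dynamics recovers the
# first moments (a, b), since E[x_{t+1} | x_t, u_t] = a x_t + b u_t.
Phi = np.column_stack([x[:-1], u])
theta, *_ = np.linalg.lstsq(Phi, x[1:], rcond=None)
a_hat, b_hat = theta

# Step 2: the squared residuals satisfy E[r_t^2] = Var(d) x_t^2 + Var(w),
# so a second least-squares fit exposes the second moments (the variances
# of the multiplicative and additive noise).
r2 = (x[1:] - Phi @ theta) ** 2
Psi = np.column_stack([x[:-1] ** 2, np.ones(T)])
var_d, var_w = np.linalg.lstsq(Psi, r2, rcond=None)[0]
```

The two-stage structure (fit the mean, then regress squared residuals) is what allows a single least-squares toolchain to recover both moment levels.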
The effectiveness of the proposed algorithms in terms of their scope, convergence, and complexity is analyzed. Inspiration for these new numerical algorithms typically came from existing methods for optimization, optimal control, and variational inequality problems. For each topic, the three major components are the problem formulation, the proposed algorithms, and the analysis of those algorithms.

Item: Forward and Inverse Methods in Optimal Control and Dynamic Game Theory (2019-08) Awasthi, Chaitanya

Optimal control theory is ubiquitous in the mathematical sciences and engineering. However, in a classroom setting we barely move beyond linear quadratic regulator problems, if at all. In this work, we demystify the necessary conditions of optimality associated with nonlinear optimal control by deriving them from first principles. We also present two numerical schemes for solving these problems. Moving forward, we present an extension of inverse optimal control, which is the problem of computing a cost function with respect to which observed state and control trajectories are optimal. This extension lets us handle systems subject to state and/or control constraints. We then generalize the methodology of optimal control theory to solve constrained non-zero-sum dynamic games. Dynamic games are optimization problems involving several players, each trying to optimize their own cost function subject to constraints. We present a novel method to compute the Nash equilibrium of a game by combining aspects of the direct and indirect methods for solving optimal control problems. Finally, we study constrained inverse dynamic games, a problem analogous to constrained inverse optimal control. Here, we show that an inverse dynamic game problem can be decoupled and solved as an inverse optimal control problem for each player individually. Throughout the work, examples are provided to demonstrate the efficacy of the methods developed.
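Both abstracts center on computing Nash equilibria under constraints. A minimal sketch of the projected-gradient idea on a toy two-player static quadratic game (not either thesis's dynamic-game method; the cost functions and step size here are illustrative assumptions) looks like this:

```python
import numpy as np

# Toy game: player i minimizes J_i = (x_i - 1)^2 + g * x_1 * x_2
# over the constraint set x_i in [0, 1]. The coupling g is illustrative.
g = 0.5

def own_gradients(x):
    # Partial derivative of each player's cost w.r.t. their own decision.
    return np.array([2 * (x[0] - 1) + g * x[1],
                     2 * (x[1] - 1) + g * x[0]])

x = np.zeros(2)
for _ in range(200):
    # Simultaneous projected-gradient step: each player descends their own
    # cost, then projects back onto the feasible interval [0, 1].
    x = np.clip(x - 0.2 * own_gradients(x), 0.0, 1.0)

# Setting both first-order conditions to zero gives the Nash equilibrium
# x = (0.8, 0.8), which the iteration converges to at a linear rate.
```

The linear convergence rate visible here mirrors, in miniature, the local linear rates claimed for the projected-gradient and Douglas-Rachford OLNE algorithms in the first thesis.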