Browsing by Subject "Inverse Dynamic Games"
Item: Forward and Inverse Methods in Optimal Control and Dynamic Game Theory (2019-08)
Author: Awasthi, Chaitanya

Optimal control theory is ubiquitous in the mathematical sciences and engineering. In a classroom setting, however, we rarely move beyond linear quadratic regulator problems, if at all. In this work, we demystify the necessary conditions of optimality associated with nonlinear optimal control by deriving them from first principles. We also present two numerical schemes for solving these problems. Building on this, we present an extension of inverse optimal control, the problem of computing a cost function with respect to which observed state and control trajectories are optimal. This extension allows us to handle systems subject to state and/or control constraints. We then generalize the methodology of optimal control theory to solve constrained non-zero-sum dynamic games. Dynamic games are optimization problems involving several players, each trying to optimize their respective cost function subject to constraints. We present a novel method to compute a Nash equilibrium of a game by combining aspects of direct and indirect methods for solving optimal control problems. Finally, we study constrained inverse dynamic games, a problem analogous to constrained inverse optimal control. Here, we show that an inverse dynamic game problem can be decoupled and solved as an inverse optimal control problem for each player individually. Throughout the work, examples are provided to demonstrate the efficacy of the methods developed.
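
For readers unfamiliar with the inverse optimal control problem mentioned in the abstract, the following is a minimal sketch for the unconstrained linear-quadratic case: given state and control trajectories generated by an (unknown) LQR policy, it recovers diagonal cost weights whose optimal policy reproduces the observations. The dynamics matrices, weight values, and least-squares fitting approach are illustrative assumptions and are not taken from the thesis, which treats the constrained and game-theoretic settings.

# A minimal sketch of inverse optimal control for the unconstrained
# linear-quadratic case. All matrices and weights below are illustrative
# assumptions, not values from the thesis.
import numpy as np
from scipy.linalg import solve_discrete_are
from scipy.optimize import minimize

# Assumed discrete-time dynamics x_{k+1} = A x_k + B u_k
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])

def lqr_gain(q_diag, r_diag):
    """Infinite-horizon LQR feedback gain K for diagonal weights Q, R."""
    Q, R = np.diag(q_diag), np.diag(r_diag)
    P = solve_discrete_are(A, B, Q, R)
    return np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

# --- Generate "observed" optimal trajectories from hidden true weights ---
K_true = lqr_gain([1.0, 0.5], [0.2])
xs, us = [np.array([1.0, -1.0])], []
for _ in range(50):
    u = -K_true @ xs[-1]
    us.append(u)
    xs.append(A @ xs[-1] + B @ u)
xs, us = np.array(xs[:-1]), np.array(us)

# --- Inverse problem: find weights whose optimal policy matches the data ---
def residual(log_params):
    q_diag, r_diag = np.exp(log_params[:2]), np.exp(log_params[2:])
    K = lqr_gain(q_diag, r_diag)
    return np.sum((us + xs @ K.T) ** 2)   # ||u_k + K x_k||^2 over all samples

sol = minimize(residual, np.zeros(3), method="Nelder-Mead")
q_hat, r_hat = np.exp(sol.x[:2]), np.exp(sol.x[2:])
# Cost functions are only identifiable up to scale, so compare ratios.
print("recovered Q/R ratio:", q_hat / r_hat)
print("true      Q/R ratio:", np.array([1.0, 0.5]) / 0.2)

The key design point this sketch illustrates is that the recovered cost is only determined up to a positive scaling, which is why the comparison at the end is between weight ratios rather than raw values.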