Forward and Inverse Methods in Optimal Control and Dynamic Game Theory

Author: Awasthi, Chaitanya
Description: University of Minnesota M.S.M.E. thesis, August 2019. Major: Mechanical Engineering. Advisors: Andrew Lamperski, Rajesh Rajamani. 1 computer file (PDF); x, 79 pages.
Handle: https://hdl.handle.net/11299/208947
Keywords: Constrained Optimization; Dynamic Games; Inverse Dynamic Games; Inverse Optimal Control; Numerical Optimization; Optimal Control
Language: English
Type: Thesis or Dissertation

Abstract

Optimal control theory is ubiquitous in the mathematical sciences and engineering. However, in a classroom setting we rarely move beyond linear quadratic regulator problems, if at all. In this work, we demystify the necessary conditions of optimality associated with nonlinear optimal control by deriving them from first principles. We also present two numerical schemes for solving these problems. Next, we present an extension of inverse optimal control, which is the problem of computing a cost function with respect to which observed state and control trajectories are optimal. This extension allows us to handle systems that are subject to state and/or control constraints. We then generalize the methodology of optimal control theory to solve constrained non-zero-sum dynamic games. Dynamic games are optimization problems involving several players, each trying to optimize their respective cost function subject to constraints. We present a novel method for computing the Nash equilibrium of a game by combining aspects of the direct and indirect methods for solving optimal control problems. Finally, we study constrained inverse dynamic games, a problem analogous to constrained inverse optimal control. Here, we show that an inverse dynamic game can be decoupled and solved as an inverse optimal control problem for each of the players individually. Throughout the work, examples are provided to demonstrate the efficacy of the methods developed.
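As a concrete anchor for the first contribution, the following is the standard form of the first-order necessary conditions (Pontryagin's minimum principle) for a fixed-horizon nonlinear problem without path constraints; the notation here is a generic choice, not necessarily the one used in the thesis.

\[
\begin{aligned}
&\min_{u(\cdot)} \;\; \phi(x(T)) + \int_0^T L(x(t),u(t))\,dt
\qquad \text{s.t.} \qquad \dot{x} = f(x,u), \quad x(0) = x_0, \\[4pt]
&H(x,u,\lambda) = L(x,u) + \lambda^{\top} f(x,u), \\[4pt]
&\dot{\lambda} = -\frac{\partial H}{\partial x}, \qquad
\lambda(T) = \frac{\partial \phi}{\partial x}\bigg|_{x(T)}, \qquad
u^{*}(t) \in \arg\min_{u}\, H\big(x^{*}(t),u,\lambda(t)\big).
\end{aligned}
\]

The state equation runs forward from \(x(0)\) while the costate equation runs backward from \(\lambda(T)\), which is what makes these conditions a two-point boundary value problem rather than an ordinary initial value problem.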
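The abstract does not name the two numerical schemes. Consistent with its later reference to direct and indirect methods, the canonical pair would be (i) an indirect method, which solves the boundary value problem above, and (ii) a direct method, which transcribes the problem into a finite-dimensional nonlinear program. A minimal sketch of direct transcription, assuming a forward-Euler discretization with step \(\Delta t\):

\[
\min_{\substack{x_0,\dots,x_N \\ u_0,\dots,u_{N-1}}} \;\;
\phi(x_N) + \sum_{k=0}^{N-1} L(x_k,u_k)\,\Delta t
\qquad \text{s.t.} \qquad
x_{k+1} = x_k + \Delta t\, f(x_k,u_k), \quad x_0 = \bar{x}_0 ,
\]

where both the states and the controls become decision variables and the dynamics appear as equality constraints for an off-the-shelf NLP solver.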
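For the constrained inverse problem, a common formulation, offered here as a sketch under our own assumptions rather than the thesis's exact method, parameterizes the running cost as \(L_\theta\), attaches multipliers to the dynamics and to inequality path constraints \(g(x,u) \le 0\), and searches for a \(\theta\) and multipliers that make the observed samples \((\hat{x}_k, \hat{u}_k)\) stationary:

\[
\min_{\theta,\;\{\lambda_k\},\;\{\mu_k \ge 0\}} \;\;
\sum_{k} \Big\| \nabla_{u_k} \Big( L_\theta(\hat{x}_k,\hat{u}_k)
+ \lambda_k^{\top} f(\hat{x}_k,\hat{u}_k)
+ \mu_k^{\top} g(\hat{x}_k,\hat{u}_k) \Big) \Big\|^2
\qquad \text{s.t.} \qquad \mu_k^{\top} g(\hat{x}_k,\hat{u}_k) = 0 .
\]

A full treatment would also enforce the stationarity conditions with respect to the states, which couple \(\lambda_k\) and \(\lambda_{k+1}\); only the control-stationarity residual is shown here for brevity.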
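The solution concept for the non-zero-sum games is the Nash equilibrium: a strategy profile from which no single player can improve by deviating unilaterally. For two players with cost functionals \(J_1, J_2\) and admissible control sets \(\mathcal{U}_1, \mathcal{U}_2\):

\[
J_1(u_1^{*},u_2^{*}) \le J_1(u_1,u_2^{*}) \quad \forall\, u_1 \in \mathcal{U}_1,
\qquad
J_2(u_1^{*},u_2^{*}) \le J_2(u_1^{*},u_2) \quad \forall\, u_2 \in \mathcal{U}_2 ,
\]

where the two players are coupled through shared dynamics \(\dot{x} = f(x,u_1,u_2)\) and, in the constrained setting the thesis studies, through state and control constraints as well.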
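The decoupling result in the final contribution can be read schematically off the per-player necessary conditions; this is our reading under Nash-type stationarity assumptions, and the thesis's precise statement may differ. With the other players' observed inputs \(u_{-i}\) held fixed at their measured values, each player's observed trajectory must be stationary for a single-player constrained problem:

\[
H_i = L_{\theta_i}(x,u_i,u_{-i}) + \lambda_i^{\top} f(x,u_1,\dots,u_N) + \mu_i^{\top} g_i(x,u_i),
\qquad
\nabla_{u_i} H_i \big|_{(\hat{x},\hat{u})} = 0, \quad i = 1,\dots,N .
\]

Each player's stationarity system involves only that player's unknowns \((\theta_i, \lambda_i, \mu_i)\), so the inverse dynamic game separates into \(N\) independent inverse optimal control problems.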