Browsing by Subject "Stochastic Control"
Persistency of Excitation, Nonlinear Function Approximation, and Stochastic Contraction Analysis for Learning in Model Reference Adaptive Control (2023-08). Lekang, Tyler.

Machine learning has recently achieved unprecedented success in language processing and modeling, image and video classification and generation, and recommendation and dynamic pricing systems. The control of dynamic systems has also benefited from these advances, particularly through reinforcement learning for tasks such as robotic navigation and the control of nuclear fusion processes. We study learning in another area where it applies naturally: adaptive control systems, which must estimate and identify uncertainties in the plant in order to apply their adaptive control laws. With this application in view, we study stochastic contraction and convex projection, persistency of excitation, and function approximation.

The first part of the thesis is motivated by the problem of quantitatively bounding the convergence of adaptive control methods for stochastic systems to a stationary distribution. Such bounds are useful for analyzing statistics of trajectories and for choosing appropriate step sizes in simulations. To this end, we extend a methodology for (unconstrained) stochastic differential equations (SDEs) that provides contractions in a specially chosen Wasserstein distance. That theory covers only unconstrained SDEs under fairly restrictive assumptions on the drift terms, whereas typical adaptive control schemes constrain the learned parameters, and their update rules violate those drift conditions. We therefore extend the contraction theory to constrained systems represented by reflected stochastic differential equations and generalize the allowable drifts. We show how the general theory yields quantitative contraction bounds for a nonlinear stochastic adaptive regulation problem.

The second part of the thesis defines geometric criteria that are used to establish sufficient conditions for persistency of excitation with vector functions constructed from single hidden-layer neural networks with step or ReLU activation functions. We show that these conditions hold when employing reference system tracking, as is commonly done in adaptive control. We demonstrate the results numerically on a system with linearly parameterized activations of this type and show that the parameter estimates converge to the true values when the sufficient conditions are met.

The third part of the thesis studies function approximation. Classical results in neural network approximation theory show how arbitrary continuous functions can be approximated by networks with a single hidden layer, under mild assumptions on the activation function. The classical theory, however, gives no constructive means of generating the network parameters that achieve a desired accuracy. Recent results have demonstrated that for specialized activation functions, such as ReLUs, high accuracy can be achieved via linear combinations of randomly initialized activations. These works rely on specialized integral representations of the target function that depend on the specific activation used. This part of the thesis defines mollified integral representations, which provide a means to form integral representations of target functions using activations for which no direct integral representation is currently known.
The new construction enables approximation guarantees for randomly initialized networks using any activation for which an established, possibly non-constructive, base approximation exists. We extend the results to the supremum norm and show how this enables application to an extended, approximate version of (linear) model reference adaptive control. Toy numerical sketches illustrating each part follow below.
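As a toy illustration of the first part's setting, the sketch below runs a projected Euler-Maruyama scheme for an SDE whose parameter lives in a ball, with a contracting drift: the projection plays the role of the reflection term in a reflected SDE. The drift, noise level, and constraint radius are all hypothetical choices for illustration, not the thesis's construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def project(theta, radius=1.0):
    # Euclidean projection onto the ball ||theta|| <= radius;
    # this stands in for the reflection term of a reflected SDE.
    norm = np.linalg.norm(theta)
    return theta if norm <= radius else theta * (radius / norm)

def reflected_em(drift, theta0, dt=1e-3, steps=10_000, sigma=0.1):
    # Projected Euler-Maruyama: take an unconstrained step,
    # then project back onto the constraint set.
    theta = np.array(theta0, dtype=float)
    for _ in range(steps):
        dw = rng.normal(scale=np.sqrt(dt), size=theta.shape)
        theta = project(theta + drift(theta) * dt + sigma * dw)
    return theta

# Hypothetical contracting drift pulling theta toward a point in the ball.
theta_star = np.array([0.5, -0.3])
print("final estimate:", reflected_em(lambda th: -(th - theta_star), [1.0, 1.0]))
```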
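The persistency-of-excitation condition in the second part can be probed numerically by checking that the windowed Gram matrix of the feature vector has a strictly positive smallest eigenvalue along a tracked trajectory. The sketch below does this for a small, randomly initialized single hidden-layer ReLU feature map; the network size, weights, and reference trajectory are assumptions for illustration, not the thesis's sufficient conditions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Random single hidden-layer ReLU features phi(x) = relu(W x + b);
# W and b are fixed (hypothetical values), only outer weights are learned.
W = rng.normal(size=(8, 2))
b = rng.normal(size=8)

def phi(x):
    return np.maximum(W @ x + b, 0.0)

def pe_level(xs, dt):
    # Finite-window PE condition: smallest eigenvalue of sum phi phi^T dt.
    G = sum(np.outer(phi(x), phi(x)) for x in xs) * dt
    return np.linalg.eigvalsh(G)[0]

# State trajectory tracking a sinusoidal reference, as in reference tracking.
t = np.arange(0.0, 10.0, 0.01)
xs = np.stack([np.sin(t), np.cos(2 * t)], axis=1)
print("lambda_min over window:", pe_level(xs, dt=0.01))
```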
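In the spirit of the third part's random-feature results, the next sketch fits only the outer linear coefficients of randomly initialized ReLU activations to a smooth target by least squares and reports a sup-norm error on a test grid. The target function, feature count, and sampling distributions are hypothetical, not those analyzed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical target function on [-1, 1].
f = lambda x: np.sin(3 * x)

# Randomly initialized ReLU activations; only the linear outer
# coefficients are fit, as in random-feature approximation results.
n_features = 200
w = rng.normal(size=n_features)
b = rng.uniform(-1, 1, size=n_features)

def features(x):
    # Feature matrix with entries relu(w_k * x_i + b_k).
    return np.maximum(np.outer(x, w) + b, 0.0)

x_train = np.linspace(-1, 1, 500)
coef, *_ = np.linalg.lstsq(features(x_train), f(x_train), rcond=None)

x_test = np.linspace(-1, 1, 101)
err = np.max(np.abs(features(x_test) @ coef - f(x_test)))
print(f"sup-norm error on test grid: {err:.2e}")
```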
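Finally, a minimal scalar model reference adaptive control loop shows where such approximators enter: a matched uncertainty is represented as a linear combination of fixed features, and a gradient adaptive law drives the tracking error. All plant, reference-model, and gain values here are hypothetical; this sketches the standard linear MRAC structure the thesis builds on, not the thesis's extended approximate scheme.

```python
import numpy as np

# Scalar MRAC sketch (hypothetical values):
# plant      x'  = a x + u + theta^T phi(x)
# reference  xm' = a_m xm + r
a, a_m, gamma, dt = 1.0, -2.0, 5.0, 1e-3

def phi(x):
    # Hypothetical regressor for the matched uncertainty.
    return np.array([x, np.tanh(x)])

theta_true = np.array([0.5, -1.0])   # unknown to the controller
theta_hat = np.zeros(2)
x, x_m = 0.5, 0.0

for k in range(20_000):
    r = np.sin(0.002 * k)                         # reference input at t = k*dt
    e = x - x_m                                   # tracking error
    u = -(a - a_m) * x + r - theta_hat @ phi(x)   # adaptive control law
    theta_hat = theta_hat + gamma * e * phi(x) * dt  # gradient adaptive law
    x = x + (a * x + u + theta_true @ phi(x)) * dt
    x_m = x_m + (a_m * x_m + r) * dt

print("tracking error:", x - x_m, " theta_hat:", theta_hat)
```

With the Lyapunov function V = e^2/2 + (theta - theta_hat)^T (theta - theta_hat)/(2 gamma), this adaptive law gives V' = a_m e^2 <= 0, so the tracking error decays; parameter convergence additionally requires the persistency of excitation studied in the second part.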