Author: Li, Ang
Title: Efficient learning in linearly solvable MDP models
Date issued: 2012-06
Date available: 2012-08-28
URI: https://hdl.handle.net/11299/132378
Type: Thesis or Dissertation
Language: en-US
Subject: Computer science
Description: University of Minnesota M.S. thesis. June 2012. Major: Computer science. Advisor: Prof. Paul Schrater. 1 computer file (PDF); vi, 38 pages, appendix p. 37.

Abstract: Linearly solvable Markov decision process (MDP) models are a powerful subclass of MDPs whose simple structure allows the optimal policy to be written directly in terms of the uncontrolled (passive) dynamics of the environment and the goals of the agent. However, no learning algorithms had been developed for this class of models. In this research, inspired by Todorov's method of computing optimal actions, we showed how to construct passive dynamics from any transition matrix, used Bayesian updating to estimate the model parameters, and applied approximate and efficient Bayesian exploration to speed learning. In addition, the computational cost of learning was reduced by performing Bayesian updates intermittently, which lowered the frequency with which the Bellman equation (in either its standard form or Todorov's form) had to be solved. We also gave a polynomial theoretical bound on the time for the learning process of the new algorithm to converge, which reduces to a linear bound for the subclass of reinforcement learning (RL) problems modeled by MDPs in which the transition error depends only on the agent itself. Test results for the algorithm in a grid world were presented, comparing it with the BEB (Bayesian Exploration Bonus) algorithm. The results showed that our algorithm learned more than BEB without losing convergence speed, so that its advantage increased as the environment became more complex. We also showed that our algorithm's performance is more stable after convergence.
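
For readers unfamiliar with the linearly solvable setting, the following is a minimal sketch of the formulation the abstract refers to, using the notation standard in Todorov's work on linearly solvable MDPs; the symbols q, p, u, v, and z below are assumptions from that literature, not taken from the thesis itself. The per-step cost combines a state cost q(x) with a KL penalty for deviating from the passive dynamics p(x'|x):

\[
\ell(x, u) = q(x) + \mathrm{KL}\bigl(u(\cdot \mid x)\,\|\,p(\cdot \mid x)\bigr).
\]

Writing the optimal cost-to-go v(x) through the desirability function z(x) = e^{-v(x)}, the Bellman equation becomes linear in z,

\[
z(x) = e^{-q(x)} \sum_{x'} p(x' \mid x)\, z(x'),
\]

and the optimal controlled dynamics are the passive dynamics reweighted by desirability,

\[
u^{*}(x' \mid x) = \frac{p(x' \mid x)\, z(x')}{\sum_{x''} p(x'' \mid x)\, z(x'')},
\]

which is the sense in which the policy is written directly in terms of the passive dynamics and the agent's goals.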