Title

Efficient learning in linearly solvable MDP models.

Authors

Li, Ang

Published Date

2012-06

Type

Thesis or Dissertation

Abstract

Linearly solvable Markov Decision Process (MDP) models are a powerful subclass of problems whose simple structure allows the optimal policy to be written directly in terms of the uncontrolled (passive) dynamics of the environment and the goals of the agent. However, no learning algorithms had previously been developed for this class of models. In this research, inspired by Todorov's method of computing optimal actions, we showed how to construct passive dynamics from any transition matrix, used Bayesian updating to estimate the model parameters, and applied approximate, efficient Bayesian exploration to speed learning. In addition, we reduced the computational cost of learning through intermittent Bayesian updating, which lowers the frequency of solving the Bellman equation (in either its standard form or Todorov's form). We also gave a polynomial theoretical bound on the time to convergence of the learning process for our new algorithm, and from it derived a linear time bound for the subclass of reinforcement learning (RL) problems, modeled as MDPs, in which the transition error depends only on the agent itself. We presented test results for our algorithm in a grid world, comparing it with the BEB algorithm. The results showed that our algorithm learned more than BEB without losing convergence speed, and that its advantage grew as the environment became more complex. We also showed that our algorithm's performance is more stable after convergence.
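
To make the framework concrete, below is a minimal sketch (not the thesis's code) of two ingredients the abstract refers to: Todorov's linear form of the Bellman equation, in which the desirability function z satisfies z = diag(exp(-q)) P z for passive dynamics P and state costs q, and conjugate (Dirichlet) Bayesian updating of the transition model combined with intermittent re-solves. All names (solve_desirability, DirichletModel, the 5-state toy environment) are illustrative assumptions, the thesis's approximate Bayesian exploration bonus is omitted, and for simplicity the model is fit to transitions sampled from the true dynamics rather than to on-policy experience.

```python
import numpy as np

def solve_desirability(P, q, iters=2000, tol=1e-10):
    # Power iteration for Todorov's linear Bellman equation
    # z = diag(exp(-q)) @ P @ z: z is the principal eigenvector
    # of the nonnegative matrix diag(exp(-q)) @ P.
    G = np.exp(-q)
    z = np.ones(len(q))
    for _ in range(iters):
        z_next = G * (P @ z)
        z_next /= np.linalg.norm(z_next)  # fix the scale; z is an eigenvector
        if np.linalg.norm(z_next - z) < tol:
            return z_next
        z = z_next
    return z

def optimal_policy(P, z):
    # Optimal controlled dynamics reweight the passive dynamics:
    # u*(s'|s) is proportional to p(s'|s) * z(s').
    u = P * z[None, :]
    return u / u.sum(axis=1, keepdims=True)

class DirichletModel:
    # Conjugate Bayesian transition model: an independent
    # Dirichlet(alpha, ..., alpha) prior on each row of the
    # transition matrix, updated by counting observed transitions.
    def __init__(self, n_states, alpha=1.0):
        self.counts = np.full((n_states, n_states), alpha)

    def observe(self, s, s_next):
        self.counts[s, s_next] += 1.0

    def posterior_mean(self):
        return self.counts / self.counts.sum(axis=1, keepdims=True)

# Toy loop: update the posterior after every observed transition,
# but re-solve the Bellman equation only every `period` steps
# (the "intermittent Bayesian updating" of the abstract).
rng = np.random.default_rng(0)
n, period = 5, 50
P_true = rng.dirichlet(np.ones(n), size=n)   # unknown environment
q = rng.uniform(0.0, 1.0, size=n)            # state costs
model = DirichletModel(n)
s = 0
for t in range(2000):
    s_next = rng.choice(n, p=P_true[s])      # observe a transition
    model.observe(s, s_next)
    if t % period == 0:                      # intermittent solve
        P_hat = model.posterior_mean()
        u = optimal_policy(P_hat, solve_desirability(P_hat, q))
    s = s_next
print(np.round(u, 3))                        # controlled dynamics under the learned model
```

Power iteration applies here because diag(exp(-q)) P is a nonnegative matrix, so a positive principal eigenvector exists; the intermittent schedule amortizes the comparatively expensive eigenvector solve over many cheap count updates, which is the cost reduction the abstract describes.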

Description

University of Minnesota M.S. thesis. June 2012. Major: Computer science. Advisor: Prof. Paul Schrater. 1 computer file (PDF); vi, 38 pages, appendix p. 37.

Suggested citation

Li, Ang. (2012). Efficient learning in linearly solvable MDP models. Retrieved from the University Digital Conservancy, https://hdl.handle.net/11299/132378.
