The performance of a Reinforcement Learning (RL) agent depends on the accuracy of its approximated state value functions. Tile coding (Sutton and Barto, 1998), a function approximation method, generalizes approximated state value functions across the entire state space using a set of tile features (discrete features derived from continuous features). In this method, the shape and size of the tiles are decided manually. In this work, we propose various adaptive tile coding methods that automate the choice of tile shape and size. The proposed methods select split points using a random tile generator, the number of states represented by each feature, the frequencies of observed features, and the difference between the deviations of predicted value functions from Monte Carlo estimates. The RL agents developed using these methods are evaluated in three RL environments: the puddle world problem, the mountain car problem, and the cart pole balance problem. The results obtained are used to assess the efficiency of each proposed adaptive tile coding method.
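To make the baseline concrete, the sketch below shows standard tile coding with manually chosen tile sizes, the setting the abstract contrasts with the proposed adaptive methods. This is an illustrative implementation, not the thesis's code; the class name `TileCoder` and all parameter names are assumptions introduced here.

```python
import numpy as np

class TileCoder:
    """Uniform tile coding over a continuous state space.

    Illustrative sketch only: tile widths here are fixed by hand,
    which is the manual baseline described in the abstract. The
    thesis's adaptive methods would instead choose the split points
    automatically (e.g., from feature frequencies or value-error
    deviations).
    """

    def __init__(self, low, high, tiles_per_dim=8, n_tilings=4):
        self.low = np.asarray(low, dtype=float)
        self.high = np.asarray(high, dtype=float)
        self.tiles_per_dim = tiles_per_dim
        self.n_tilings = n_tilings
        # Width of one tile in each dimension (chosen manually).
        self.tile_width = (self.high - self.low) / tiles_per_dim
        # Each tiling is shifted by a fraction of a tile width so the
        # tilings overlap, which is what provides generalization.
        self.offsets = [self.tile_width * t / n_tilings
                        for t in range(n_tilings)]
        self.n_features = n_tilings * tiles_per_dim ** len(self.low)

    def active_features(self, state):
        """Return one active (binary) tile index per tiling."""
        state = np.asarray(state, dtype=float)
        tiles = self.tiles_per_dim
        features = []
        for t, offset in enumerate(self.offsets):
            coords = np.floor((state - self.low + offset)
                              / self.tile_width).astype(int)
            coords = np.clip(coords, 0, tiles - 1)
            # Flatten the per-dimension tile coordinates into a single
            # feature index, then shift by this tiling's block.
            flat = 0
            for c in coords:
                flat = flat * tiles + c
            features.append(t * tiles ** len(coords) + flat)
        return features

    def value(self, state, weights):
        """Approximate value: sum of weights of the active tiles."""
        return sum(weights[f] for f in self.active_features(state))
```

For example, a 2-D state in the unit square with 4 tiles per dimension and 2 tilings activates exactly two of the 32 binary features, and the value estimate is the sum of the two corresponding weights.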
University of Minnesota M.S. thesis. March 2012. Major: Computer science. Advisor: Dr. Richard Maclin. 1 computer file (PDF); viii, 109 pages.
Adaptive tile coding methods for the generalization of value functions in the RL state space.
Retrieved from the University of Minnesota Digital Conservancy.
Content distributed via the University of Minnesota's Digital Conservancy may be subject to additional license and use restrictions applied by the depositor.