S. Kamdem, N. Sueda, and H. Ohki (Japan)
Coarse coding, linear gradient descent, Sarsa, reinforcement learning.
This paper presents a method based on coarse coding to approximate the value function of a reinforcement learning problem over a continuous domain. The approach starts from a blank state space and gradually populates it with state features to build the agent’s knowledge. The critical portions of the domain are discovered autonomously, and the resolution is adaptively increased to refine the optimal policy. Experiments conducted in two benchmark domains show that the learning speed of this method is competitive with the most efficient representations under the widely adopted function approximation method of tile coding.
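To make the idea concrete, the following is a minimal sketch of coarse coding with a linear gradient-descent update, assuming a one-dimensional continuous state in [0, 1) and fixed overlapping interval features; the feature layout, names, and parameters here are illustrative assumptions, not the paper's adaptive construction.

```python
# Minimal sketch of coarse coding with a linear gradient-descent update.
# Assumes a 1-D continuous state in [0, 1) covered by overlapping
# interval features; all names and parameters are illustrative.

N_FEATURES = 20
WIDTH = 0.15  # receptive-field width of each feature
CENTERS = [i / N_FEATURES for i in range(N_FEATURES)]

def active_features(s):
    """Indices of coarse-coding features whose receptive field covers s."""
    return [i for i, c in enumerate(CENTERS) if abs(s - c) <= WIDTH / 2]

def value(weights, s):
    """Linear value estimate: sum of the weights of the active features."""
    return sum(weights[i] for i in active_features(s))

def td_update(weights, s, target, alpha=0.1):
    """Gradient-descent step toward a TD target; for a linear
    approximator the gradient is 1 for each active feature, 0 elsewhere."""
    feats = active_features(s)
    error = target - value(weights, s)
    for i in feats:
        weights[i] += alpha / len(feats) * error
    return weights

w = [0.0] * N_FEATURES
for _ in range(200):
    w = td_update(w, 0.5, target=1.0)
print(round(value(w, 0.5), 2))  # converges toward 1.0
```

Because the receptive fields overlap, each update generalizes to nearby states, which is the property the paper exploits when it adds finer features only in the critical regions of the domain.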