Reinforcement Learning with History Lists

Solving Partially Observable Decision Processes by Using Short Term Memory

Suedwestdeutscher Verlag fuer Hochschulschriften (21.04.2009)

€ 69,90


A very general framework for modeling uncertainty in learning environments is given by Partially Observable Markov Decision Processes (POMDPs). In a POMDP setting, the learning agent infers a policy for acting optimally in all possible states of the environment while receiving only observations of these states. The basic idea for coping with partial observability is to include memory in the representation of the policy. Perfect memory is provided by the belief space, i.e. the space of probability distributions over environmental states. However, computing policies defined on the belief space requires a considerable amount of prior knowledge about the learning problem and is expensive in terms of computation time.

The author, Stephan Timmer, presents a reinforcement learning algorithm for solving POMDPs based on short-term memory. In contrast to belief states, short-term memory is not capable of representing optimal policies in general, but it is far more practical and requires no prior knowledge about the learning problem. It can be shown that the algorithm can also be used to solve large Markov Decision Processes (MDPs) with continuous, multi-dimensional state spaces.
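To make the short-term memory idea concrete, here is a minimal sketch: tabular Q-learning whose decision context is the current observation plus a fixed-length history list of recent observation/action pairs. The toy T-maze environment and all names below are hypothetical illustrations of the general technique, not the algorithm from the book.

```python
import random
from collections import defaultdict, deque

# Hypothetical toy T-maze POMDP: at the start the agent sees a cue telling
# it which arm of the final junction pays off; while walking down the
# corridor it only observes "corridor", so the interior is aliased and a
# memoryless policy must guess at the junction. A short history list can
# carry the cue forward.
CORRIDOR_LEN = 2
ACTIONS = ["forward", "up", "down"]

def reset():
    cue = random.choice(["up", "down"])  # which arm is rewarded this episode
    return (0, cue)                      # hidden state = (position, cue)

def observe(state):
    pos, cue = state
    if pos == 0:
        return "cue-" + cue              # the cue is visible only at the start
    if pos == CORRIDOR_LEN:
        return "junction"
    return "corridor"                    # aliased interior observation

def step(state, action):
    pos, cue = state
    if pos < CORRIDOR_LEN:               # inside the corridor
        if action == "forward":
            pos += 1
        return (pos, cue), 0.0, False
    # At the junction the episode ends; the cued arm is rewarded.
    return state, (1.0 if action == cue else -1.0), True

def q_learning_with_history(k=2, episodes=5000, alpha=0.1, gamma=0.95, eps=0.1):
    """Tabular Q-learning whose 'state' is the current observation plus a
    history list of the last k (observation, action) pairs."""
    Q = defaultdict(float)

    def greedy(hist, obs):
        return max(ACTIONS, key=lambda a: Q[(hist, obs, a)])

    for _ in range(episodes):
        state = reset()
        memory = deque(maxlen=k)         # the short-term memory
        done = False
        while not done:
            obs = observe(state)
            hist = tuple(memory)
            action = (random.choice(ACTIONS) if random.random() < eps
                      else greedy(hist, obs))
            state, reward, done = step(state, action)
            memory.append((obs, action))
            if done:
                target = reward
            else:
                nxt = (tuple(memory), observe(state))
                target = reward + gamma * max(Q[nxt + (a,)] for a in ACTIONS)
            Q[(hist, obs, action)] += alpha * (target - Q[(hist, obs, action)])
    return Q

if __name__ == "__main__":
    Q = q_learning_with_history()
    print(f"learned {len(Q)} history/observation/action values")
```

With the history list disabled (k = 0), the junction decision degenerates to a guess because the cue is no longer visible when it matters; with k >= 2 the cue stays in memory long enough to pick the rewarded arm, without ever constructing a belief state.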

Book details:

ISBN-13: 978-3-8381-0621-2
ISBN-10: 3838106210
EAN: 9783838106212
Book language: German
By (author): Stephan Timmer
Number of pages: 160
Published on: 21.04.2009
Category: Informatics, IT