A fuzzy decision tree-based robust Markov game controller for robot manipulators
H. Shah,
Published in Inderscience Publishers
Volume: 4
Issue: 4
Pages: 417 - 439
The two-player zero-sum Markov game framework offers an effective platform for designing robust controllers. In Markov game-based learning, theoretical convergence of the learning process cannot be guaranteed when a function approximator is used. However, fusing Q-learning with a decision tree (DT) function approximator has shown good learning performance and more reliable convergence. It scales better to larger input spaces with lower memory requirements, and can solve problems that are infeasible using table lookup. This motivates us to introduce a DT function approximator into the Markov game reinforcement learning (RL) framework. This approach, however, handles only discrete actions, whereas realistic applications demand continuous state-action spaces. In this paper, we propose a Markov game framework for continuous state-action space systems using a fuzzy DT as the function approximator. Simulation experiments on a two-link robot manipulator bring out the advantages of the proposed structure in terms of robust performance and computational efficiency. Copyright © 2010 Inderscience Enterprises Ltd.
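The zero-sum Markov game learning the abstract builds on can be illustrated with a minimal tabular minimax-Q style update. This is a hedged sketch, not the paper's method: the paper uses a fuzzy DT function approximator over continuous state-action spaces, while the code below is discrete and tabular, and it replaces the usual linear-program value computation with a pure-strategy max-min simplification. All names (the Q dictionary layout, `minimax_value`, `minimax_q_update`) are illustrative assumptions.

```python
# Sketch of two-player zero-sum Markov game learning (minimax-Q style).
# Tabular and discrete, with a pure-strategy max-min value in place of
# the linear-program step; illustrative only, not the paper's fuzzy-DT
# continuous-space method.

def minimax_value(Q, s, actions, opp_actions):
    # Worst-case value of state s: the agent maximises over its
    # actions the minimum over the opponent's (disturbance) actions.
    return max(min(Q[(s, a, o)] for o in opp_actions) for a in actions)

def minimax_q_update(Q, s, a, o, r, s_next, actions, opp_actions,
                     alpha=0.1, gamma=0.9):
    # Standard temporal-difference update toward the minimax target.
    target = r + gamma * minimax_value(Q, s_next, actions, opp_actions)
    Q[(s, a, o)] += alpha * (target - Q[(s, a, o)])
    return Q[(s, a, o)]

# Toy usage: two states, two agent actions, two opponent actions.
Q = {(s, a, o): 0.0 for s in (0, 1) for a in (0, 1) for o in (0, 1)}
minimax_q_update(Q, 0, 0, 0, r=1.0, s_next=1,
                 actions=(0, 1), opp_actions=(0, 1))
```

In the full minimax-Q algorithm the agent's policy at each state is a mixed strategy obtained by linear programming over the Q-values; the max-min shortcut above keeps the sketch short and dependency-free.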