Inferring dependencies in Embodiment-based modular reinforcement learning

Jacob, D., Polani, D. and Nehaniv, C.L. (2005) Inferring dependencies in Embodiment-based modular reinforcement learning. In: TAROS 2005, pp. 103-110.

The state-spaces needed to describe realistic physical embodied agents are extremely large, which presents a serious challenge to classical reinforcement learning schemes. In previous work (Jacob et al., 2005a; Jacob et al., 2005b) we introduced our EMBER (EMbodiment-Based modulaR) reinforcement learning system, which describes a novel method for decomposing agents into modules based on the agent's embodiment. This modular decomposition factorises the state-space and dramatically improves performance in unknown and dynamic environments. However, while there are great advantages to be gained from a factorised state-space, the question of dependencies between modules cannot be ignored. We present a development of the work reported in (Jacob et al., 2004) which shows, in a simple example, how dependencies may be identified using a heuristic approach. Results show that the system is able to quickly discover and act upon dependencies, even where they are neither simple nor deterministic.
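To make the idea of a factorised, module-based decomposition concrete, the following is a minimal Python sketch. It is not the EMBER implementation from the paper: the names Module, mean_recent_td and maybe_add_dependency are illustrative, and a TD-error-based rule stands in for the paper's actual dependency-detection heuristic. Each module runs tabular Q-learning over its own slice of the state variables; a module whose prediction error stays persistently high is assumed to be missing a relevant variable, and its local state slice is grown by one candidate variable.

from collections import defaultdict
import random

class Module:
    """One module in an EMBER-style decomposition: tabular Q-learning
    over its own subset (slice) of the agent's state variables."""

    def __init__(self, var_idx, n_actions, alpha=0.1, gamma=0.9, eps=0.1):
        self.var_idx = tuple(var_idx)   # indices of the state variables this module sees
        self.n_actions = n_actions
        self.q = defaultdict(float)     # (local_state, action) -> value
        self.alpha, self.gamma, self.eps = alpha, gamma, eps
        self.td_errors = []             # tracked for the dependency heuristic

    def local(self, state):
        # Project the full factored state onto this module's slice.
        return tuple(state[i] for i in self.var_idx)

    def act(self, state):
        # Epsilon-greedy action selection over the local Q-table.
        s = self.local(state)
        if random.random() < self.eps:
            return random.randrange(self.n_actions)
        return max(range(self.n_actions), key=lambda a: self.q[(s, a)])

    def update(self, state, action, reward, next_state):
        # Standard one-step Q-learning update on the local slice.
        s, s2 = self.local(state), self.local(next_state)
        best_next = max(self.q[(s2, a)] for a in range(self.n_actions))
        td = reward + self.gamma * best_next - self.q[(s, action)]
        self.q[(s, action)] += self.alpha * td
        self.td_errors.append(abs(td))

def mean_recent_td(module, window=200):
    recent = module.td_errors[-window:]
    return sum(recent) / len(recent) if recent else 0.0

def maybe_add_dependency(module, candidate_var, threshold=0.5):
    """Heuristic (an assumption, not the paper's): persistently high TD error
    suggests the module's local state is missing relevant information, so
    grow its slice by one candidate variable and relearn."""
    if mean_recent_td(module) > threshold and candidate_var not in module.var_idx:
        module.var_idx = module.var_idx + (candidate_var,)
        module.q.clear()           # old table was indexed by the smaller slice
        module.td_errors.clear()

# Example: a module that initially conditions only on state variable 0,
# later extended to variable 2 if its prediction error stays high.
m = Module(var_idx=(0,), n_actions=4)
maybe_add_dependency(m, candidate_var=2)

Clearing the old Q-table on expansion is the simplest possible choice; a more careful implementation might initialise the expanded table from the old values so the module does not relearn from scratch.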
