Learning the Condition of Satisfaction of an Elementary Behavior in Dynamic Field Theory
Published in: Paladyn (Warsaw), Vol. 6, No. 1
Main Authors:
Format: Journal Article
Language: English
Published: De Gruyter Open, 18-11-2015
Summary: In order to proceed along an action sequence, an autonomous agent has to recognize that the intended final condition of the previous action has been achieved. In previous work, we have shown how a sequence of actions can be generated by an embodied agent using a neural-dynamic architecture for behavioral organization, in which each action has an intention and a condition of satisfaction. These components are represented by dynamic neural fields and are coupled to the motors and sensors of the robotic agent. Here, we demonstrate how the mappings between intended actions and their resulting conditions may be learned, rather than pre-wired. We use reward-gated associative learning, in which, over many instances of externally validated goal achievement, the agent learns the perceptual conditions that are expected to accompany goal achievement. After learning, the external reward is no longer needed to recognize that the expected outcome has been achieved. This method was implemented using dynamic neural fields and tested on a real-world E-Puck mobile robot and a simulated NAO humanoid robot.
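The reward-gated associative learning described in the summary can be illustrated with a minimal sketch: an external reward signal gates Hebbian strengthening of weights from an active intention node to a perceptual field, and after training the learned weights alone detect the condition of satisfaction. All field sizes, rates, thresholds, and the toy sensor pattern below are illustrative assumptions, not the paper's actual dynamic-field architecture.

```python
# Sketch of reward-gated associative learning between an "intention" node
# and a perceptual "condition of satisfaction" field (assumed toy model).
import random

random.seed(0)

FIELD_SIZE = 20    # number of perceptual feature sites (assumed)
LEARN_RATE = 0.5   # reward-gated Hebbian learning rate (assumed)
THRESHOLD = 0.3    # post-learning detection threshold (assumed)

# Association weights from the intention node onto the perceptual field.
weights = [0.0] * FIELD_SIZE

def perceptual_input(goal_achieved):
    """Toy sensor pattern: low noise everywhere, plus an activation bump
    around sites 8-12 when the goal state actually holds."""
    field = [random.uniform(0.0, 0.1) for _ in range(FIELD_SIZE)]
    if goal_achieved:
        for i in range(8, 13):
            field[i] += 1.0
    return field

# Training: an external reward gates Hebbian strengthening whenever the
# intention is active and goal achievement is externally validated.
for trial in range(10):
    intention = 1.0   # the action's intention node is active
    reward = 1.0      # external validation of success
    field = perceptual_input(goal_achieved=True)
    for i in range(FIELD_SIZE):
        weights[i] += LEARN_RATE * reward * intention * field[i]
        weights[i] = min(weights[i], 1.0)  # simple saturation

def condition_satisfied(field):
    """After learning, match the current percept against the learned
    weight pattern without any external reward signal."""
    overlap = sum(w * f for w, f in zip(weights, field)) / sum(weights)
    return overlap > THRESHOLD

print(condition_satisfied(perceptual_input(goal_achieved=True)))   # True
print(condition_satisfied(perceptual_input(goal_achieved=False)))  # False
```

In the paper's setting the learned mapping links each intended action to its expected perceptual outcome; here that is reduced to a single weight vector and a normalized-overlap detector, which is the simplest form the gated Hebbian rule can take.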
ISSN: 2081-4836
DOI: 10.1515/pjbr-2015-0011