Improving Generalization in Aerial and Terrestrial Mobile Robots Control Through Delayed Policy Learning


Bibliographic Details
Published in: 2024 IEEE 20th International Conference on Automation Science and Engineering (CASE), pp. 1028 - 1033
Main Authors: Grando, Ricardo B., Steinmetz, Raul, Kich, Victor A., Kolling, Alisson H., Furik, Pablo M., de Jesus, Junior C., Guterres, Bruna V., Gamarra, Daniel T., Guerra, Rodrigo S., Drews, L. J.
Format: Conference Proceeding
Language:English
Published: IEEE, 28-08-2024
Description
Summary: Deep Reinforcement Learning (DRL) has emerged as a promising approach to enhancing motion control and decision-making across a wide range of robotic applications. While prior research has demonstrated the efficacy of DRL algorithms in autonomous mapless navigation for aerial and terrestrial mobile robots, these methods often suffer from poor generalization when faced with unknown tasks and environments. This paper explores the impact of the Delayed Policy Updates (DPU) technique on generalization to new situations and on the overall performance of agents. Our analysis of DPU for aerial and terrestrial mobile robots in four simulated environments shows that the technique substantially mitigates poor generalization and accelerates learning, improving the agents' efficiency across diverse tasks and previously unseen scenarios.
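For context, the Delayed Policy Updates technique discussed in the abstract is the mechanism popularized by the TD3 algorithm: the critic (value network) is updated at every training step, while the actor (policy) and target networks are updated only once every few critic updates. The sketch below illustrates just that update schedule; the function and parameter names (`train`, `policy_delay`) are illustrative and not taken from the paper.

```python
def train(num_steps: int, policy_delay: int = 2):
    """Skeleton training loop showing the DPU update schedule only."""
    critic_updates = 0
    actor_updates = 0
    for step in range(1, num_steps + 1):
        # The critic is updated at every training step.
        critic_updates += 1
        # The actor (and target networks) update only every `policy_delay`
        # steps, letting the value estimate stabilize before the policy
        # is adjusted to follow it.
        if step % policy_delay == 0:
            actor_updates += 1
    return critic_updates, actor_updates

# With policy_delay=2, the actor receives half as many updates as the critic.
critic, actor = train(1000, policy_delay=2)
```

With `policy_delay=2` (the TD3 default), 1000 training steps yield 1000 critic updates but only 500 actor updates.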
ISSN:2161-8089
DOI:10.1109/CASE59546.2024.10711561