Path Planning Based on Deep Reinforcement Learning for Autonomous Underwater Vehicles Under Ocean Current Disturbance

Bibliographic Details
Published in: IEEE Transactions on Intelligent Vehicles, Vol. 8, No. 1, pp. 108-120
Main Authors: Chu, Zhenzhong; Wang, Fulun; Lei, Tingjun; Luo, Chaomin
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01-01-2023
Description
Summary: This paper studies the path planning problem of an underactuated autonomous underwater vehicle (AUV) under ocean current disturbance. To improve the AUV's path planning capability in unknown environments, a deep reinforcement learning (DRL) path planning method based on the double deep Q-network (DDQN) is proposed. The network is built on an improved convolutional neural network with two input layers, allowing it to process high-dimensional environment observations. Considering the maneuverability of the underactuated AUV under current disturbance, and in particular the effect of ocean currents in unknown environments, a dynamic and composite reward function is designed so that the AUV reaches its destination while avoiding obstacles. Finally, the path planning ability of the proposed method in unknown environments is validated through simulation analysis and comparison studies.
ISSN: 2379-8858, 2379-8904
DOI: 10.1109/TIV.2022.3153352
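
The summary describes a DDQN agent whose Q-network has two input layers and which is trained with a composite reward, but the record gives no implementation details. The sketch below is a minimal, hypothetical illustration of that structure, assuming the two inputs are a local environment map and a low-dimensional AUV/current state vector; the layer sizes, action count, and reward terms are illustrative assumptions, not the authors' design.

```python
# Hypothetical sketch of a two-input Q-network and a double-DQN update
# (PyTorch). All dimensions and reward terms are assumed for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoInputQNet(nn.Module):
    """Q-network with two input layers: a convolutional branch for the local
    environment map and a fully connected branch for the AUV/current state."""
    def __init__(self, map_channels=1, state_dim=6, n_actions=8):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(map_channels, 16, kernel_size=3, stride=1, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)), nn.Flatten(),
        )
        self.state_fc = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU())
        self.head = nn.Sequential(
            nn.Linear(32 * 4 * 4 + 64, 128), nn.ReLU(),
            nn.Linear(128, n_actions),
        )

    def forward(self, env_map, auv_state):
        z = torch.cat([self.conv(env_map), self.state_fc(auv_state)], dim=1)
        return self.head(z)

def composite_reward(dist_to_goal, prev_dist, collided, reached, step_cost=0.01):
    """Hypothetical composite reward: progress toward the goal, a per-step
    penalty, a collision penalty, and a bonus for reaching the destination."""
    r = (prev_dist - dist_to_goal) - step_cost
    if collided:
        r -= 1.0
    if reached:
        r += 1.0
    return r

def ddqn_loss(online, target, batch, gamma=0.99):
    """Double-DQN target: the online net selects the greedy next action,
    the target net evaluates it."""
    env_map, auv_state, action, reward, next_map, next_state, done = batch
    q = online(env_map, auv_state).gather(1, action.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        next_action = online(next_map, next_state).argmax(dim=1, keepdim=True)
        next_q = target(next_map, next_state).gather(1, next_action).squeeze(1)
        td_target = reward + gamma * (1.0 - done) * next_q
    return F.smooth_l1_loss(q, td_target)

# Minimal usage with random tensors, just to show the shapes.
online, target = TwoInputQNet(), TwoInputQNet()
target.load_state_dict(online.state_dict())
B = 32
batch = (torch.randn(B, 1, 32, 32), torch.randn(B, 6),
         torch.randint(0, 8, (B,)), torch.randn(B),
         torch.randn(B, 1, 32, 32), torch.randn(B, 6),
         torch.zeros(B))
loss = ddqn_loss(online, target, batch)
```

The double-DQN detail lives in ddqn_loss: the online network chooses the next action and the target network scores it, which is what distinguishes DDQN from vanilla DQN and reduces Q-value overestimation.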