A Deep Reinforcement Learning Based Motion Cueing Algorithm for Vehicle Driving Simulation

Bibliographic Details
Published in: IEEE Transactions on Vehicular Technology, Vol. 73, No. 7, pp. 9696-9705
Main Authors: Scheidel, Hendrik; Asadi, Houshyar; Bellmann, Tobias; Seefried, Andreas; Mohamed, Shady; Nahavandi, Saeid
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01-07-2024
Description
Summary: Motion cueing algorithms (MCA) are used to control the movement of motion simulation platforms (MSP) so that the motion perception of a real vehicle driver is reproduced as accurately as possible without exceeding the limits of the MSP's workspace. Existing approaches either produce suboptimal results due to filtering, linearization, or simplifications, or require computation times that exceed the real-time requirements of a closed-loop application. This work presents a new solution to the motion cueing problem in which, instead of a human designer specifying the principles of the MCA, an artificial intelligence (AI) learns the optimal motion by trial and error in interaction with the MSP. To achieve this, a well-established deep reinforcement learning (RL) algorithm is applied, in which an agent interacts with an environment, directly controlling a simulated MSP and receiving feedback on its performance. The RL algorithm used is proximal policy optimization (PPO), in which both the value function and the policy corresponding to the control strategy are learned and represented by artificial neural networks (ANN). The approach is implemented in Python, and its functionality is demonstrated on the practical example of pre-recorded lateral maneuvers. The subsequent validation shows that the RL algorithm is able to learn the control strategy and improves the quality of the immersion compared to an established method: the perceived motion signals, determined by a model of the vestibular system, are reproduced more accurately, and the resources of the MSP are used more economically.
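
To make the training setup described in the summary concrete, the following is a minimal sketch of how a PPO agent could be trained to control a simulated motion platform on a pre-recorded lateral maneuver. It assumes a Gymnasium-style environment and the Stable-Baselines3 PPO implementation; the MotionCueingEnv class, its simple tracking-plus-workspace reward, and the synthetic reference signal are illustrative placeholders, not the authors' actual MSP simulation, vestibular-system model, or reward design.

```python
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import PPO


class MotionCueingEnv(gym.Env):
    """Toy motion cueing environment (hypothetical, greatly simplified).

    Observation: reference lateral acceleration from a pre-recorded maneuver
    plus the current platform position and velocity. Action: normalized
    platform acceleration command. Reward: penalizes the mismatch between
    reference and commanded acceleration (a stand-in for the vestibular-model
    comparison in the paper) and penalizes leaving the workspace.
    """

    def __init__(self, reference: np.ndarray, dt: float = 0.01, limit: float = 0.5):
        super().__init__()
        self.reference = reference          # pre-recorded lateral acceleration [m/s^2]
        self.dt = dt                        # simulation step [s]
        self.limit = limit                  # workspace half-width [m]
        self.action_space = spaces.Box(low=-1.0, high=1.0, shape=(1,), dtype=np.float32)
        self.observation_space = spaces.Box(low=-np.inf, high=np.inf, shape=(3,), dtype=np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.t = 0
        self.pos = 0.0
        self.vel = 0.0
        return self._obs(), {}

    def _obs(self):
        return np.array([self.reference[self.t], self.pos, self.vel], dtype=np.float32)

    def step(self, action):
        acc = float(action[0]) * 5.0        # scale normalized action to m/s^2
        self.vel += acc * self.dt
        self.pos += self.vel * self.dt
        # Reward: track the reference acceleration while staying inside the workspace.
        tracking_error = (acc - self.reference[self.t]) ** 2
        workspace_penalty = 10.0 if abs(self.pos) > self.limit else 0.0
        reward = -tracking_error - workspace_penalty
        self.t += 1
        terminated = self.t >= len(self.reference) - 1
        return self._obs(), reward, terminated, False, {}


if __name__ == "__main__":
    # Placeholder maneuver: a smooth, decaying lateral acceleration oscillation.
    t = np.linspace(0.0, 10.0, 1000)
    reference = 2.0 * np.sin(0.5 * np.pi * t) * np.exp(-0.3 * t)

    env = MotionCueingEnv(reference)
    # PPO with MLP policy and value networks, mirroring the general setup in the paper.
    model = PPO("MlpPolicy", env, verbose=1)
    model.learn(total_timesteps=50_000)
```

In this sketch the reward is a plain squared tracking error; the paper instead compares perceived motion signals computed by a vestibular-system model, which would replace the tracking term if such a model were available.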
ISSN: 0018-9545, 1939-9359
DOI: 10.1109/TVT.2024.3375941