Influence of the Reward Function on the Selection of Reinforcement Learning Agents for Hybrid Electric Vehicles Real-Time Control

Bibliographic Details
Published in: Energies (Basel) Vol. 16; no. 6; p. 2749
Main Authors: Acquarone, Matteo; Maino, Claudio; Misul, Daniela; Spessa, Ezio; Mastropietro, Antonio; Sorrentino, Luca; Busto, Enrico
Format: Journal Article
Language: English
Published: Basel: MDPI AG, 01-03-2023
Description
Summary: The real-time control optimization of electrified vehicles is one of the most demanding tasks in the innovation progress of low-emission mobility. Intelligent energy management systems represent interesting solutions to complex control problems, such as maximizing the fuel economy of hybrid electric vehicles. In recent years, reinforcement-learning-based controllers have been shown to outperform well-established real-time strategies for specific applications. Nevertheless, the effects produced by variations in the reward function have not been thoroughly analyzed, and the potential of adopting a given RL agent under different testing conditions is still to be assessed. In the present paper, the performance of different agents, i.e., Q-learning, deep Q-network (DQN), and double deep Q-network (DDQN), is investigated for a full hybrid electric vehicle over multiple driving missions, introducing two distinct reward functions. The first function aims to guarantee a charge-sustaining policy whilst reducing the fuel consumption (FC) as much as possible; the second, in turn, aims to minimize the fuel consumption whilst ensuring an acceptable battery state of charge (SOC) by the end of the mission. The novelty of the results lies in demonstrating a non-trivial inability of DQN and DDQN to outperform traditional Q-learning when a SOC-oriented reward is considered. On the contrary, optimal fuel consumption reductions are attained by DQN and DDQN when the more complex FC-oriented minimization is deployed. This outcome is particularly evident when the RL agents are trained on regulatory driving cycles and tested on unknown real-world driving missions.
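The abstract contrasts a SOC-oriented reward with an FC-oriented reward but does not report the exact formulations. The Python sketch below only illustrates how such per-step rewards could be shaped for an HEV energy-management agent; the function names, weights (alpha, beta, gamma_end), and SOC targets are assumptions introduced here for illustration and are not the authors' equations.

```python
# Illustrative sketch only: the paper does not publish these exact formulas.
# Weights, targets, and signs are assumptions chosen to mirror the two goals
# described in the abstract (charge sustenance vs. fuel-consumption minimization).

def soc_oriented_reward(fuel_g: float, soc: float,
                        soc_target: float = 0.6,
                        alpha: float = 1.0, beta: float = 50.0) -> float:
    """Reward that primarily enforces a charge-sustaining policy,
    penalizing fuel use only as a secondary term."""
    soc_penalty = beta * abs(soc - soc_target)   # keep SOC near its target
    fuel_penalty = alpha * fuel_g                # mild pressure on fuel use
    return -(soc_penalty + fuel_penalty)


def fc_oriented_reward(fuel_g: float, soc: float, done: bool,
                       soc_min: float = 0.5,
                       alpha: float = 10.0, gamma_end: float = 100.0) -> float:
    """Reward that primarily minimizes fuel consumption, adding a
    terminal penalty if the end-of-mission SOC is not acceptable."""
    reward = -alpha * fuel_g                     # main objective: fuel economy
    if done and soc < soc_min:                   # end-of-mission SOC constraint
        reward -= gamma_end * (soc_min - soc)
    return reward
```

Under these assumed weights, a step with soc = 0.58 and fuel_g = 0.4 g would give soc_oriented_reward = -(50*0.02 + 0.4) = -1.4, i.e., the SOC deviation dominates the signal, which is the behavior the abstract attributes to the first, charge-sustaining reward.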
ISSN: 1996-1073
DOI: 10.3390/en16062749