Impact of Activation Functions in Deep Learning Based State of Charge Estimation for Batteries


Bibliographic Details
Published in: 2024 IEEE 4th International Conference on Sustainable Energy and Future Electric Transportation (SEFET), pp. 1-6
Main Authors: Kumar, Pradeesh Prem, Satheesh, Rahul, Alhelou, Hassan Haes
Format: Conference Proceeding
Language: English
Published: IEEE, 31-07-2024
Description
Summary: Deep learning (DL) models are becoming popular for estimating State of Charge (SOC) in batteries. Because these models excel at finding complex patterns in data, they do not require a full physical understanding of battery behavior, which makes them easier to implement than other methods. Within DL, activation functions are pivotal: they introduce non-linearity, enabling the capture of complex data relationships. This study systematically examines the impact of various activation functions on the performance of a DL model, specifically a Deep LSTM. Notable differences in model performance based on activation function choice are revealed: Mean Absolute Errors (MAE) of 1.91%, 1.99%, and 2.03% are reported for models trained with SELU, Leaky ReLU, and Tanh activations, respectively. The SELU-trained model achieves the highest accuracy. However, the Tanh model significantly outperforms the others in computational efficiency per step, particularly in GPU-enabled environments: it requires only 4 ms/step, approximately 71% faster than its nearest counterpart. This efficiency is critical for the periodic retraining needed to accommodate battery aging effects on SOC predictions over time, as reduced training time lowers computational and deployment costs.
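For orientation, the three activation functions compared in the summary can be sketched in NumPy as below. This is an illustrative definition only, not code from the paper; the SELU constants are the standard values from Klambauer et al. (2017), and the Leaky ReLU slope of 0.01 is a common default that the paper may not use.

```python
import numpy as np

# Standard SELU constants (Klambauer et al., 2017) -- an assumption here,
# since the paper's exact hyperparameters are not given in this record.
SELU_ALPHA = 1.6732632423543772
SELU_SCALE = 1.0507009873554805

def selu(x):
    # Scaled Exponential Linear Unit: scaled identity for x > 0,
    # scaled exponential for x <= 0 (self-normalizing behavior).
    return SELU_SCALE * np.where(x > 0, x, SELU_ALPHA * (np.exp(x) - 1.0))

def leaky_relu(x, negative_slope=0.01):
    # Leaky ReLU: identity for x > 0, small non-zero slope for x <= 0.
    return np.where(x > 0, x, negative_slope * x)

def tanh(x):
    # Tanh: bounded in (-1, 1) and cheap to evaluate, consistent with the
    # summary's note on its per-step computational efficiency.
    return np.tanh(x)

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
for name, fn in [("SELU", selu), ("Leaky ReLU", leaky_relu), ("Tanh", tanh)]:
    print(name, np.round(fn(x), 4))
```

The non-linear negative branches are what let a Deep LSTM capture the complex, non-linear relationships the summary refers to; a purely linear activation would collapse stacked layers into a single linear map.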
DOI: 10.1109/SEFET61574.2024.10718061