AI-enabled and multimodal data driven smart health monitoring of wind power systems: A case study
Published in: Advanced Engineering Informatics, Vol. 56, p. 102018
Main Authors:
Format: Journal Article
Language: English
Published: Elsevier Ltd, 01-04-2023
Summary: Advances in AI have made it possible to detect faults in industrial components through deep learning. Deep-learning-based methods have likewise emerged for fan blade fault detection, such as neural network models that use the appearance or sound of the blades. However, detection models trained on a single data type often suffer from limitations such as low accuracy and overfitting, and fan blade detection is no exception. Multimodal data fusion models, by contrast, tend to be more stable, and blade diagnosis lends itself to multiple modalities, including image, sound, and vibration. To improve the accuracy of fan blade fault diagnosis, this article proposes a multimodal double-layer detection system (MTDS) based on decision-level and feature-level fusion. The system comprises a wind turbine simulation platform and a multimodal detection system. It collects data from the simulated wind turbine in three modalities: blade images captured by unmanned aerial vehicle photography, blade vibration signals measured by electronic vibrators, and blade sound signals recorded by microphones. The highly correlated sound and vibration data are fused at the feature level, and a detection model for the mixed sound-vibration mode is implemented with a sound-vibration CNN (SV-CNN) proposed in this case study. A detection model for the image mode is then trained on the blade images using a Convolutional Block Attention Module ResNet (CBAM-ResNet) network. Finally, the outputs of the two modal models are fed into a perceptron to obtain the final prediction, implementing decision-level fusion and thereby completing the multimodal fan blade detection of the MTDS.
ISSN: 1474-0346, 1873-5320
DOI: 10.1016/j.aei.2023.102018
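The decision-level fusion step described in the summary can be sketched as follows. This is a minimal illustrative toy, not the authors' implementation: the real SV-CNN and CBAM-ResNet models are deep networks, so here they are stood in for by placeholder scoring functions, and the perceptron weights, bias, and function names are all assumptions chosen for the example.

```python
# Hedged sketch of the MTDS decision-level fusion: two modality-specific
# models (image branch and sound-vibration branch) each emit a fault score,
# and a simple perceptron combines the two scores into a final label.
# All names, weights, and thresholds here are illustrative assumptions.

def sv_cnn_score(sound_vibration_prob):
    # Placeholder for the SV-CNN branch; assumed to already output a
    # fault probability in [0, 1] from fused sound/vibration features.
    return sound_vibration_prob

def cbam_resnet_score(image_prob):
    # Placeholder for the CBAM-ResNet image branch, likewise assumed to
    # output a fault probability in [0, 1].
    return image_prob

def perceptron_fuse(score_a, score_b, w=(0.6, 0.4), bias=-0.5):
    # Weighted vote over the two modality scores; predicts "fault" (1)
    # when the combined evidence crosses the threshold set by the bias.
    activation = w[0] * score_a + w[1] * score_b + bias
    return 1 if activation >= 0 else 0

# Example: strong acoustic/vibration evidence, moderate visual evidence.
label = perceptron_fuse(sv_cnn_score(0.9), cbam_resnet_score(0.7))  # -> 1
```

In a trained system the perceptron's weights and bias would be learned from validation data rather than fixed by hand, which is what lets decision-level fusion down-weight the less reliable modality automatically.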