Optimizing Large Gravitational-Wave Classifier Through a Custom Cross-System Mirrored Strategy Approach

| Published in: | 2022 IEEE International Conference on Data Science and Information System (ICDSIS), pp. 1 - 7 |
| --- | --- |
| Main Authors: | |
| Format: | Conference Proceeding |
| Language: | English |
| Published: | IEEE, 29-07-2022 |
| Summary: | The recent detections of gravitational waves from merging binary black holes have opened the door to a new era of multi-messenger astrophysics. Sensitive gravitational-wave detectors such as the Laser Interferometer Gravitational-wave Observatory (LIGO) are able to observe these signals, thereby further confirming Einstein's general theory of relativity. However, detecting faint gravitational-wave (GW) signals remains a major challenge, and quickly training a good model on an enormous training data set is an even bigger one. To overcome these challenges, we propose a system that uses a cross-system mirrored strategy (distributed learning) to train the model in minimal time. To detect the faintest signals, we use 2D CNNs, converting the 1D time-series data into a 2D spectrum (spectrogram) with Fourier transforms in order to extract as many features as possible. With distributed learning, we concurrently train local models on different devices to obtain their final local weights, then aggregate these local weights on a single system to produce one global model. Using this training technique, we were not only able to manage very large datasets (hundreds of GBs) comfortably but also to finish model training 4.5 times faster than prior state-of-the-art models. |
| DOI: | 10.1109/ICDSIS55133.2022.9915926 |
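
The summary describes a two-part pipeline: converting 1D detector strain into 2D spectrograms for a 2D CNN, and training replicas of that CNN across systems under a mirrored strategy. The sketch below is a minimal, hypothetical illustration of that pipeline, assuming the "cross-system mirrored strategy" maps to TensorFlow's `tf.distribute.MultiWorkerMirroredStrategy` and using SciPy's short-time Fourier transform for the 1D-to-2D conversion; the synthetic data, network architecture, and hyperparameters are placeholders rather than the authors' actual configuration.

```python
import numpy as np
from scipy.signal import stft
import tensorflow as tf


def to_spectrogram(strain, fs=2048, nperseg=128):
    """Convert a 1D strain time series into a 2D magnitude spectrogram
    via the short-time Fourier transform."""
    _, _, zxx = stft(strain, fs=fs, nperseg=nperseg)
    return np.abs(zxx).astype(np.float32)


# Synthetic placeholder data standing in for labelled strain segments;
# the real dataset (hundreds of GBs) would be sharded and streamed from disk.
rng = np.random.default_rng(0)
strains = rng.normal(size=(256, 2048)).astype(np.float32)
labels = rng.integers(0, 2, size=256).astype(np.float32)
specs = np.stack([to_spectrogram(s) for s in strains])[..., np.newaxis]

# Assumed mapping of the "cross-system mirrored strategy": TensorFlow's
# MultiWorkerMirroredStrategy keeps one model replica per worker and
# aggregates gradients with all-reduce. Without a TF_CONFIG environment
# variable it falls back to a single-worker run, so this script stays runnable.
strategy = tf.distribute.MultiWorkerMirroredStrategy()

with strategy.scope():
    # Small 2D CNN classifying spectrograms as signal vs. noise
    # (illustrative architecture, not the one reported in the paper).
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=specs.shape[1:]),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])

model.fit(specs, labels, epochs=2, batch_size=32)
```

In a genuine multi-worker run, each machine would set its own TF_CONFIG cluster entry and read only its shard of the dataset, which is what keeps the per-system memory and wall-clock cost manageable. The summary also mentions collecting the final local weights from each device and combining them on a single system into one global model; the abstract does not spell out the aggregation rule, so the element-wise average of corresponding layer weights below is shown purely as an illustration of what such a step could look like.

```python
import numpy as np

def aggregate_weights(local_models):
    """Element-wise average of corresponding layer weights across several
    trained Keras models, yielding one set of global weights (illustrative)."""
    per_layer = zip(*[m.get_weights() for m in local_models])
    return [np.mean(np.stack(layer), axis=0) for layer in per_layer]

# Hypothetical usage: global_model.set_weights(aggregate_weights([m1, m2, m3]))
```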