Development of Trustable Deep Learning Model in Remote Sensing through Explainable-AI Method Selection
Remote sensing sensors are a pivotal medium for monitoring remote objects at security-critical government organizations, e.g., the DOD, DHS, and EPA. Historically, human operators relied on manual mapping to monitor or detect objects in remote sensing data such as multispectral imagery. Deep learning algorithms can automatically learn multispectral, multi-temporal, and multi-modal features of remote sensing datasets while detecting objects, but they are widely regarded as black-box models because their decision-making lacks transparency. In this paper, we compare two explainable AI methods, Grad-CAM and Score-CAM, for interpreting model decisions on two multispectral datasets, UC-Merced and EuroSat, which contain land-use classes such as airplanes and buildings and land-cover classes such as forests and rivers. In our empirical analysis, Score-CAM outperformed Grad-CAM on both datasets according to three evaluation metrics: ROAD, SIC, and infidelity.
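For readers who want to reproduce this kind of comparison, the hedged sketch below shows how the two CAM variants can be generated for an image classifier and scored with a ROAD-style metric, using the open-source pytorch-grad-cam library. The ResNet-18 backbone, the 10-class head, the chosen target layer, and the random input are illustrative assumptions, not the paper's actual setup.

```python
# Minimal sketch (not the authors' code): comparing Grad-CAM and Score-CAM
# heatmaps and scoring them with a ROAD-based metric, using the open-source
# pytorch-grad-cam package (pip install grad-cam). The ResNet-18 backbone,
# the 10-class head, and the random input are illustrative assumptions.
import torch
from torchvision.models import resnet18
from pytorch_grad_cam import GradCAM, ScoreCAM
from pytorch_grad_cam.metrics.road import ROADCombined
from pytorch_grad_cam.utils.model_targets import ClassifierOutputTarget

# Hypothetical classifier: ResNet-18 with a 10-class head (e.g., EuroSat's
# ten land-use/land-cover classes).
model = resnet18(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, 10)
model.eval()

# The last convolutional block is a common choice of target layer for CAMs.
target_layers = [model.layer4[-1]]

# One random RGB tensor standing in for a preprocessed image patch.
input_tensor = torch.randn(1, 3, 224, 224)
targets = [ClassifierOutputTarget(3)]  # explain an arbitrary class index

# ROAD removes the most/least relevant pixels and measures the resulting
# confidence change; the combined score summarizes heatmap faithfulness.
cam_metric = ROADCombined(percentiles=[20, 40, 60, 80])

# Both CAM variants expose the same interface, so a side-by-side comparison
# is a simple loop.
for cam_cls in (GradCAM, ScoreCAM):
    with cam_cls(model=model, target_layers=target_layers) as cam:
        grayscale_cams = cam(input_tensor=input_tensor, targets=targets)  # (B, H, W)
        score = cam_metric(input_tensor, grayscale_cams, targets, model)
        print(f"{cam_cls.__name__}: ROAD-combined score = {float(score[0]):.4f}")
```

The other two metrics from the abstract are not bundled with pytorch-grad-cam: infidelity is available as captum.metrics.infidelity in the Captum library, while SIC would need a separate implementation.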
Published in: | 2023 IEEE 14th Annual Ubiquitous Computing, Electronics & Mobile Communication Conference (UEMCON), pp. 0261 - 0267 |
---|---|
Main Authors: | , |
Format: | Conference Proceeding |
Language: | English |
Published: | IEEE, 12-10-2023 |
DOI: | 10.1109/UEMCON59035.2023.10316012 |