Hierarchical Attention-Based Multiple Instance Learning Network for Patient-Level Lung Cancer Diagnosis

Bibliographic Details
Published in: 2020 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), pp. 1156–1160
Main Authors: Wang, Qingfeng, Zhou, Ying, Huang, Jun, Liu, Zhiqin, Li, Ling, Xu, Weiyun, Cheng, Jie-Zhi
Format: Conference Proceeding
Language: English
Published: IEEE, 16-12-2020
Description
Summary: Lung cancer is the leading cause of cancer-related deaths worldwide, and its mortality could be significantly reduced if small malignant lung nodules could be accurately diagnosed at an early stage. In this paper, we propose a hierarchical attention-based multiple instance learning (HA-MIL) framework for patient-level lung cancer diagnosis that introduces two cascaded attention mechanisms, one at the nodule level and the other at the attribute level. The HA-MIL framework first aggregates important attribute representations into a nodule representation and then aggregates important nodule representations into a lung cancer representation. Experiments on the public Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) dataset showed that HA-MIL performed significantly better than previous approaches such as higher-order transfer learning, instance-space MIL, and embedding-space MIL, demonstrating the effectiveness of hierarchical multiple instance learning based on two-level attention. Analysis of the results suggested that HA-MIL also identified the key nodules and attributes through higher attention weights, making the model's decisions more interpretable.
DOI:10.1109/BIBM49941.2020.9313417
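The abstract gives no implementation details, so the following is only a minimal PyTorch sketch of the two-level aggregation it describes, written in the style of standard attention-based MIL pooling (Ilse et al., 2018). The class names, embedding dimensions, and the binary output head are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class AttentionPool(nn.Module):
    """Attention-based MIL pooling: scores each instance, softmax-normalizes
    the scores, and returns the weighted sum as the bag representation."""
    def __init__(self, dim: int, hidden: int = 64):
        super().__init__()
        self.score = nn.Sequential(
            nn.Linear(dim, hidden),
            nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x: torch.Tensor):
        # x: (num_instances, dim)
        a = torch.softmax(self.score(x), dim=0)   # attention weights, (num_instances, 1)
        return (a * x).sum(dim=0), a              # bag representation (dim,), weights

class HAMILSketch(nn.Module):
    """Hypothetical two-level HA-MIL: attribute-level attention builds each
    nodule representation, nodule-level attention builds the patient-level
    representation, which feeds a benign/malignant classifier (assumed binary)."""
    def __init__(self, dim: int = 32):
        super().__init__()
        self.attr_pool = AttentionPool(dim)    # attributes -> nodule
        self.nodule_pool = AttentionPool(dim)  # nodules -> patient
        self.classifier = nn.Linear(dim, 2)

    def forward(self, patient):
        # patient: list of tensors, one per nodule, each (num_attributes, dim)
        nodules, attr_weights = [], []
        for attrs in patient:
            z, a = self.attr_pool(attrs)       # aggregate attribute embeddings
            nodules.append(z)
            attr_weights.append(a)
        bag = torch.stack(nodules)             # (num_nodules, dim)
        patient_repr, nodule_weights = self.nodule_pool(bag)
        return self.classifier(patient_repr), nodule_weights, attr_weights

# Usage with random data: a patient with 3 nodules, 8 attribute embeddings each.
model = HAMILSketch()
patient = [torch.randn(8, 32) for _ in range(3)]
logits, nodule_w, attr_w = model(patient)
```

The returned nodule- and attribute-level weights correspond to the interpretability claim in the abstract: higher weights point to the nodules and attributes that drove the patient-level prediction.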