PET Guided Attention Network for Lung Tumor Segmentation from PET/CT images that accommodates for missing PET images
Open access
Author
Date
2020-05-08
Type
Master Thesis
ETH Bibliography
yes
Abstract
CT and PET scanning are vital imaging protocols for the detection and diagnosis of lung cancer. The two modalities provide complementary information on the status of tumors: CT provides structural and anatomical information, while PET provides functional and metabolic information. Recently, machine learning-based Computer-Aided Detection and Diagnosis (CAD) systems have been devised to automate lung cancer diagnosis by taking these modalities in paired form, i.e., a corresponding PET and CT scan for each case. However, given the high cost of PET scanning, paired data are often unavailable. In such scenarios, CAD systems usually discard either the surplus CT scans for which PET is missing, or the few available PET scans; they thus fail to make the best use of incomplete PET/CT data. In this thesis, we propose a novel visual soft-attention-based deep learning model that makes the best use of incomplete PET/CT data. The proposed model benefits from the available PET scans but does not require a PET scan for every CT scan. We evaluate our model and compare it against several baseline solutions on an in-house dataset of annotated whole-body PET/CT scans of lung cancer patients. We demonstrate that our single model effectively subsumes two separate models trained on CT-only and paired PET/CT data: when PET is available, it performs on par with a model trained on paired PET/CT data, and when PET is missing, the same model performs on par with a model trained on CT data alone. Additionally, we show the value of our model in scenarios where PET scans are very scarce compared to CT scans. Through a thorough ablation study, we demonstrate that our model makes efficient use of incomplete PET/CT data. We attribute the model's robustness to the proposed attention mechanism and present its effectiveness through qualitative results.
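To make the idea of PET-guided soft attention with a missing-PET fallback concrete, below is a minimal sketch assuming a PyTorch implementation. The module name (PETGuidedAttention), layer choices, and tensor shapes are illustrative assumptions, not the architecture used in the thesis; they only demonstrate the general pattern of gating CT feature maps with a soft attention map derived from PET and reducing to a CT-only pass when PET is absent.

# Hypothetical sketch: PET-derived soft attention over CT features,
# with an identity fallback when the PET volume is missing.
# Names and layer sizes are assumptions for illustration only.
from typing import Optional

import torch
import torch.nn as nn


class PETGuidedAttention(nn.Module):
    """Gate CT feature maps with a soft attention map computed from PET.

    If PET is unavailable, the block passes the CT features through
    unchanged, so one model can serve both paired and CT-only inputs.
    """

    def __init__(self, ct_channels: int, pet_channels: int = 1):
        super().__init__()
        self.pet_gate = nn.Sequential(
            nn.Conv3d(pet_channels, ct_channels, kernel_size=3, padding=1),
            nn.Sigmoid(),  # soft attention weights in [0, 1]
        )

    def forward(self, ct_feat: torch.Tensor, pet: Optional[torch.Tensor] = None) -> torch.Tensor:
        if pet is None:
            # Missing PET: identity gate over CT features.
            return ct_feat
        # Assumes PET has been resampled to the CT feature resolution.
        attn = self.pet_gate(pet)            # (B, C, D, H, W) attention map
        return ct_feat + ct_feat * attn      # residual gating keeps the CT signal


if __name__ == "__main__":
    block = PETGuidedAttention(ct_channels=16)
    ct_features = torch.randn(1, 16, 8, 32, 32)
    pet_volume = torch.randn(1, 1, 8, 32, 32)
    print(block(ct_features, pet_volume).shape)  # paired PET/CT input
    print(block(ct_features, None).shape)        # CT-only input

The residual form (ct_feat + ct_feat * attn) is one common way to keep the CT signal intact while letting PET amplify tumor-relevant regions; the thesis's actual attention mechanism may differ.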
Permanent link
https://doi.org/10.3929/ethz-b-000420233
Publication status
published
Publisher
ETH Zurich
Subject
Machine Learning; Biomedical Image Segmentation; Attention Networks; Missing PET/CT Images
Organisational unit
09670 - Vogt, Julia / Vogt, Julia