Deep Generative Model-Driven Multimodal Prostate Segmentation in Radiotherapy
Raabid Hussain, Alain Lalande, Paul Walker, Gilles Créhange, Kibrom Berihu Girum
Subjects: Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV); Life Sciences [q-bio]/Bioengineering/Nuclear medicine [SDV.IB.MN]. Keywords: prostate segmentation; prostate cancer; radiation therapy; convolutional neural network; deep learning; generative model; shape modeling; transfer learning; feature extraction; embedding; pattern recognition; MRI; CT
Deep learning has shown unprecedented success in a variety of applications, such as computer vision and medical image analysis. However, there is still potential to improve segmentation in multimodal images by embedding prior knowledge via learning-based shape modeling and registration, so that the network learns the modality-invariant anatomical structure of organs. In radiotherapy, for example, automatic prostate segmentation from T2-weighted MR or CT images is essential for prostate cancer diagnosis, therapy, and post-therapy assessment. In this paper, we present DGMNet, a fully automatic deep generative model-driven multimodal prostate segmentation method based on a convolutional neural network. The novelty of our method lies in its embedded generative neural network for learning-based shape modeling and its ability to adapt to different imaging modalities via learning-based registration. The proposed method comprises a multi-task learning framework that combines convolutional feature extraction with embedded regression- and classification-based shape modeling, enabling the network to predict the deformable shape of an organ. We show that generative neural network-based shape modeling trained on a high-contrast imaging modality (such as MRI) can be applied directly to a low-contrast imaging modality (such as CT) to achieve accurate prostate segmentation. The method was evaluated on MRI and CT datasets acquired at different clinical centers with large variations in contrast and scanning protocols. Experimental results show that our method can automatically and accurately segment the prostate gland in both imaging modalities.
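The abstract describes an embedded shape model whose coefficients are regressed by the network. As a rough illustration only (not the authors' DGMNet implementation, and with all names and data hypothetical), a low-dimensional shape embedding of this kind can be sketched as a linear shape space built from training contours, where a regression head would predict the embedding coefficients:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set: 50 organ contours, each 32 boundary
# points (x, y) flattened into a 64-dimensional vector.
contours = rng.normal(size=(50, 64))

# Build a linear shape space: mean shape plus principal modes (PCA).
mean_shape = contours.mean(axis=0)
centered = contours - mean_shape
# SVD of the centered shapes yields the principal variation modes.
_, _, vt = np.linalg.svd(centered, full_matrices=False)
k = 8                  # keep the first 8 shape modes
basis = vt[:k]         # (8, 64) embedding basis

def embed(shape_vec):
    """Project a contour into the k-dimensional shape embedding."""
    return basis @ (shape_vec - mean_shape)

def reconstruct(coeffs):
    """Map predicted coefficients back to a full contour."""
    return mean_shape + coeffs @ basis

# In a multi-task network, a regression head would predict `coeffs`
# from image features; here we simply round-trip one training shape.
coeffs = embed(contours[0])
recon = reconstruct(coeffs)
print(coeffs.shape, recon.shape)   # (8,) (64,)
```

Because the embedding is learned from shapes rather than image intensities, the same coefficients-to-contour mapping can in principle be reused across modalities, which is the intuition behind applying an MRI-trained shape model to CT.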
year | journal | country | edition | language
---|---|---|---|---
2019-10-17 | | | |