Multimodal Fusion for Enhanced Semantic Segmentation in Brain Tumor Imaging: Integrating Deep Learning and Guided Filtering Via Advanced 3D Semantic Segmentation Architectures

dc.authoridSALEH, ABBADULLAH .H/0000-0003-3019-5833
dc.contributor.authorSaleh, Abbadullah . H.
dc.contributor.authorAtila, Ümit
dc.contributor.authorMenemencioglu, Oguzhan
dc.date.accessioned2024-09-29T15:50:42Z
dc.date.available2024-09-29T15:50:42Z
dc.date.issued2024
dc.departmentKarabük Üniversitesien_US
dc.description.abstractBrain tumor segmentation is paramount in medical diagnostics. This study presents a multistage segmentation model consisting of two main steps. First, the fusion of magnetic resonance imaging (MRI) modalities creates new and more effective tumor imaging modalities. Second, the original and fused modalities are semantically segmented using various modified architectures of the U-Net model. In the first step, a residual network with a multi-scale backbone architecture (Res2Net) and a guided filter are employed for pixel-by-pixel image fusion without requiring any training or learning process. This method captures both detail and base elements from the multimodal images to produce more informative fused images that significantly enhance the segmentation process. Many fusion scenarios were evaluated and analyzed, revealing that the best fusion results are attained when combining T2-weighted (T2) with fluid-attenuated inversion recovery (FLAIR) and T1-weighted contrast-enhanced (T1CE) with FLAIR modalities. In the second step, several models, including the U-Net and many of its modifications (adding attention layers, residual connections, and depthwise separable convolutions), are trained on both the original and fused modalities. Further, a Model Selection-based fusion of these individual models is also considered for further enhancement. In the preprocessing step, the images are cropped to reduce the pixel count and minimize background interference. Experiments on the brain tumor segmentation (BraTS) 2020 dataset were performed to verify the efficiency and accuracy of the proposed methodology. The Model Selection-based fusion model achieved an average Dice score of 88.4%, an individual score of 91.1% for the whole tumor (WT) class, an average sensitivity score of 86.26%, and a specificity score of 91.7%. These results demonstrate the robustness and high performance of the proposed methodology compared to other state-of-the-art methods.en_US
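The abstract's first step combines a base/detail decomposition with guided filtering to fuse two co-registered MRI modalities. The snippet below is a minimal illustrative sketch of that general idea only, assuming a simple two-scale decomposition, a Laplacian-energy saliency measure, and scipy's uniform_filter as the box filter; the paper's actual pipeline drives the fusion with Res2Net feature maps, which is not reproduced here.

# Illustrative sketch, not the authors' exact method: guided-filter-based
# fusion of two normalized MRI slices (e.g., T2 and FLAIR).
import numpy as np
from scipy.ndimage import uniform_filter, laplace

def guided_filter(I, p, r=4, eps=1e-3):
    """Edge-preserving smoothing of p guided by I (box-filter formulation)."""
    size = 2 * r + 1
    mean_I = uniform_filter(I, size)
    mean_p = uniform_filter(p, size)
    corr_Ip = uniform_filter(I * p, size)
    var_I = uniform_filter(I * I, size) - mean_I ** 2
    a = (corr_Ip - mean_I * mean_p) / (var_I + eps)
    b = mean_p - a * mean_I
    return uniform_filter(a, size) * I + uniform_filter(b, size)

def fuse_pair(img1, img2, base_size=15):
    """Pixel-wise fusion: split each image into base and detail layers,
    build saliency-based weights, refine them with the guided filter,
    and recombine the weighted layers."""
    # Two-scale decomposition: smooth base layer plus residual detail layer.
    base1, base2 = uniform_filter(img1, base_size), uniform_filter(img2, base_size)
    det1, det2 = img1 - base1, img2 - base2
    # Assumed saliency measure: local energy of the Laplacian.
    sal1 = uniform_filter(np.abs(laplace(img1)), 7)
    sal2 = uniform_filter(np.abs(laplace(img2)), 7)
    w = (sal1 >= sal2).astype(np.float64)          # binary weight map
    # Refine the weight map at two scales with the guided filter.
    w_base = np.clip(guided_filter(img1, w, r=8, eps=0.1), 0.0, 1.0)
    w_det = np.clip(guided_filter(img1, w, r=4, eps=1e-3), 0.0, 1.0)
    fused_base = w_base * base1 + (1.0 - w_base) * base2
    fused_det = w_det * det1 + (1.0 - w_det) * det2
    return fused_base + fused_det

# Example with random arrays standing in for normalized 240x240 BraTS slices.
t2 = np.random.rand(240, 240)
flair = np.random.rand(240, 240)
fused = fuse_pair(t2, flair)
print(fused.shape)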
dc.description.sponsorshipTÜBİTAK ULAKBİMen_US
dc.description.sponsorshipFunding for Open Access is provided by TÜBİTAK ULAKBİM as part of the Wiley-TÜBİTAK ULAKBİM agreementen_US
dc.identifier.doi10.1002/ima.23152
dc.identifier.issn0899-9457
dc.identifier.issn1098-1098
dc.identifier.issue5en_US
dc.identifier.scopus2-s2.0-85201072717en_US
dc.identifier.scopusqualityQ2en_US
dc.identifier.urihttps://doi.org/10.1002/ima.23152
dc.identifier.urihttps://hdl.handle.net/20.500.14619/3695
dc.identifier.volume34en_US
dc.identifier.wosWOS:001289027900001en_US
dc.identifier.wosqualityN/Aen_US
dc.indekslendigikaynakWeb of Scienceen_US
dc.indekslendigikaynakScopusen_US
dc.language.isoenen_US
dc.publisherWileyen_US
dc.relation.ispartofInternational Journal of Imaging Systems and Technologyen_US
dc.relation.publicationcategoryArticle - International Peer-Reviewed Journal - Institutional Faculty Memberen_US
dc.rightsinfo:eu-repo/semantics/openAccessen_US
dc.subjectguided filteren_US
dc.subjectimage fusionen_US
dc.subjectsemantic segmentationen_US
dc.subjecttumor detectionen_US
dc.subject3D U-Neten_US
dc.subjectRes2Neten_US
dc.titleMultimodal Fusion for Enhanced Semantic Segmentation in Brain Tumor Imaging: Integrating Deep Learning and Guided Filtering Via Advanced 3D Semantic Segmentation Architecturesen_US
dc.typeArticleen_US

Files