Listing by author "Saleh, Abbadullah . H."
Showing 1 - 3 of 3
Item
A Dynamic Circular Hough Transform Based Iris Segmentation
(Springer International Publishing Ag, 2023) Saleh, Abbadullah . H.; Menemencioglu, Oguzhan

Iris segmentation in the presence of eye disease is a very challenging task in computer vision. The current study introduces a novel iris segmentation method based on adaptive illumination correction and a modified circular Hough transform algorithm. Morphological operations are used to create the illumination correction function, while the pupil localization step determines the Hough circle radius range used to obtain the iris circle. The Warsaw BioBase-Disease-Iris dataset V1.0, which contains 684 images, is used to verify the proposed methodology. The results show a true segmentation rate of 90.5%; the main difficulty for iris segmentation is the absence of all or part of the iris in some diseases such as blindness, rubeosis, synechiae, and retinal detachment.

Item
Multimodal Fusion for Enhanced Semantic Segmentation in Brain Tumor Imaging: Integrating Deep Learning and Guided Filtering Via Advanced 3D Semantic Segmentation Architectures
(Wiley, 2024) Saleh, Abbadullah . H.; Atila, Uemit; Menemencioglu, Oguzhan

Brain tumor segmentation is paramount in medical diagnostics. This study presents a multistage segmentation model consisting of two main steps: first, the fusion of magnetic resonance imaging (MRI) modalities to create new, more effective tumor imaging modalities; second, the semantic segmentation of the original and fused modalities using various modified architectures of the U-Net model. In the first step, a residual network with a multi-scale backbone architecture (Res2Net) and a guided filter are employed for pixel-by-pixel image fusion without requiring any training or learning process.
This method captures both detailed and base elements from the multimodal images to produce more informative fused images that significantly enhance the segmentation process. Many fusion scenarios were performed and analyzed, revealing that the best fusion results are attained when combining T2-weighted (T2) with fluid-attenuated inversion recovery (FLAIR), and T1-weighted contrast-enhanced (T1CE) with FLAIR. In the second step, several models, including the U-Net and many of its modifications (attention layers, residual connections, and depthwise separable convolutions), are trained on both the original and fused modalities. A Model Selection-based fusion of these individual models is also considered for further enhancement. In the preprocessing step, the images are cropped to reduce the pixel count and minimize background interference. Experiments on the brain tumor segmentation (BraTS) 2020 dataset verify the efficiency and accuracy of the proposed methodology. The Model Selection-based fusion model achieved an average Dice score of 88.4%, an individual score of 91.1% for the whole tumor (WT) class, an average sensitivity of 86.26%, and a specificity of 91.7%. These results demonstrate the robustness and high performance of the proposed methodology compared with other state-of-the-art methods.

Item
Study the effect of eye diseases on the performance of iris segmentation and recognition using transfer deep learning methods
(Elsevier - Division Reed Elsevier India Pvt Ltd, 2023) Saleh, Abbadullah . H.; Menemencioglu, Oguzhan

A new deep learning-based iris recognition system for the case of eye disease is presented in the current study. Current state-of-the-art iris segmentation is based either on traditional low-accuracy algorithms or on heavyweight deep models.
In the segmentation part of the current study, a new iris segmentation method based on illumination correction and a modified circular Hough transform is proposed. A post-processing step is also performed to minimize false positives, and a ground truth of iris images is constructed to evaluate segmentation accuracy. Several deep learning models (GoogleNet, Inception_ResNet, XceptionNet, EfficientNet, and ResNet50) are applied in the recognition step using a transfer learning approach. In the experiments, two eye disease-based datasets are used: 684 iris images of individuals with multiple ocular diseases from Warsaw BioBase V1 and 1,793 iris images from Warsaw BioBase V2. The CASIA V3 Interval iris dataset, which contains 2,639 images of healthy irises, is used to train the deep models first; transfer learning from this healthy-eye dataset is then used to retrain the same models on the Warsaw BioBase datasets. Different training and evaluation scenarios are used during the experiments, and the trained models are evaluated using validation accuracy, training time, TPR, FNR, PPV, FDR, and test accuracy. The best accuracies are 98.5% and 97.26%, recorded by the ResNet50 model (two layers of transfer learning) trained on Warsaw BioBase V1 and V2, respectively. The results indicate that the effect of eye diseases is concentrated in the segmentation phase; no significant impact on recognition is observed, although some diseases that affect the iris structure (bloody eyes, trauma, iris pigment) can partially affect the recognition step. The study is compared with similar studies on diseased eyes, and the comparison demonstrates the efficiency and high performance of the proposed methodology against previous models on the same iris datasets.
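The circular Hough transform at the core of the first and third items can be illustrated with a minimal voting sketch. This is not the authors' implementation: it assumes a binary edge map is already available, and the fixed radius range here stands in for the pupil-derived range described in the abstracts.

```python
import numpy as np

def hough_circles(edges, r_min, r_max):
    """Minimal circular Hough transform: each edge pixel votes for all
    candidate centers (cy, cx) at each radius r; the cell with the most
    votes is returned as the detected circle."""
    h, w = edges.shape
    radii = np.arange(r_min, r_max + 1)
    acc = np.zeros((h, w, len(radii)), dtype=np.int32)
    thetas = np.linspace(0.0, 2 * np.pi, 100, endpoint=False)
    ys, xs = np.nonzero(edges)
    for y, x in zip(ys, xs):
        for ri, r in enumerate(radii):
            cy = np.round(y - r * np.sin(thetas)).astype(int)
            cx = np.round(x - r * np.cos(thetas)).astype(int)
            ok = (cy >= 0) & (cy < h) & (cx >= 0) & (cx < w)
            np.add.at(acc, (cy[ok], cx[ok], ri), 1)
    cy, cx, ri = np.unravel_index(np.argmax(acc), acc.shape)
    return cy, cx, radii[ri]

# Synthetic check: an edge ring of radius 20 centered at (32, 32).
img = np.zeros((64, 64), dtype=bool)
t = np.linspace(0.0, 2 * np.pi, 200)
img[np.round(32 + 20 * np.sin(t)).astype(int),
    np.round(32 + 20 * np.cos(t)).astype(int)] = True
result = hough_circles(img, 15, 25)  # expected close to (32, 32, 20)
print(result)
```

A production detector (e.g. OpenCV's Hough gradient variant) prunes the vote space using edge gradients; the paper's modification additionally restricts the radius range from the localized pupil, which shrinks the accumulator dramatically.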
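The second item's training-free fusion step can be loosely sketched with a plain guided filter used to split each modality into base and detail layers, which are then recombined. This is a simplified stand-in: the paper pairs the guided filter with Res2Net features, which this numpy-only sketch omits, and the window size `r` and regularizer `eps` are illustrative choices.

```python
import numpy as np

def box(a, r):
    """Mean filter over a (2r+1)x(2r+1) window with edge padding."""
    k = 2 * r + 1
    p = np.pad(a, r, mode="edge")
    out = np.zeros(a.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + a.shape[0], dx:dx + a.shape[1]]
    return out / (k * k)

def guided_filter(I, p, r=4, eps=1e-3):
    """Edge-preserving smoothing of p guided by I (He-style guided filter)."""
    mI, mp = box(I, r), box(p, r)
    a = (box(I * p, r) - mI * mp) / (box(I * I, r) - mI * mI + eps)
    b = mp - a * mI
    return box(a, r) * I + box(b, r)

# Two synthetic "modalities": split into base + detail, fuse by averaging
# bases and keeping the stronger detail at each pixel (max-abs rule).
rng = np.random.default_rng(0)
m1, m2 = rng.random((32, 32)), rng.random((32, 32))
b1, b2 = guided_filter(m1, m1), guided_filter(m2, m2)
d1, d2 = m1 - b1, m2 - b2
fused = 0.5 * (b1 + b2) + np.where(np.abs(d1) >= np.abs(d2), d1, d2)
```

Because nothing here is learned, the fusion runs in a single forward pass per image pair, which matches the abstract's claim of fusing "without requiring any training or learning process."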
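The Dice, sensitivity, and specificity figures reported in the second item follow directly from the confusion counts of a binary mask comparison; a minimal sketch on toy masks:

```python
import numpy as np

def seg_metrics(pred, gt):
    """Dice, sensitivity (recall), and specificity for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.sum(pred & gt)    # predicted tumor, actually tumor
    fp = np.sum(pred & ~gt)   # predicted tumor, actually background
    fn = np.sum(~pred & gt)   # missed tumor
    tn = np.sum(~pred & ~gt)  # correctly rejected background
    dice = 2 * tp / (2 * tp + fp + fn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return dice, sensitivity, specificity

# Toy example: tp = fp = fn = tn = 1, so all three metrics are 0.5.
pred = np.array([1, 1, 0, 0])
gt = np.array([1, 0, 1, 0])
dice, sens, spec = seg_metrics(pred, gt)
print(dice, sens, spec)  # prints 0.5 0.5 0.5
```

In multi-class settings like BraTS these are computed per tumor class (e.g. the whole-tumor score of 91.1% quoted above) and then averaged.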
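The recognition metrics listed in the third item (TPR, FNR, PPV, FDR) can all be derived per class from a confusion matrix. A small sketch, assuming rows are true identities and columns are predicted identities (a convention not stated in the abstract):

```python
import numpy as np

def per_class_rates(conf):
    """Per-class TPR, FNR, PPV, FDR from a confusion matrix where
    conf[i, j] counts samples of true class i predicted as class j."""
    tp = np.diag(conf).astype(float)
    fn = conf.sum(axis=1) - tp  # true class i, predicted otherwise
    fp = conf.sum(axis=0) - tp  # predicted class i, actually otherwise
    tpr = tp / (tp + fn)        # true positive rate (recall)
    fnr = fn / (tp + fn)        # false negative rate = 1 - TPR
    ppv = tp / (tp + fp)        # positive predictive value (precision)
    fdr = fp / (tp + fp)        # false discovery rate = 1 - PPV
    return tpr, fnr, ppv, fdr

# Toy 2-class example.
conf = np.array([[8, 2],
                 [1, 9]])
tpr, fnr, ppv, fdr = per_class_rates(conf)
print(tpr)  # prints [0.8 0.9]
```

Note that FNR and FDR are redundant given TPR and PPV (each pair sums to 1), but reporting all four, as the study does, makes per-class error patterns easier to scan.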