Yazar "Rashid, Pshtiwan Qader Rashid" seçeneğine göre listele
Item: MEDICAL IMAGE CLASSIFICATION WITH GRAPH CONVOLUTIONAL NETWORKS (2024-07) Rashid, Pshtiwan Qader Rashid

The application of deep learning methods to medical image classification has yielded significant progress in developing reliable systems for disease diagnosis. Researchers have conducted numerous studies to develop models with improved accuracy, and computed tomography (CT) scans serve as an important imaging modality for quickly diagnosing lung diseases with deep learning techniques. This doctoral dissertation presents a novel approach for detecting COVID-19 with enhanced accuracy by integrating the U-Net model with a Graph Convolutional Network (GCN) to develop a feature-extracted GCN (FGCN). We use the U-Net model for image segmentation and feature extraction, and we use the extracted features to construct an adjacency matrix that represents the underlying graph structure. We also feed the original image and the image graph built with the largest kernel to the GCN. The technique employs GCNs with different layer configurations and kernel sizes to extract important features from CT scan images; these graphs are combined into integrated input graph data and fed into the GCN, which incorporates a dropout layer to minimize overfitting during COVID-19 diagnosis. Unlike previous studies that extract deep features only from convolutional filters and pooling layers without considering the spatial connectivity of nodes, we use GCNs for classification and prediction. The developed model stands out as the first to treat lung CT scans as a graph of features classified by a graph neural network, and it outperforms the latest methods proposed for COVID-19 detection in the literature. Treating the scan as a graph allows us to identify spatial connectivity patterns, resulting in a significant improvement in association. Our study shows that the proposed architecture, the feature-extracted graph convolutional network (FGCN), detects lung diseases more accurately than other recently proposed deep learning architectures that do not use graph representations. The proposed approach surpasses several transfer learning models frequently employed for medical image diagnosis, emphasizing the capacity of the graph representation to abstract information beyond what traditional methods can. Furthermore, we compare the proposed FGCN model with six widely used transfer learning models: DenseNet201, EfficientNetB0, InceptionV3, NasNet Mobile, ResNet50, and VGG16, and we observe that the FGCN outperforms these transfer learning models. These results highlight the capacity of the graph-based approach to represent abstract information, making it suitable for similar medical diagnosis tasks.
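The abstract describes a pipeline in which features extracted from a CT scan define a graph that a GCN with dropout then classifies. Below is a minimal, illustrative sketch of that idea, assuming PyTorch and PyTorch Geometric; the simple convolutional encoder (a stand-in for the U-Net), the cosine-similarity threshold, and all layer sizes are hypothetical choices for illustration, not the dissertation's actual configuration.

```python
# Hypothetical sketch of the feature-extraction -> graph -> GCN flow.
# The encoder below is a simplified stand-in for the U-Net; the
# similarity threshold and dimensions are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch_geometric.nn import GCNConv


class FeatureExtractor(nn.Module):
    """Small convolutional encoder producing per-patch node features."""
    def __init__(self, out_dim=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(4),
            nn.Conv2d(32, out_dim, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(4),
        )

    def forward(self, x):
        # x: (1, 1, H, W) CT slice -> (num_patches, out_dim) node features
        fmap = self.conv(x)                    # (1, C, H', W')
        return fmap.flatten(2).squeeze(0).t()  # (H'*W', C)


def build_edge_index(node_feats, threshold=0.8):
    """Connect nodes whose feature cosine similarity exceeds a threshold."""
    sim = F.cosine_similarity(node_feats.unsqueeze(1),
                              node_feats.unsqueeze(0), dim=-1)
    src, dst = torch.nonzero(sim > threshold, as_tuple=True)
    return torch.stack([src, dst], dim=0)      # (2, num_edges)


class FGCNSketch(nn.Module):
    """Two GCN layers with dropout, mean-pooled into class logits."""
    def __init__(self, in_dim=64, hidden=32, num_classes=2, p_drop=0.5):
        super().__init__()
        self.gcn1 = GCNConv(in_dim, hidden)
        self.gcn2 = GCNConv(hidden, hidden)
        self.dropout = nn.Dropout(p_drop)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, x, edge_index):
        h = F.relu(self.gcn1(x, edge_index))
        h = self.dropout(h)
        h = F.relu(self.gcn2(h, edge_index))
        return self.head(h.mean(dim=0))        # graph-level logits


extractor, classifier = FeatureExtractor(), FGCNSketch()
ct_slice = torch.randn(1, 1, 128, 128)          # dummy CT image
nodes = extractor(ct_slice)
edges = build_edge_index(nodes)
logits = classifier(nodes, edges)
print(logits.shape)                             # torch.Size([2])
```

In the dissertation's actual FGCN, graphs built with different kernel sizes are combined into integrated input graph data before the GCN; the single-graph version above only illustrates the features-to-adjacency-to-GCN flow described in the abstract.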