Listing by author "Ozcan, Caner"
Now showing 1 - 20 of 21
Results Per Page
Sort options
Item
Channel selection and feature extraction on deep EEG classification using metaheuristic and Welch PSD (Springer, 2022)
Cizmeci, Huseyin; Ozcan, Caner; Durgut, Rafet
Brain computer interfaces are important for different application domains such as medicine, natural interfaces and entertainment. Besides the difficulty of gathering data from the human brain via different channel probes, preprocessing of the data is another important task that must be solved in order to achieve better results. Selection of the most active channels is an important problem for achieving high classification accuracy. Metaheuristics are good solutions for selecting the optimal subset from the original set, as they have the ability to obtain an acceptable solution in a reasonable time. At the same time, it is necessary to use the correct feature extraction method so that the data can be properly represented. In addition, traditional deep learning methods used for emotion recognition ignore the spatial properties of EEG signals, which reduces classification accuracy. In this study, we used the artificial bee colony optimization algorithm on the SEED dataset to increase classification accuracy. We implemented and tested four different variations of this algorithm. Then, we extracted the features of the selected channels with the Welch PSD method. We used an enhanced capsule network as the machine learning algorithm and showed the best configuration to solve the problem. At the end of the process, 99.98% training and 99.83% test accuracy rates were obtained.

Item
Detection of the separated endodontic instrument on periapical radiographs using a deep learning-based convolutional neural network algorithm (Wiley, 2024)
Ozbay, Yagiz; Kazangirler, Buse Yaren; Ozcan, Caner; Pekince, Adem
The study evaluated the diagnostic performance of an artificial intelligence system to detect separated endodontic instruments on periapical radiographs.
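The Welch PSD feature extraction used in the EEG study above can be sketched in plain numpy: average windowed periodograms over overlapping segments, then pool the spectrum into frequency bands. This is a minimal sketch; the sampling rate, segment length and band edges below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def welch_psd(x, fs=200.0, nperseg=256):
    """Welch PSD estimate: average Hann-windowed periodograms of 50%-overlapping segments."""
    win = np.hanning(nperseg)
    step = nperseg // 2
    segs = [x[i:i + nperseg] for i in range(0, len(x) - nperseg + 1, step)]
    scale = fs * (win ** 2).sum()
    psd = np.mean([np.abs(np.fft.rfft(s * win)) ** 2 / scale for s in segs], axis=0)
    freqs = np.fft.rfftfreq(nperseg, d=1.0 / fs)
    return freqs, psd

def band_features(x, fs=200.0, bands=((1, 4), (4, 8), (8, 14), (14, 31), (31, 50))):
    """Mean PSD per EEG frequency band: one feature per band for one channel."""
    f, p = welch_psd(x, fs=fs)
    return np.array([p[(f >= lo) & (f < hi)].mean() for lo, hi in bands])

rng = np.random.default_rng(0)
feats = band_features(rng.standard_normal(2000))   # 10 s of one channel at 200 Hz
print(feats.shape)   # (5,)
```

Stacking these per-band features over the channels selected by the metaheuristic would give the classifier input matrix.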
Three hundred seven periapical radiographs were collected and divided into 222 for training and 85 for testing to be fed to the Mask R-CNN model. Periapical radiographs were assigned to the training and test sets and labelled on the DentiAssist labeling platform. Labelled polygonal objects had their bounding boxes automatically generated by the DentiAssist system. Fractured instruments were classified and segmented. As a result of the proposed method, the mean average precision (mAP) metric was 98.809%, the precision value was 95.238%, the recall reached 98.765%, and the F1 score was 96.969%. A threshold value of 80% was chosen for the bounding boxes evaluated with the Intersection over Union (IoU) technique. The Mask R-CNN model distinguished separated endodontic instruments on periapical radiographs.

Item
Early-exit Optimization Using Mixed Norm Despeckling for SAR Images (IEEE, 2015)
Ozcan, Caner; Sen, Baha; Nar, Fatih
Speckle noise, which is inherent to Synthetic Aperture Radar (SAR) imaging, obstructs various image exploitation tasks such as edge detection, segmentation, change detection, and target recognition. Speckle reduction is generally used as a first step, which has to smooth out homogeneous regions while preserving edges and point scatterers. In remote sensing applications, the computational load and memory consumption of despeckling must be improved for SAR images. In this paper, an early-exit total variation approach is proposed; this approach combines the l1-norm and the l2-norm in order to improve despeckling quality while keeping execution times reasonably short. Speckle reduction performance, execution time and memory consumption are shown using spot mode SAR images.

Item
EEG Based Emotion Recognition with Convolutional Neural Networks (IEEE, 2020)
Ozcan, Caner; Cizmeci, Huseyin
The use of multichannel electroencephalography (EEG) signals has become increasingly common in emotion recognition.
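The Intersection over Union criterion used to accept detections in the radiograph study above reduces to a few lines. This is a generic sketch; the (x1, y1, x2, y2) corner box format is an assumption for illustration, not the paper's code.

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)       # overlap area (0 if disjoint)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))   # 25 / 175 = 0.14285714285714285
```

A detection would then count as a true positive only when its IoU with a ground-truth box exceeds the chosen threshold (80% in the study above).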
However, studies have shown that due to the complexity of EEG signals, even the signals recorded from the same person may be disturbed. Therefore, EEG signals from the human brain need to be accurately and consistently analyzed and processed. With a method based on the Welch power spectral density estimation and a convolutional neural network, a high degree of classification accuracy was obtained on the SEED EEG dataset.

Item
Enhanced deep capsule network for EEG-based emotion recognition (Springer London Ltd, 2023)
Cizmeci, Huseyin; Ozcan, Caner
Recently, it has become very popular to use electroencephalogram (EEG) signals in emotion recognition studies. However, EEG signals are much more complex than image and audio signals. There may be inconsistencies even in signals recorded from the same person. Therefore, EEG signals obtained from the human brain must be analyzed and processed accurately and consistently. In addition, traditional algorithms used to classify emotion ignore the neighborhood relationships and hierarchical order within the EEG signals. In this paper, a method including selection of suitable channels from EEG data, feature extraction by Welch power spectral density estimation of the selected channels, and an enhanced capsule network-based classification model is presented. The most important innovation of the method is adjusting the architecture of the capsule network to adapt to the EEG signals. Thanks to the proposed method, 99.51% training and 98.21% test accuracy on positive, negative and neutral emotions were achieved on the SEED EEG dataset. The obtained results were also compared and evaluated with other state-of-the-art methods.
Finally, the method was tested with the DREAMER and DEAP EEG datasets.

Item
An enhanced tooth segmentation and numbering according to FDI notation in bitewing radiographs (Pergamon-Elsevier Science Ltd, 2022)
Tekin, Buse Yaren; Ozcan, Caner; Pekince, Adem; Yasa, Yasin
Bitewing radiographic imaging is an excellent diagnostic tool for detecting caries and restorations that are difficult to view in the mouth, particularly at the molar surfaces. Labeling radiological images by an expert is a labor-intensive, time-consuming, and meticulous process. A deep learning-based approach has been applied in this study so that experts can perform dental analyses successfully, quickly, and efficiently. Computer-aided applications can now detect teeth and number classes in bitewing radiographic images automatically. In the deep learning-based approach of the study, the neural network has a structure that works according to regions. A region-based automatic segmentation system that segments each tooth using masks is given to assist analysis and lessen the effort of experts. To compute precision and recall on the test dataset, the Intersection over Union (IoU) value is determined by comparing the model's predicted and ground-truth boxes. The IoU threshold was set to 0.9 to assign bounding boxes to the class scores. Mask R-CNN is an instance segmentation method that predicts a pixel-to-pixel segmentation mask for each Region of Interest. The tooth numbering module uses the FDI notation, which is widely used by dentists, to classify and number the dental items found as a result of segmentation. In tooth segmentation, the experimental results reached 100% precision and a 97.49% mAP value. In tooth numbering, 94.35% precision and a 91.51% mAP value were obtained.
The performance of the Mask R-CNN method used has been proven by comparing it with other state-of-the-art methods.

Item
Fast Feature Preserving Despeckling (IEEE, 2014)
Ozcan, Caner; Sen, Baha; Nar, Fatih
Synthetic Aperture Radar (SAR) images contain a high amount of speckle noise, which makes edge detection, shape analysis, classification, segmentation, change detection and target recognition tasks more difficult. To overcome such difficulties, smoothing of homogeneous regions while preserving point scatterers and edges during speckle reduction is quite important. Besides, due to the huge size of SAR images in remote sensing applications, computational load and memory consumption must be further improved. In this paper, a parallel computational approach is proposed for the Feature Preserving Despeckling (FPD) method, which is chosen due to its success in speckle reduction. The speckle reduction performance, execution time and memory consumption of the proposed Fast FPD (FFPD) method are shown using spot mode SAR images.

Item
Fast Text Classification with Naive Bayes Method on Apache Spark (IEEE, 2017)
Ogul, Iskender Ulgen; Ozcan, Caner; Hakdagli, Ozlem
The increase in the number of online devices and users with the transition to the Internet of Things (IoT) increases the amount of data exponentially. Classification of this growing data, deletion of irrelevant data, and meaning extraction have become vitally important. Analyses can take various forms, such as text classification, spam analysis, and personality analysis. In this study, fast text classification was performed with machine learning on Apache Spark using the Naive Bayes method. The Spark architecture uses a distributed in-memory data collection instead of the distributed data structure presented in the Hadoop architecture to provide fast storage and analysis of data.
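The study above runs Naive Bayes through Spark's machine learning library; the classifier itself can be sketched standalone. The following is a minimal pure-Python multinomial Naive Bayes with add-one (Laplace) smoothing; the toy documents and labels are illustrative, not the Reddit data.

```python
import math
from collections import Counter

class MultinomialNB:
    """Tiny multinomial Naive Bayes text classifier with add-one smoothing."""
    def fit(self, docs, labels):
        self.classes = sorted(set(labels))
        self.prior = {c: math.log(labels.count(c) / len(labels)) for c in self.classes}
        self.counts = {c: Counter() for c in self.classes}
        for doc, c in zip(docs, labels):
            self.counts[c].update(doc.split())
        self.vocab = {w for c in self.classes for w in self.counts[c]}
        return self

    def predict(self, doc):
        def log_posterior(c):
            total = sum(self.counts[c].values()) + len(self.vocab)  # smoothed denominator
            return self.prior[c] + sum(
                math.log((self.counts[c][w] + 1) / total) for w in doc.split())
        return max(self.classes, key=log_posterior)

nb = MultinomialNB().fit(
    ["free prize win", "win cash now", "meeting at noon", "project status meeting"],
    ["spam", "spam", "ham", "ham"])
print(nb.predict("win a prize"))   # spam
```

Spark's MLlib applies the same model, but with the term counts computed as distributed feature vectors across the cluster.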
Analyses were performed on comment data from Reddit, an open-source social news site, using the Naive Bayes method. The results are presented in tables and graphs.

Item
Fast texture classification of denoised SAR image patches using GLCM on Spark (Tubitak Scientific & Technological Research Council Turkey, 2020)
Ozcan, Caner; Ersoy, Okan; Ogul, Iskender Ulgen
Classification of a synthetic aperture radar (SAR) image is an essential process for SAR image analysis and interpretation. Recent advances in imaging technologies have allowed data sizes to grow, and a large number of applications in many areas have been generated. However, analysis of high-resolution SAR images, such as classification, is a time-consuming process, and high-speed algorithms are needed. In this study, classification of high-speed denoised SAR image patches using the Apache Spark clustering framework is presented. Spark is preferred due to its powerful open-source cluster-computing framework with fast, easy-to-use, in-memory analytics. Classification of SAR images is realized at the patch level by using the supervised learning algorithms embedded in the Spark machine learning library. The feature vectors used as the classifier input are obtained using the gray-level co-occurrence matrix, which is chosen to quantitatively evaluate textural parameters and representations. The SAR image patches used to construct the feature vectors are first passed through a noise reduction algorithm to obtain higher classification accuracy. Experimental studies were carried out using naive Bayes, decision tree, and random forest algorithms to provide comparative results, and significant accuracies were achieved. The results were also compared with a state-of-the-art deep learning method.
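The gray-level co-occurrence matrix features mentioned above can be sketched in numpy for a single offset. The quantization level count, the offset, and the three Haralick-style features below are illustrative choices, not the paper's configuration.

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Symmetric, normalized gray-level co-occurrence matrix for one pixel offset."""
    g = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            g[img[y, x], img[y + dy, x + dx]] += 1
    g = g + g.T                       # make the matrix symmetric
    return g / g.sum()                # normalize to joint probabilities

def glcm_features(g):
    """Contrast, homogeneity and energy computed from a normalized GLCM."""
    i, j = np.indices(g.shape)
    return {"contrast": float((g * (i - j) ** 2).sum()),
            "homogeneity": float((g / (1.0 + (i - j) ** 2)).sum()),
            "energy": float((g ** 2).sum())}

patch = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 2, 2, 2],
                  [2, 2, 3, 3]])
print(glcm_features(glcm(patch)))
```

Computing several such feature dictionaries per patch (over different offsets and directions) and concatenating them would give the classifier input vector described above.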
High-resolution real-world TerraSAR-X images were used as data.

Item
GPU efficient SAR image despeckling using mixed norms (SPIE-Int Soc Optical Engineering, 2014)
Ozcan, Caner; Sen, Baha; Nar, Fatih
Speckle noise, which is inherent to Synthetic Aperture Radar (SAR) imaging, obstructs various image exploitation tasks such as edge detection, segmentation, change detection, and target recognition. Therefore, speckle reduction is generally used as a first step, which has to smooth out homogeneous regions while preserving edges and point scatterers. Traditional speckle reduction methods are fast and their memory consumption is insignificant. However, they are either good at smoothing homogeneous regions or at preserving edges and point scatterers. State-of-the-art despeckling methods have been proposed to overcome this trade-off. However, they introduce another trade-off between denoising quality and resource consumption, whereby higher denoising quality requires higher computational load and/or memory consumption. In this paper, a local pixel-based total variation (TV) approach is proposed, which combines the l2-norm and the l1-norm in order to improve despeckling quality while keeping execution times reasonably short. The pixel-based approach allows an efficient computation model with relatively low memory consumption. Its parallel implementation is also more efficient compared to global TV approaches, which generally require the numerical solution of sparse linear systems. However, pixel-based approaches frequently become trapped in local minima; hence, their despeckling quality is worse compared to global TV approaches. The proposed method, namely mixed norm despeckling (MND), combines the l2-norm and the l1-norm in order to improve despeckling performance by alleviating the local minima problem. All steps of the MND are parallelized using OpenMP on CPU and CUDA on GPU.
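As a rough illustration of the TV-style despeckling discussed in the entries above, the following numpy sketch minimizes an l2 data-fidelity term plus a smoothed l1 TV term by gradient descent. This is a generic single-image sketch with illustrative parameters, not the MND algorithm itself; the smoothing constant, step size and speckle model are assumptions.

```python
import numpy as np

def tv_despeckle(f, lam=0.5, step=0.1, iters=100, eps=0.1):
    """Gradient descent on 0.5*||u - f||^2 + lam * sum(sqrt(|grad u|^2 + eps))."""
    u = f.copy()
    for _ in range(iters):
        ux = np.diff(u, axis=1, append=u[:, -1:])   # forward differences
        uy = np.diff(u, axis=0, append=u[-1:, :])
        mag = np.sqrt(ux ** 2 + uy ** 2 + eps)      # smoothed gradient magnitude
        px, py = ux / mag, uy / mag
        div = px - np.roll(px, 1, axis=1) + py - np.roll(py, 1, axis=0)
        u -= step * ((u - f) - lam * div)           # data term + TV term gradients
    return u

rng = np.random.default_rng(1)
clean = np.ones((32, 32)); clean[:, 16:] = 4.0           # one step edge
noisy = clean * rng.gamma(4.0, 0.25, clean.shape)        # multiplicative speckle model
out = tv_despeckle(noisy)
print(out.shape)   # (32, 32)
```

The l1-flavored TV term is what preserves the step edge while flattening the homogeneous halves; replacing `mag` with a constant would turn the regularizer into an l2 (Tikhonov) term that blurs edges, which is the trade-off the mixed-norm formulations above negotiate.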
The speckle reduction performance, execution time and memory consumption of the proposed method are shown using synthetic images and TerraSAR-X spot mode SAR images.

Item
An Image Fusion Method of SAR and Optical Images, Based on Image Intensity Fields, by Reducing the Effect of Speckle Noise (Budapest Tech, 2024)
Gencay, Semih; Ozcan, Caner
This study proposes an improved fusion method that takes advantage of the combined strengths of existing fusion methods. First, current methods are compared using a fusion of noisy images from the Synthetic Aperture Radar (SAR) database with optical images acquired at the same location and time. The obtained image and metric results showed that combining optical images with de-noised SAR provides better performance. Experiments have also shown that removing noise in SAR data causes the loss of important details in the images. The proposed method divides the image into small patches in the noise removal phase. By calculating the standard deviation of these sub-patches, a different noise reduction ratio is applied for each region, thus preventing the loss of important detail features in the image. The proposed method has been compared with fusion methods recognized in the existing literature. Experimental results demonstrate that the proposed method performs better than current fusion methods. The proposed method also yields better metric results than other methods and eliminates the noise problems often present in the images.

Item
Investigation of the performance of LU decomposition method using CUDA (Elsevier Science Bv, 2012)
Ozcan, Caner; Sen, Baha
In recent years, parallel processing has been widely used in the computer industry. Software developers have to deal with parallel computing platforms and technologies to provide novel and rich experiences. We present a novel algorithm to solve dense linear systems using the Compute Unified Device Architecture (CUDA). High-level linear algebra operations require intensive computation.
In this study, a Graphics Processing Unit (GPU)-accelerated implementation of the LU linear algebra routine is presented. LU decomposition is a decomposition of the form A = LU, where A is a square matrix. The main idea of the LU decomposition is to record the steps used in Gaussian elimination on A in the places where zeros are produced. L and U are lower and upper triangular matrices, respectively. This means that L has only zeros above the diagonal and U has only zeros below the diagonal. We have worked to increase performance with proper data representation and by reducing row operations on the GPU. Because of the high arithmetic throughput of GPUs, initial results from experiments promised a bright future for GPU computing, and it has been shown to be useful for scientific computations. GPUs have high memory bandwidth and more floating point units compared to the CPU. We tested our study on systems with different GPUs and CPUs. The computations were also evaluated for different linear systems. When we compared the results obtained from both systems, better performance was obtained with GPU computing. According to the results, GPU computation worked approximately 3 times faster than CPU computation. Our implementation provides a significant performance improvement, so it can easily be used to solve dense linear systems. (C) 2011 Published by Elsevier Ltd.

Item
KEYWORD EXTRACTION BASED ON WORD SYNONYMS USING WORD2VEC (IEEE, 2019)
Ogul, Iskender Ulgen; Ozcan, Caner; Hakdagli, Ozlem
Nowadays, the data produced by online individuals are increasing exponentially. The raw information that this growing data holds is transformed into meaningful outputs using machine learning and deep learning methods. Generally, supervised learning methods are used for information extraction and classification. Supervised learning is based on the training set on which classification algorithms are trained.
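The LU factorization described in the CUDA study above can be sketched as a plain Doolittle elimination. This is a CPU-only sketch with no pivoting, assuming a well-conditioned matrix; the paper's GPU implementation is not reproduced here.

```python
import numpy as np

def lu(a):
    """Doolittle LU decomposition without pivoting: returns L, U with A = L @ U."""
    n = a.shape[0]
    l, u = np.eye(n), np.zeros_like(a, dtype=float)
    for k in range(n):
        # Row k of U from already-computed rows, then the multipliers below the pivot.
        u[k, k:] = a[k, k:] - l[k, :k] @ u[:k, k:]
        l[k + 1:, k] = (a[k + 1:, k] - l[k + 1:, :k] @ u[:k, k]) / u[k, k]
    return l, u

a = np.array([[4.0, 3.0, 2.0],
              [8.0, 7.0, 5.0],
              [4.0, 5.0, 6.0]])
L, U = lu(a)
print(np.allclose(L @ U, a))   # True
```

The multipliers stored in L are exactly the Gaussian elimination steps recorded "in the places where zeros are produced," as the abstract puts it; a production solver would add partial pivoting for numerical stability.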
In this work, a keyword extraction solution is proposed to classify text data more conveniently. The developed solution is based on the Word2Vec algorithm, which works by taking into consideration the semantic meaning of words, unlike general approaches that are based on word frequency. Word2Vec, a word embedding algorithm, works by calculating the word weights, semantic relationships, and the final weights of the vectors. The obtained keywords are trained with the Naive Bayes and Decision Tree methods, and the performance of the proposed method is shown with a classification example.

Item
A new effective denoising filter for high density impulse noise reduction (Tubitak Scientific & Technological Research Council Turkey, 2022)
Elawady, Iman; Ozcan, Caner
Today, thanks to the rapid development of technology, the importance of digital images is increasing. However, sensor errors that may occur during acquisition, interruptions in the transmission of images, and errors in storage cause noise that degrades data quality. Salt and pepper noise, a common impulse noise, is one of the most well-known types of noise in digital images. This noise negatively affects the detailed analysis of the image. It is very important that pixels affected by noise are restored without loss of the image's fine details, especially at high noise densities. Although many filtering algorithms have been proposed to remove noise, the enhancement of images with high noise levels is still complex, inefficient, or requires very long runtimes. In this paper, we propose an effective denoising filter that can restore the image effectively in terms of quality and speed, with less complexity, at high noise density levels.
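The synonym-based keyword idea above rests on cosine similarity between word vectors: words with similar meanings end up with similar embeddings. The following sketch ranks candidate keywords by closeness to a topic word using tiny hand-made vectors in place of trained Word2Vec embeddings; all vector values and words are hypothetical, for illustration only.

```python
import numpy as np

# Toy stand-ins for trained Word2Vec embeddings (hypothetical values).
vectors = {
    "car":     np.array([0.9, 0.1, 0.0]),
    "vehicle": np.array([0.8, 0.2, 0.1]),
    "engine":  np.array([0.7, 0.3, 0.0]),
    "banana":  np.array([0.0, 0.1, 0.9]),
}

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_keywords(candidates, topic_word):
    """Rank candidate keywords by semantic closeness to a topic word."""
    t = vectors[topic_word]
    return sorted(candidates, key=lambda w: cosine(vectors[w], t), reverse=True)

print(rank_keywords(["banana", "engine", "vehicle"], "car"))
```

With real embeddings trained by Word2Vec, the same ranking step surfaces synonyms and related terms as keywords, which is what distinguishes this approach from frequency-based extraction.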
In the experimental studies, we compare the denoising results of the proposed method with other state-of-the-art methods; the proposed algorithm is quantitatively and visually comparable to these algorithms when the noise intensity is up to 90%.

Item
Numbering teeth in panoramic images: A novel method based on deep learning and heuristic algorithm (Elsevier - Division Reed Elsevier India Pvt Ltd, 2023)
Karaoglu, Ahmet; Ozcan, Caner; Pekince, Adem; Yasa, Yasin
Dental problems are among the most common health problems. To detect and analyze these problems, dentists often use panoramic radiographs, which show the entire mouth and have low radiation exposure and exposure time. Analyzing these radiographs is a lengthy and tedious process. Recent studies have enabled dental radiologists to perform the analyses faster with various kinds of artificial intelligence support. In this study, the numbering performance of Mask R-CNN and our heuristic algorithm-based method was verified on panoramic dental radiographs according to the Federation Dentaire Internationale (FDI) system. Ground-truth labelling of the images required for training the deep learning algorithm was performed by two dental radiologists using the web-based labelling software DentiAssist, created by the first author. The dataset was created from 2702 anonymized panoramic radiographs. The dataset is divided into 1747, 484, and 471 images, which serve as the training, validation, and test sets. The dataset was validated using the k-fold cross-validation method (k = 5). A three-step heuristic algorithm was developed to improve the Mask R-CNN segmentation and numbering results. As far as we know, our study is the first in the literature to use a heuristic method in addition to traditional deep learning algorithms in detection, segmentation and numbering studies in panoramic radiography.
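A classic baseline for the salt-and-pepper problem described in the denoising entry above is a median filter applied only to corrupted pixels. This is the textbook baseline for comparison, not the filter proposed in the paper; the 3x3 window and extreme-value noise detector are standard assumptions.

```python
import numpy as np

def sp_median_filter(img, low=0, high=255):
    """Replace only salt (high) and pepper (low) pixels with the median of their
    3x3 neighborhood; pixels judged clean are left untouched."""
    out = img.copy()
    padded = np.pad(img, 1, mode="edge")          # replicate borders for edge pixels
    noisy = (img == low) | (img == high)          # simple extreme-value noise detector
    for y, x in zip(*np.nonzero(noisy)):
        out[y, x] = np.median(padded[y:y + 3, x:x + 3])
    return out

img = np.full((5, 5), 100, dtype=np.uint8)
img[2, 2] = 255                       # salt pixel
img[1, 3] = 0                         # pepper pixel
restored = sp_median_filter(img)
print(restored[2, 2], restored[1, 3])   # 100 100
```

At the 90% noise densities discussed above this plain filter breaks down, because most of each window is itself corrupted; that regime is exactly what the more elaborate filters in the paper target.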
The experimental results show that the mAP (@IoU = 0.5), precision, recall and F1 scores are 92.49%, 96.08%, 95.65% and 95.87%, respectively. The results of the learning-based algorithm were improved by more than 4%. In our research, we discovered that heuristic algorithms can improve the accuracy of deep learning-based algorithms. Our research will significantly reduce dental radiologists' workload, speed up diagnostic processes, and improve the accuracy of deep learning systems. (c) 2022 Karabuk University. Publishing services by Elsevier B.V. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).

Item
Optimization based manifold embedding for hyperspectral image classification and visualization (Taylor & Francis Ltd, 2021)
Yildirim, Mehmet Zahid; Ozcan, Caner; Ersoy, Okan
Remote sensing and interpretation of hyperspectral images are becoming an increasingly important field of research. High dimensional hyperspectral images consist of hundreds of bands and reflect the properties of different materials. The need for more detail about objects and the improvement of sensor resolutions have resulted in the generation of larger hyperspectral data. Many years of research have shown that there are many difficulties in the pre-processing of these data due to their high dimensionality. Recent studies have revealed that manifold learning techniques are a very important solution to this problem. However, as the complexity of the data increases, the performance of these methods cannot reach a sufficient level. This letter proposes a particle swarm-based multidimensional field embedding method inspired by the force field formulation to increase performance. Detailed comparative analyses of the proposed method were made on the Botswana and Kennedy Space Center (KSC) data. Results on other popular datasets are also shared.
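Precision, recall and F1 figures like those reported in the tooth-numbering study above follow directly from detection counts. The counts below are hypothetical, chosen only to show the arithmetic, not taken from the paper.

```python
def detection_scores(tp, fp, fn):
    """Precision, recall and F1 from true positive, false positive and false negative counts."""
    precision = tp / (tp + fp)            # fraction of detections that are correct
    recall = tp / (tp + fn)               # fraction of ground-truth objects found
    f1 = 2 * precision * recall / (precision + recall)   # harmonic mean
    return precision, recall, f1

p, r, f = detection_scores(tp=88, fp=4, fn=4)
print(round(p, 4), round(r, 4), round(f, 4))   # 0.9565 0.9565 0.9565
```

mAP additionally averages precision over recall levels and (for @IoU = 0.5) counts a detection as a true positive only when its overlap with a ground-truth box reaches 50%.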
Experimental results show that the proposed method is superior to existing manifold embedding methods in classification accuracy and visualization of hyperspectral data. In addition, an optimization-based solution is presented to the problem of parameter determination for existing embedding methods.

Item
Sparsity-Driven Despeckling for SAR Images (IEEE-Inst Electrical Electronics Engineers Inc, 2016)
Ozcan, Caner; Sen, Baha; Nar, Fatih
Speckle noise inherent in synthetic aperture radar (SAR) images seriously affects the results of various SAR image processing tasks such as edge detection and segmentation. Thus, speckle reduction is critical and is used as a preprocessing step for smoothing homogeneous regions while preserving features such as edges and point scatterers. Although state-of-the-art methods provide better despeckling compared with conventional methods, their resource consumption is higher. In this letter, a sparsity-driven total-variation (TV) approach employing the l0-norm, a fractional norm, or the l1-norm to smooth homogeneous regions with minimal degradation of edges and point scatterers is proposed. The proposed method, sparsity-driven despeckling (SDD), is capable of using different norms controlled by a single parameter and provides better or similar despeckling compared with the state-of-the-art methods, with shorter execution times. The despeckling performance and execution time of the SDD are shown using synthetic and real-world SAR images.

Item
Sparsity-Driven Despeckling Method with Low Memory Usage (IEEE, 2016)
Ozcan, Caner; Sen, Baha; Nar, Fatih
Speckle noise, which is inherent to Synthetic Aperture Radar (SAR) imaging, makes it difficult to detect targets and recognize spatial patterns on Earth. Thus, despeckling is critical and is used as a preprocessing step for smoothing homogeneous regions while preserving features such as edges and point scatterers. In this study, a low-memory version of the previously proposed sparsity-driven despeckling (SDD) method is presented.
All steps of the method are parallelized using OpenMP on CPU and CUDA on GPU. Execution time and despeckling performance are shown using real-world SAR images.

Item
STREAM TEXT DATA ANALYSIS ON TWITTER USING APACHE SPARK STREAMING (IEEE, 2018)
Hakdagli, Ozlem; Ozcan, Caner; Ogul, Iskender Ulgen
With today's developing technology, people's access to and production of information have become very fast. This information is instantly created, entered into data systems, and updated. Sources of streaming data can be transformed into valuable analysis results when they are handled with targeted methods. In this study, a text data field is determined to perform analysis on instantaneously generated data, and Twitter, the richest platform for instant text data, is used. Twitter instantly generates large quantities of varied data and presents it as open source through an API. The stream analysis environment of Apache Spark, a machine learning framework, is used to analyze these resources. Situation analysis was performed using the Support Vector Machine, Decision Tree and Logistic Regression algorithms presented within this environment. The results are presented in tables.

Item
Total Variation Based 3D Skull Segmentation (IEEE, 2016)
Atasoy, Ferhat; Sen, Baha; Nar, Fatih; Ozcan, Caner; Bozkurt, Ismail
Segmentation is widely used for determining tumors and other lesions and for classifying tissues for various analysis purposes in medical images. However, being an ill-posed problem, there is no single segmentation method that can perform successfully for all kinds of data. In this study, a novel total variation (TV) based skull segmentation method is proposed. The skull segmentation performance of the proposed method is shown using computed tomography (CT) images.