Yazar "Ozacar, Kasim" seçeneğine göre listele
Now showing 1 - 5 of 5
Item
3D Selection Techniques for Mobile Augmented Reality Head-Mounted Displays
(Oxford Univ Press, 2017) Ozacar, Kasim; Hincapie-Ramos, Juan David; Takashima, Kazuki; Kitamura, Yoshifumi
We conducted a user study evaluating five selection techniques for augmented reality in optical see-through head-mounted displays (OST-HMDs). The techniques we studied aim at supporting mobile usage scenarios in which the devices do not need external tracking tools or special environments, and therefore we selected techniques that rely solely on tracking technologies built into conventional, commercially available OST-HMDs [i.e. gesture trackers, gaze tracking and inertial measurement units (IMUs)]. Two techniques are based on raycasting using built-in IMU sensing, and three are based on a hand-controlled 3D cursor using gestural tracking. We compared these techniques in an experiment with 12 participants. Our results show that raycasting using head orientation (i.e. the IMU on the headset) was the fastest, least fatiguing and most preferred technique for selecting spatially arranged objects. We discuss the implications of our findings for the design of interaction techniques in mobile OST-HMDs.

Item
HeCapsNet: An enhanced capsule network for automated heel disease diagnosis using lateral foot X-Ray images
(Wiley, 2024) Taher, Osamah; Ozacar, Kasim
Foot pain, particularly that caused by heel spurs and Sever's disease, significantly impacts mobility and daily activities for many people. These diseases are traditionally diagnosed by orthopedic specialists using X-ray images of the lateral foot. In certain situations, the absence of specialists requires the adoption of AI-based methods; however, the lack of a dataset hinders the use of AI for the preliminary diagnosis of these diseases. Therefore, this study first presents a novel dataset consisting of 3956 annotated lateral foot X-ray images and uses the original capsule network (CapsNet) to automatically detect and classify heel bone diseases. CapsNet's low accuracy of 73.99%, caused by its shallow feature-extraction layers, led us to search for a new model. This paper therefore also proposes a new enhanced capsule network (HeCapsNet) that adjusts the feature-extraction layers, adds extra convolutional layers, uses the He normal kernel initializer instead of the default normal initializer, and uses the 'same' padding scheme to perform better with medical images. The proposed model achieves higher accuracy rates: 97.29% for balanced data, 94.19% for imbalanced data, an area under the curve (AUC) of 98.69%, and a fivefold cross-validation accuracy of 95.77%. We then compared our proposed model with state-of-the-art modified CapsNet models on various datasets (MNIST, Fashion-MNIST, CIFAR10, and a brain tumor dataset). HeCapsNet performed similarly to modified CapsNets on relatively simple non-medical datasets such as MNIST and Fashion-MNIST, but performed better on more complex medical datasets.

Item
MedCapsNet: A modified Densenet201 model integrated with capsule network for heel disease detection and classification
(Cell Press, 2024) Taher, Osamah; Ozacar, Kasim
Conditions affecting the heel bone, such as heel spurs and Sever's disease, pose significant challenges to patients' daily activities. While orthopedic and traumatology doctors rely on foot X-rays for diagnosis, there is a need for AI-based detection and classification of these conditions. Therefore, this study addresses this need by proposing MedCapsNet, a novel hybrid capsule model combining a modified DenseNet201 with a capsule network, designed to accurately detect and classify heel bone diseases using lateral foot X-ray images. We conducted a comprehensive series of experiments on the proposed hybrid architecture with several datasets, including the Heel dataset, the BreaKHis v1 breast cancer dataset, the HAM10000 skin cancer dataset, and the Jun Cheng brain MRI dataset. The first experiment evaluates the proposed model for heel diseases, while the other experiments evaluate the model on a range of medical datasets to demonstrate its performance over existing studies. On the heel dataset, MedCapsNet achieves an accuracy of 96.38% and an AUC of 98.35% without data augmentation, and a cross-validation accuracy of 95.69% with an AUC of 98.87%. Despite employing a fixed architecture and hyperparameters, the proposed model outperformed other models across four distinct datasets, including MRI, X-ray, and microscopic images of various diseases. This is notable because different types of medical image datasets typically require different architectures and hyperparameters to achieve optimal performance.
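
As a rough illustration of the kind of changes the HeCapsNet entry above describes (extra convolutional layers, He normal kernel initialization and 'same' padding ahead of the capsule stage), the following is a minimal Keras-style sketch; the layer counts, filter sizes and input shape are assumptions made for illustration, not the authors' published configuration.

# Minimal sketch of a HeCapsNet-style feature-extraction front end, per the
# entry above: extra Conv2D layers, 'he_normal' initialization and 'same'
# padding. Layer counts, filter sizes and input shape are assumptions.
import tensorflow as tf
from tensorflow.keras import layers

def feature_extractor(input_shape=(128, 128, 1)):
    inputs = layers.Input(shape=input_shape)            # lateral foot X-ray
    x = inputs
    for filters in (64, 128, 256):                      # assumed filter progression
        x = layers.Conv2D(filters, 3, padding="same",
                          kernel_initializer="he_normal",
                          activation="relu")(x)
        x = layers.MaxPooling2D()(x)
    # A primary-capsule stage (reshape into capsule vectors + squash) and
    # routing would follow here; omitted to keep the sketch short.
    return tf.keras.Model(inputs, x)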
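
Similarly, for the MedCapsNet entry above, a minimal sketch of the general idea (a DenseNet201 backbone feeding a primary-capsule-style stage) might look like the following; the truncation point, capsule dimension, class count and classification head are assumptions, and the l2-normalization merely stands in for a proper squash-and-routing capsule head.

# Hypothetical sketch of a DenseNet201-plus-capsule hybrid in the spirit of
# the MedCapsNet entry above. All architectural details below are assumptions.
import tensorflow as tf
from tensorflow.keras import layers

def build_medcapsnet_like(input_shape=(224, 224, 3), n_classes=3, caps_dim=8):
    # DenseNet201 backbone as feature extractor (weights=None keeps the
    # sketch self-contained; the paper's training setup is not shown here).
    backbone = tf.keras.applications.DenseNet201(
        include_top=False, weights=None, input_shape=input_shape)
    x = layers.Conv2D(256, 3, padding="same", activation="relu")(backbone.output)
    x = layers.Reshape((-1, caps_dim))(x)                 # group features into capsule vectors
    x = layers.Lambda(lambda v: tf.math.l2_normalize(v, axis=-1))(x)  # stand-in for squash
    x = layers.GlobalAveragePooling1D()(x)
    outputs = layers.Dense(n_classes, activation="softmax")(x)
    return tf.keras.Model(backbone.input, outputs)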

Item
SmartEscape: A Mobile Smart Individual Fire Evacuation System Based on 3D Spatial Model
(MDPI, 2018) Atila, Umit; Ortakci, Yasin; Ozacar, Kasim; Demiral, Emrullah; Karas, Ismail Rakip
We propose SmartEscape, a real-time, dynamic, intelligent and user-specific evacuation system with a mobile interface for emergency cases such as fire. Unlike past work, we account for dynamically changing conditions and calculate a personal route for each evacuee by considering his or her individual features. SmartEscape is fast, low-cost, resource-efficient and mobile-supported. It collects various environmental sensor data, takes the evacuee's individual features into account, uses an artificial neural network (ANN) to estimate the personal usage risk of each link in the building, eliminates the risky links, and calculates an optimum escape route under the existing circumstances. Our system then guides the evacuee to the exit along the calculated route with vocal and visual instructions on the smartphone. The position of the evacuee is detected by RFID (Radio-Frequency Identification) technology, while the changing environmental conditions are measured by various sensors in the building. The ANN predicts the dynamically changing risk states of all links according to the changing environmental conditions. Results show that SmartEscape, with 98.1% accuracy in predicting the risk levels of links for each individual evacuee in a building, is capable of evacuating a large number of people simultaneously through the shortest and safest routes.
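
To make the routing step in the SmartEscape entry above concrete, here is a minimal sketch under assumed interfaces: a risk model scores each link, links above a risk threshold are dropped, and a shortest path is computed on the remaining graph. The threshold, the length-times-risk edge weighting, and the risk_model and sensor_data objects are illustrative placeholders, not the published system.

# Minimal sketch of risk-filtered route planning in the spirit of SmartEscape.
# risk_model, sensor_data and the weighting scheme are assumed placeholders.
import networkx as nx

RISK_THRESHOLD = 0.7  # assumed cut-off for "too risky to traverse"

def plan_escape_route(building_graph, risk_model, sensor_data, evacuee, start, exit_node):
    # Keep only links whose predicted risk is acceptable; weight the rest by
    # length inflated with risk so safer links are preferred.
    usable = nx.Graph()
    for u, v, data in building_graph.edges(data=True):
        risk = risk_model.predict(sensor_data[(u, v)], evacuee)  # assumed score in [0, 1]
        if risk < RISK_THRESHOLD:
            usable.add_edge(u, v, weight=data["length"] * (1.0 + risk))
    # Shortest (and, via the weighting, safest) remaining route to the exit.
    return nx.shortest_path(usable, start, exit_node, weight="weight")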

Item
VRArchEducation: Redesigning building survey process in architectural education using collaborative virtual reality
(Pergamon-Elsevier Science Ltd, 2023) Ozacar, Kasim; Ortakci, Yasin; Kucukkara, Muhammed Yusuf
Architectural education requires students to work as a group under the supervision of a teacher in the same physical environment, since they need interaction to learn how to take a set of measurements in practice. However, a number of obstacles, such as pandemics, weather conditions, and crowded work sites, can prevent them from being and working together in a physical environment. Existing digital solutions, such as online and distance education, do not provide the immersive and collaborative learning environment needed to help students practice in the same virtual environment. Therefore, the main aim of this study is to develop an immersive architectural education environment, named VRArchEducation, using Virtual Reality (VR). Specifically, we design and implement a system that allows students and teachers to enjoy direct and simultaneous interaction through their virtual avatars regardless of their current physical location. Furthermore, the proposed VRArchEducation enables users to have a hands-on immersive experience while performing a set of measurement tasks. VRArchEducation provides four fundamental virtual measurement tools: a water level hose, a plumb, a measurement tape, and a sketching board. Using these tools, students can perform a building survey process together in the VRArchEducation system, just as in a traditional class environment. We conduct a user study to evaluate the system's effectiveness, success, accuracy, and usability. The proposed VRArchEducation has great potential to be used in architectural education as an alternative to the traditional environment.