IJITCS Vol. 16, No. 5, Oct. 2024
REGULAR PAPERS
Hospitals are the primary hubs for healthcare service provision in Ethiopia; however, they face significant challenges in adopting digital health information system solutions due to disparate, non-interoperable systems and limited access. Information technology, especially cloud computing, is crucial in healthcare for efficient data management, secure storage, real-time access to critical information, seamless provider communication, enhanced collaboration, and scalable IT infrastructure. This study investigated the challenges to standardizing smart and green healthcare information services and proposed a cloud-based model for overcoming them. We conducted a mixed-methods study in 11 public hospitals, employing quantitative and qualitative approaches with diverse stakeholders (N = 103). Data were collected through surveys, interviews, and technical observations, using purposive quota sampling with sample sizes determined via the Raosoft platform, and analyzed using IBM SPSS. Findings revealed several shortcomings in existing information systems, including limited storage, scalability, and security; impaired data sharing and collaboration; accessibility issues; lack of interoperability; ownership ambiguity; unreliable data recovery; environmental concerns; affordability challenges; and inadequate policy enforcement. Notably, hospitals lacked a centralized data management system, cloud-enabled systems for remote access, and modern data recovery strategies. Despite these challenges, 90.3% of respondents expressed interest in adopting cloud-enabled data recovery systems. However, infrastructure limitations, inadequate cloud computing/IT knowledge, lack of top management support, digital illiteracy, limited innovation, and data security concerns were identified as barriers to cloud adoption. The study further identified three existing healthcare information systems: paper-based methods, electronic medical catalog systems, and District Health Information System 2 (DHIS2).
Limitations of the paper-based method include error-proneness, significant cost, data fragmentation, and restricted remote access. Growing hospital congestion and carbon footprint highlight the need for sustainable solutions. Based on these findings, we proposed a cloud-based model tailored to the Ethiopian context. This six-layered model, delivered as Software-as-a-Service within a community cloud deployment, aims to improve healthcare services through instant access, unified data management, and evidence-based medical practice. The model demonstrates high acceptability and potential for improving healthcare delivery, and implementation recommendations are provided based on the proposed model.
Sentiment analysis on Twitter gives organizations and individuals a quick and effective instrument for monitoring how the public perceives them and their competitors. A modest number of evaluation datasets have been produced in recent years to test the effectiveness of Twitter sentiment analysis algorithms. This research reviews eight publicly accessible, manually annotated evaluation datasets for Twitter sentiment analysis. Through this review, we show that a widespread weakness of many of these datasets, when used for sentiment analysis at the target (entity) level, is the absence of distinct sentiment labels for tweets and the entities they contain [1]. For example, the tweet "I love my iPhone but I despise my iPad" could be assigned a mixed label, while the entity iPhone within it should be annotated with a positive label. To overcome this limitation and complement existing evaluation datasets, we present STS-Gold, a novel evaluation dataset in which tweets and targets (entities) are annotated individually and may therefore carry different sentiment labels. The research furthermore compares the datasets on several characteristics, such as total number of tweets, vocabulary size, and sparsity [2], and examines pairwise relationships between these characteristics and how they relate to sentiment classifier performance on the different datasets. In this study we applied and compared five different classifiers; in our experiments, the bagging ensemble classifier performed best, achieving an accuracy of 94.2% on the GASP dataset and 91.3% on the STS-Gold dataset.
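The bagging ensemble idea behind the best-performing classifier can be sketched in standard-library Python. This is a minimal illustration, not the paper's actual setup: the one-dimensional toy "feature" and decision stumps below are hypothetical stand-ins for real tweet features, used only to show bootstrap resampling and majority voting.

```python
import random
from collections import Counter

def fit_stump(xs, ys):
    """Pick the threshold on a 1-D feature minimizing training error."""
    best_t, best_err = xs[0], len(xs) + 1
    for t in xs:
        err = sum((x >= t) != bool(y) for x, y in zip(xs, ys))
        if err < best_err:
            best_t, best_err = t, err
    return best_t

def bagging_fit(xs, ys, n_estimators=15, seed=0):
    """Train one weak learner per bootstrap resample of the data."""
    rng = random.Random(seed)
    idx = list(range(len(xs)))
    stumps = []
    for _ in range(n_estimators):
        boot = [rng.choice(idx) for _ in idx]  # sample with replacement
        stumps.append(fit_stump([xs[i] for i in boot], [ys[i] for i in boot]))
    return stumps

def bagging_predict(stumps, x):
    """Majority vote over the ensemble's individual predictions."""
    votes = Counter(int(x >= t) for t in stumps)
    return votes.most_common(1)[0][0]
```

In the paper's experiments the base learners operate on annotated tweet data (GASP, STS-Gold); the voting mechanism is the same.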
Python is widely used in artificial intelligence (AI) and machine learning (ML) because of its flexibility, adaptability, rich libraries, active community, and broad ecosystem, which make it a popular choice for AI development. Python's compatibility with Java has already been examined using TCP socket programming on both non-graphical and graphical user interfaces, and implementing it in the Jakarta Faces web application is highly desirable for grabbing potential competitive advantages. Python data analysis modules such as NumPy, pandas, and SciPy, visualization modules such as Matplotlib and Seaborn, and the machine-learning module scikit-learn are intended to be integrated into the Jakarta Faces web application. The research method uses similar TCP socket programming for the enhancement process, which allows instructions and data to be exchanged between Python and Jakarta Faces web applications. The findings emphasize the significance of modernizing data science and machine learning (ML) workflows for Jakarta Faces web developers, who can take advantage of Python modules without using any third-party libraries. Moreover, this research provides a well-defined research design for an execution model, incorporating practical implementation procedures and highlighting the results of the innovative fusion of AI from Python into Jakarta Faces.
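The Python side of such an instruction-and-data exchange can be sketched with the standard library alone. This is an illustrative sketch, not the paper's code: a tiny TCP server answers one JSON request, and the client role (played by the Jakarta Faces/Java application in the paper) is simulated here in-process; the JSON message shape and the `mean` computation are assumptions standing in for a real NumPy/pandas workload.

```python
import json
import socket
import threading

def serve_once(host="127.0.0.1", port=0):
    """Accept one connection, read a JSON instruction, reply with a JSON result."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))
    srv.listen(1)
    port = srv.getsockname()[1]  # port 0 -> OS picks a free port

    def handler():
        conn, _ = srv.accept()
        with conn:
            request = json.loads(conn.recv(4096).decode("utf-8"))
            # Stand-in for a pandas/scikit-learn computation on the Python side.
            result = {"mean": sum(request["data"]) / len(request["data"])}
            conn.sendall(json.dumps(result).encode("utf-8"))
        srv.close()

    threading.Thread(target=handler, daemon=True).start()
    return port

def send_request(port, payload):
    """What the Java/Jakarta Faces client would do, shown here in Python."""
    with socket.create_connection(("127.0.0.1", port)) as sock:
        sock.sendall(json.dumps(payload).encode("utf-8"))
        return json.loads(sock.recv(4096).decode("utf-8"))
```

On the Java side, the same exchange would use `java.net.Socket` with a matching JSON codec; the wire protocol is whatever both ends agree on.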
Scheduling is an NP-hard problem, and metaheuristic algorithms are often used to find approximate solutions within a feasible time frame. Existing metaheuristic algorithms, such as ACO, PSO, and BOA, address this problem in either cloud or fog environments. However, when these environments are combined into a hybrid cloud-fog environment, these algorithms become inefficient due to inadequate handling of local and global search strategies. This inefficiency leads to suboptimal scheduling across the cloud-fog environment because the algorithms fail to adapt effectively to the combined challenges of both environments. In our proposed Improved Butterfly Optimization Algorithm (IBOA), we enhance adaptability by dynamically updating the computation cost, communication cost, and total cost, effectively balancing local and global search strategies. This dynamic adaptation allows the algorithm to select the best resources for executing tasks in both cloud and fog environments. We implemented the proposed approach in the CloudSim simulator and compared it with traditional algorithms such as ACO, PSO, and BOA. The results demonstrate that IBOA reduces total cost, communication cost, and computation cost by 19.65%, 18.28%, and 25.41%, respectively, making it a promising solution for real-world cloud-fog computing (CFC) applications.
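The cost model the abstract describes (total cost = computation cost + communication cost) can be sketched as follows. This is a minimal illustration of the objective only, not the paper's IBOA: the greedy node selection, the node attributes, and all numbers are hypothetical, whereas IBOA explores candidates with butterfly-inspired local and global search.

```python
def total_cost(task_mi, data_mb, node):
    """Total cost = computation cost + communication cost (illustrative units)."""
    computation = task_mi / node["mips"]         # execution time on the node
    communication = data_mb / node["bandwidth"]  # transfer time to the node
    return computation + communication

def select_node(task_mi, data_mb, nodes):
    """Greedy stand-in for IBOA's search: pick the lowest-total-cost node."""
    return min(nodes, key=lambda n: total_cost(task_mi, data_mb, n))

# Hypothetical resources: fog nodes are slower but close (high effective
# bandwidth); cloud nodes are faster but costlier to reach.
nodes = [
    {"name": "fog-1", "mips": 1000, "bandwidth": 100},
    {"name": "cloud-1", "mips": 8000, "bandwidth": 10},
]
```

Under this toy model, data-heavy tasks land on the fog node and compute-heavy tasks on the cloud node, which is the adaptivity the hybrid environment demands.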
The process of health insurance policy selection is a critical decision with far-reaching financial implications. Its complexity necessitates a structured approach to facilitate informed decision-making amidst numerous criteria and provider options. This study addresses the health insurance policy selection problem by employing a comprehensive methodology integrating the Spherical Fuzzy Analytic Hierarchy Process (SF-AHP) and the Combined Compromise Solution (CoCoSo) algorithm. Eight experienced experts, four each from academia and industry, were engaged, and eleven critical factors were identified through literature review, a survey, and expert opinions. SF-AHP was utilized to assign weights to these factors, with claim settlement ratio (C9) deemed the most significant. Subsequently, the CoCoSo algorithm facilitated the ranking of insurance service providers, with alternative A6 emerging as the superior choice. The research undertakes sensitivity analysis, confirming the stability of the model across various scenarios. Notably, alternative A6 consistently demonstrates superior performance, reaffirming the reliability of the decision-making process. The study's conclusion emphasizes the efficacy of the joint SF-AHP and CoCoSo approach in facilitating informed health insurance policy selection, considering multiple criteria and their interdependencies. Practical implications of the research extend to individuals, insurance companies, and policymakers. Individuals benefit from making more informed choices aligned with their healthcare needs and financial constraints. Insurance companies can tailor policies to customer preferences, enhancing competitiveness and customer satisfaction. Policymakers gain insights to inform regulatory decisions, promoting fair practices and consumer protection in the insurance market.
This study underscores the significance of a structured approach in navigating the intricate health insurance landscape, offering practical insights for stakeholders and laying a foundation for future research advancements.
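The CoCoSo ranking step can be sketched in plain Python. This is a crisp (non-fuzzy) textbook CoCoSo over benefit-type criteria with min-max normalization; the decision matrix and weights below are hypothetical, whereas the paper derives weights via SF-AHP and works with spherical fuzzy assessments.

```python
def cocoso_rank(matrix, weights, lam=0.5):
    """Crisp CoCoSo: returns alternative indices ranked best-first."""
    cols = list(zip(*matrix))
    lo, hi = [min(c) for c in cols], [max(c) for c in cols]
    norm = [[(v - l) / (h - l) if h != l else 0.0
             for v, l, h in zip(row, lo, hi)] for row in matrix]

    S = [sum(w * v for w, v in zip(weights, row)) for row in norm]   # weighted sum
    P = [sum(v ** w for w, v in zip(weights, row)) for row in norm]  # weighted power

    # Three appraisal scores, then their aggregation.
    ka = [(s + p) / sum(s2 + p2 for s2, p2 in zip(S, P)) for s, p in zip(S, P)]
    kb = [s / min(S) + p / min(P) for s, p in zip(S, P)]
    kc = [(lam * s + (1 - lam) * p) / (lam * max(S) + (1 - lam) * max(P))
          for s, p in zip(S, P)]
    k = [(a * b * c) ** (1 / 3) + (a + b + c) / 3
         for a, b, c in zip(ka, kb, kc)]
    return sorted(range(len(matrix)), key=lambda i: -k[i])
```

With SF-AHP-derived weights plugged in for `weights`, the top-ranked index corresponds to the preferred provider (A6 in the paper's analysis).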
Emotions are pivotal in the learning process, highlighting the importance of identifying students' emotional states within educational settings. While neural network models, particularly those rooted in deep learning, have demonstrated remarkable accuracy in detecting primary emotions like happiness, sadness, fear, disgust, and anger from facial expressions in videos, these emotions occur infrequently in learning environments. Conversely, cognitive emotions such as engagement, confusion, frustration, and boredom are significantly more prevalent, occurring five times more frequently than basic emotions. However, unlike basic emotions, which are relatively distinct, cognitive emotions present subtler distinctions, necessitating more sophisticated models for accurate recognition. The proposed work presents an efficient Facial Expression Recognition (FER) model for monitoring student engagement in a learning environment by considering facial expressions of boredom, frustration, confusion, and engagement. The proposed methodology includes certain pre-processing steps followed by facial expression recognition based on an EfficientNet-B3 CNN, in which the learning parameters are optimized using the Circle-Inspired Optimization Algorithm (CIOA). Finally, the post-processing stage estimates the frame-wise group engagement level (GEL) of students based on the expression labels. The acquired results show that the suggested EfficientNet-B3 CNN-CIOA based FER model provides promising results, with accuracy of 99.5%, precision of 99.2%, recall of 99.5%, and F1-score of 99.6%, when compared with some state-of-the-art facial expression recognition approaches. The computational complexity of the suggested approach is also considerably lower than that of the compared existing approaches.
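The post-processing idea of aggregating per-face expression labels into a frame-wise group engagement level can be sketched as follows. This is a minimal illustration under stated assumptions: the label-to-score mapping and the averaging scheme are hypothetical, not taken from the paper, which does not specify its GEL formula in the abstract.

```python
# Assumed mapping from cognitive-emotion label to an engagement score in [0, 1].
ENGAGEMENT_SCORE = {"engagement": 1.0, "confusion": 0.5,
                    "frustration": 0.25, "boredom": 0.0}

def frame_gel(labels):
    """Group engagement level of one frame = mean score over detected faces."""
    return sum(ENGAGEMENT_SCORE[label] for label in labels) / len(labels)

def video_gel(frames):
    """Session-level engagement = mean of the frame-wise GEL values."""
    return sum(frame_gel(frame) for frame in frames) / len(frames)
```

In the full pipeline, `labels` would come from the CNN's per-face predictions on each video frame.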
The study of Scene Text Detection and Recognition has exposed significant challenges that text recognition faces, such as detecting blurred text. This study proposes a comparative model for detecting blurred text in wild scenes using independent component analysis (ICA) and an enhanced genetic algorithm (E-GA) with support vector machine (SVM) and k-nearest neighbors (KNN) as classifiers. The proposed model aims to improve the accuracy of blurred text detection in challenging environments with complex backgrounds, noise, and illumination variations. The proposed model consists of three main stages: preprocessing, feature extraction, and classification. In the preprocessing stage, the input image is first processed to remove noise and enhance edges using a median filter and a Sobel filter, respectively. Then, the blurred text regions are extracted using the Laplacian of Gaussian (LoG) filter. In the feature extraction stage, ICA is used to extract independent components from the blurred text regions. The extracted components are then fed into an E-GA-based feature selection algorithm to select the most discriminative features. The E-GA fine-tunes the selection operations of the traditional GA using a bird-inspired approach. The selected features are then normalized and fed into the SVM and KNN classifiers. Experimental results on a benchmark dataset (ICDAR 2019 LSVT) show that the model outperforms state-of-the-art methods in terms of detection accuracy, precision, recall, and F1-score. The proposed model achieves an overall accuracy of 95.13% for SVM and 88.69% for KNN, significantly higher than existing methods, which reach 93% for SVM. In conclusion, the proposed model provides a promising approach for detecting blurred text in wild scenes.
The combination of ICA, E-GA, and SVM/KNN classifiers enhances the robustness and accuracy of the detection system, which can be beneficial for a wide range of applications, such as text recognition, document analysis, and security systems.
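The final normalization-and-classification stage can be sketched in standard-library Python. This is an illustrative sketch only: the 2-D feature vectors below are hypothetical stand-ins for the ICA components selected by E-GA, and the KNN here is a plain Euclidean majority-vote classifier, not the paper's tuned implementation.

```python
import math
from collections import Counter

def normalize(rows):
    """Min-max normalize each feature column to [0, 1]."""
    cols = list(zip(*rows))
    lo, hi = [min(c) for c in cols], [max(c) for c in cols]
    return [[(v - l) / (h - l) if h != l else 0.0
             for v, l, h in zip(row, lo, hi)] for row in rows]

def knn_predict(train_x, train_y, x, k=3):
    """Majority label among the k nearest training points (Euclidean)."""
    nearest = sorted(range(len(train_x)),
                     key=lambda i: math.dist(train_x[i], x))[:k]
    return Counter(train_y[i] for i in nearest).most_common(1)[0][0]
```

The SVM branch of the paper replaces `knn_predict` with a margin-based decision function over the same normalized features.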