International Journal of Information Technology and Computer Science (IJITCS)

ISSN: 2074-9007 (Print)

ISSN: 2074-9015 (Online)

DOI: https://doi.org/10.5815/ijitcs

Website: https://www.mecs-press.org/ijitcs

Published By: MECS Press

Frequency: 6 issues per year

Number(s) Available: 139


IJITCS is committed to bridging the theory and practice of information technology and computer science. From innovative ideas to specific algorithms and full system implementations, IJITCS publishes original, peer-reviewed, high-quality articles in the areas of information technology and computer science. IJITCS is a well-indexed scholarly journal and an indispensable reference for people working at the cutting edge of information technology and computer science applications.

IJITCS has been abstracted or indexed by several world-class databases: Scopus, Google Scholar, Microsoft Academic Search, CrossRef, Baidu Wenku, IndexCopernicus, IET Inspec, EBSCO, VINITI, JournalSeek, ULRICH's Periodicals Directory, WorldCat, Scirus, Academic Journals Database, Stanford University Libraries, Cornell University Library, UniSA Library, CNKI Scholar, J-Gate, ZDB, BASE, OhioLINK, iThenticate, Open Access Articles, Open Science Directory, National Science Library of Chinese Academy of Sciences, The HKU Scholars Hub, among others.

Latest Issue
Most Viewed
Most Downloaded

IJITCS Vol. 17, No. 5, Oct. 2025

REGULAR PAPERS

A ViT-based Model for Detecting Kidney Stones in Coronal CT Images

By A. Cong Tran Huynh Vo-Thuy

DOI: https://doi.org/10.5815/ijitcs.2025.05.01, Pub. Date: 8 Oct. 2025

Detecting kidney stones in coronal CT images remains challenging due to the small size of stones, anatomical complexity, and noise from surrounding objects. To address these challenges, we propose a deep learning architecture that augments a Vision Transformer (ViT) with a pre-processing module. This module integrates CSPDarknet for efficient feature extraction, a Feature Pyramid Network (FPN) and Path Aggregation Network (PANet) for multi-scale context aggregation, and convolutional layers for spatial refinement. Together, these trained components filter irrelevant background regions and highlight kidney-specific features before classification by the ViT, thereby improving accuracy and efficiency. This design leverages the ViT’s global context modeling while mitigating its sensitivity to irrelevant regions and limited data. The proposed model was evaluated on two coronal CT datasets (one public and one private) comprising 6,532 images under six experimental scenarios with varying training and testing conditions. It achieved 99.3% accuracy, 98.7% F1-score, and 99.4% mAP@0.5, higher than both YOLOv10 and the baseline ViT. The model contains 61.2 million parameters and has a computational cost of 37.3 GFLOPs, striking a balance between ViT (86.0M, 17.6 GFLOPs) and YOLOv10 (22.4M, 92.0 GFLOPs). Despite having more parameters than YOLOv10, the model achieved a lower inference time, approximately 0.06 seconds per image on an NVIDIA RTX 3060 GPU. These findings suggest the potential of our approach as a foundation for clinical decision-support tools, pending further validation on heterogeneous and challenging clinical datasets, such as those with small (<2 mm) or low-contrast stones.

Hand Gesture-controlled 2D Virtual Piano with Volume Control

By Vijayan R. Mareeswari V. Sarathi G. Sathya Nikethan R. V.

DOI: https://doi.org/10.5815/ijitcs.2025.05.02, Pub. Date: 8 Oct. 2025

The rise of virtual instruments has revolutionized music production, providing new avenues for creating music without the need for physical instruments. However, these systems rely on costly hardware, such as MIDI controllers, limiting accessibility. As an alternative, 3D gesture-based virtual instruments have been explored to emulate the immersive experience of MIDI controllers. Yet, these approaches introduce accessibility challenges by requiring specialized hardware, such as depth-sensing cameras and motion sensors. In contrast, 2D gesture systems using RGB cameras are more affordable but often lack extended functionalities. To address these challenges, this study presents a 2D virtual piano system that utilizes hand gesture recognition. The system enables accurate gesture-based control, real-time volume adjustments, control over multiple octaves and instruments, and automatic sheet music generation. OpenCV, an open-source computer vision library, and Google’s MediaPipe are employed for real-time hand tracking. The extracted hand landmark coordinates are normalized based on the wrist and scaled for consistent performance across various RGB camera setups. A bidirectional long short-term memory (Bi-LSTM) network is used to evaluate the approach. Experimental results show 95% accuracy on a public Kaggle dynamic gesture dataset and 97% on a custom-designed dataset for virtual piano gestures. Future work will focus on integrating the system with Digital Audio Workstations (DAWs), adding advanced musical features, and improving scalability for multiple-player use.
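The wrist-relative landmark normalization described above can be sketched in a few lines. The listing below is an illustrative sketch, assuming 21 (x, y) hand landmarks in MediaPipe's ordering (wrist at index 0); using the wrist-to-middle-finger-MCP distance (index 9) as the scale reference is an assumption for illustration, not a detail taken from the abstract.

```python
# Sketch of wrist-relative landmark normalization for gesture features.
# Assumes 21 (x, y) landmarks in MediaPipe ordering: index 0 = wrist,
# index 9 = middle-finger MCP (scale reference; illustrative choice).
import math

def normalize_landmarks(landmarks):
    """Translate landmarks so the wrist is the origin, then scale so the
    wrist-to-middle-MCP distance is 1, for camera-independent features."""
    wx, wy = landmarks[0]
    shifted = [(x - wx, y - wy) for x, y in landmarks]
    sx, sy = shifted[9]
    scale = math.hypot(sx, sy) or 1.0   # guard against a degenerate hand pose
    return [(x / scale, y / scale) for x, y in shifted]

# Toy landmark set: wrist at (0.5, 0.5), fingers spread along one row.
lms = [(0.5, 0.5)] + [(0.5 + 0.01 * i, 0.4) for i in range(1, 21)]
norm = normalize_landmarks(lms)
```

Normalizing this way makes the downstream Bi-LSTM input invariant to where the hand sits in the frame and to camera distance.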

Application of Multi-Attribute Utility Theory in a Decision Support System for Selecting the Best Budget Hotels in Samarinda

By Anik Hanifatul Azizah Heny Pratiwi Reza Andrea Achmad Afandi Sri Rakhmawati Dewi Safitriani Nurhasanah Nurhasanah

DOI: https://doi.org/10.5815/ijitcs.2025.05.03, Pub. Date: 8 Oct. 2025

This study addresses the challenge faced by tourists, companies, travel agents, and tourism agencies in selecting the ideal hotel in Samarinda, given the variety of available options. The city boasts numerous hotels with differing facilities, room types, rates, and locations, which can complicate decision-making without adequate information. To provide a solution, this research introduces a Decision Support System (DSS) that employs the Multi-Attribute Utility Theory (MAUT) method for hotel assessment. By evaluating hotels based on key attributes like price, amenities, service quality, and location, the system offers a comprehensive, objective approach to determining the best affordable hotels. The study contributes significantly to the hospitality sector by presenting a practical tool that simplifies the hotel selection process and ensures that choices align with the preferences of the visitors.
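A minimal sketch of MAUT scoring follows, using assumed attributes (price, amenities, service) and illustrative weights; the study's actual criteria weights and hotel data are not reproduced here.

```python
# Minimal MAUT sketch: normalize each attribute to a [0, 1] utility,
# invert cost attributes, and rank by the weighted sum.
def maut_scores(alternatives, weights, benefit):
    """alternatives: {name: [attribute values]}; weights sum to 1;
    benefit[i] is True when larger is better (e.g. service quality),
    False when smaller is better (e.g. price)."""
    cols = list(zip(*alternatives.values()))
    utilities = {}
    for name, vals in alternatives.items():
        u = 0.0
        for i, v in enumerate(vals):
            lo, hi = min(cols[i]), max(cols[i])
            x = 0.5 if hi == lo else (v - lo) / (hi - lo)   # min-max utility
            u += weights[i] * (x if benefit[i] else 1.0 - x)
        utilities[name] = u
    return utilities

# Illustrative data: [price, amenities score, service score].
hotels = {"Hotel A": [350, 8, 7], "Hotel B": [500, 9, 9], "Hotel C": [300, 6, 8]}
scores = maut_scores(hotels, weights=[0.5, 0.3, 0.2], benefit=[False, True, True])
best = max(scores, key=scores.get)
```

With price weighted most heavily, the cheapest adequate option wins, which matches the budget-hotel focus of the DSS.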

Autism Spectrum Disorder Equipped with Convolutional-cum-visual Attention Mechanism

By Ayesha Shaik Lavish R. Jain Balasundaram A.

DOI: https://doi.org/10.5815/ijitcs.2025.05.04, Pub. Date: 8 Oct. 2025

This research work aims to utilize deep learning techniques to identify autism traits in children based on their facial features. By combining traditional convolutional approaches with attention layers, the study seeks to enhance interpretability and accuracy in identifying autism spectrum disorder (ASD) traits. The dataset includes diverse facial images of children diagnosed with ASD and neuro-typical children, ensuring comprehensive representation. Preprocessing techniques standardize and enhance image quality, mitigating biases. Integration of attention layers within the convolutional neural network (CNN) architecture focuses the model on crucial facial features, improving feature extraction and classification accuracy. This approach also enhances model interpretability through eXplainable AI (XAI) techniques. Model training involves optimization and validation processes, employing hyperparameter tuning and cross-validation for robustness. The combined model yielded close to 95% accuracy, outperforming existing models in terms of complexity-to-accuracy ratio.
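As a toy illustration of the attention idea (not the paper's actual CNN layers), softmax-normalized scores can re-weight a feature vector so that salient features dominate the pooled representation:

```python
# Toy attention-style weighting: scores are softmax-normalized and used
# to pool a feature vector, so high-scoring features dominate the output.
import math

def softmax(xs):
    m = max(xs)                          # subtract max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention_pool(features, scores):
    """Weighted sum of features under softmax(scores)."""
    return sum(w * f for w, f in zip(softmax(scores), features))

weights = softmax([10.0, 0.0, 0.0])          # nearly all mass on feature 0
pooled = attention_pool([5.0, 1.0, 1.0], [10.0, 0.0, 0.0])
```

The same principle, applied over spatial feature maps, is what lets attention layers emphasize diagnostically relevant facial regions.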

Convolutional Neural Network-based Stacking Technique for Brain Tumor Classification using Red Panda Optimization

By Blessa Binolin Pepsi M. Anandhi H. Karunyaharini S. Visali N.

DOI: https://doi.org/10.5815/ijitcs.2025.05.05, Pub. Date: 8 Oct. 2025

In the healthcare field, the detection of critical diseases such as brain tumors is essential. Techniques like the traditional support vector machine have commonly been used for brain tumor classification. However, processing and detecting brain tumors requires achieving high accuracy with shorter detection time and reduced complexity. To accomplish this, efficient feature selection is necessary, which can be based on various factors. A convolutional neural network-based stacking technique is introduced for effective brain tumor classification and prediction using Red Panda optimization. By efficiently extracting spatial information from medical images, the convolutional neural network used in stacking enhances the capacity of our model for abnormality detection and classification in brain tumor prediction. Red Panda optimization is a biologically inspired stochastic optimization algorithm used for the effective selection of significant features. This technique improves prediction accuracy in a shorter period and reduces complexity by selecting significant features from a huge amount of data through effective optimization. The technique is tested on multiple standard datasets to assess our model’s performance, and is compared to other optimization models, such as mutual information-based optimization and traditional particle swarm optimization, for further validation. Our model improved detection accuracy to 98% with a better reduction in detection time and complexity.
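The abstract does not detail Red Panda optimization's update rules, so the sketch below substitutes a generic stochastic single-flip search over feature subsets to illustrate the optimization-driven feature-selection loop; the scoring function is a toy stand-in, not a real model evaluation.

```python
# Generic stochastic feature-subset search, standing in for the Red Panda
# optimizer (whose update rules are not given in the abstract).
import random

def stochastic_feature_select(n_features, score_fn, iters=200, seed=0):
    rng = random.Random(seed)
    best = [rng.random() < 0.5 for _ in range(n_features)]
    best_score = score_fn(best)
    for _ in range(iters):
        cand = best[:]
        cand[rng.randrange(n_features)] ^= True   # flip one feature bit
        s = score_fn(cand)
        if s > best_score:                        # keep improvements only
            best, best_score = cand, s
    return best, best_score

# Toy score: reward informative features {0, 2, 4}, penalize subset size.
informative = {0, 2, 4}
def score(mask):
    return sum(i in informative for i, m in enumerate(mask) if m) - 0.1 * sum(mask)

mask, s = stochastic_feature_select(8, score)
```

In the real pipeline, `score_fn` would be the validation performance of the stacked CNN classifier on the candidate feature subset.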

A Comparative Study of Statistical (SARIMA) Vis-À-Vis Some Traditional Machine-Learning and Deep-Learning Techniques to Forecast Malaria Incidences in Kolkata of India

By Krishnendra Sankar Ganguly Krishna Sankar Ganguly Ambar Dutta

DOI: https://doi.org/10.5815/ijitcs.2025.05.06, Pub. Date: 8 Oct. 2025

To improve the accuracy of time-series forecasting in the computational epidemiology domain of public health, and in particular to generate accurate alerts in a Real-time Outbreak and Disease Surveillance (RODS) system for predicting malaria incidence, this research studies an interdisciplinary data-analysis approach combining statistical, machine-learning (ML), and deep-learning (DL) techniques. Two non-linear deep-learning techniques, Long Short-Term Memory (LSTM), a subclass of the Recurrent Neural Network (RNN), and the Gated Recurrent Unit (GRU), and two non-linear machine-learning techniques, the Random Forest Regressor and the non-linear Support Vector Machine Regressor, are applied and compared against the traditional linear statistical SARIMA model on a longitudinal dataset of malaria incidences. While SARIMA and other traditional autoregressive (AR) models require fewer parameters but suffer from limited training and limited predictive power, the ML and DL models show profound and persistent performance improvements, handle noise and missing values better, and support multi-step forecasts. Moreover, the over-fitting issue can be combated by introducing densely connected residual links in the ML/DL networks.
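For reference, the linear SARIMA baseline mentioned above has the standard textbook form (not reproduced from the paper):

```latex
% SARIMA(p,d,q)(P,D,Q)_s, with B the backshift operator (B y_t = y_{t-1})
\phi_p(B)\,\Phi_P(B^s)\,(1-B)^d\,(1-B^s)^D\, y_t
  \;=\; \theta_q(B)\,\Theta_Q(B^s)\,\varepsilon_t
```

Here \(\phi_p\) and \(\Phi_P\) are the non-seasonal and seasonal autoregressive polynomials, \(\theta_q\) and \(\Theta_Q\) the corresponding moving-average polynomials, \(s\) the seasonal period (e.g. 12 for monthly incidence counts), and \(\varepsilon_t\) white noise. The small, fixed set of polynomial coefficients is the "smaller number of parameters" the abstract contrasts with ML/DL models.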

Data Optimization through Compression Methods Using Information Technology

By Igor V. Malyk Yevhen Kyrychenko Mykola Gorbatenko Taras Lukashiv

DOI: https://doi.org/10.5815/ijitcs.2025.05.07, Pub. Date: 8 Oct. 2025

Efficient comparison of heterogeneous tabular datasets is difficult when sources are unknown or weakly documented. We address this problem by introducing a unified, type-aware framework that builds compact data representations (CDRs), concise summaries sufficient for downstream analysis, and a corresponding similarity graph (and tree) over a data corpus. Our novelty is threefold: (i) a principled vocabulary and procedure for constructing CDRs per variable type (factor, time, numeric, string), (ii) a weighted, type-specific similarity metric we call Data Information Structural Similarity (DISS) that aggregates distances across heterogeneous summaries, and (iii) an end-to-end, cloud-scalable realization that supports large corpora. Methodologically, factor variables are summarized by frequency tables; time variables by fixed-bin histograms; numeric variables by moment vectors (up to the fourth order); and string variables by TF–IDF vectors. Pairwise similarities use Hellinger, Wasserstein (p=1), total variation, and L1/L2 distances, with MAE/MAPE for numeric summaries; the DISS score combines these via learned or user-set weights to form an adjacency graph whose minimum-spanning tree yields a similarity tree. In experiments on multi-source CSVs, the approach enables accurate retrieval of closest datasets and robust corpus-level structuring while reducing storage and I/O. This contributes a reproducible pathway from raw tables to a similarity tree, clarifying terminology and providing algorithms that practitioners can deploy at scale.
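As one concrete ingredient of the DISS metric, the Hellinger distance between two frequency-table summaries can be computed as below. This is an illustrative sketch; the paper's weighting scheme and the other distances (Wasserstein, total variation, L1/L2) are not reproduced.

```python
# Hellinger distance between two discrete frequency summaries (e.g. the
# frequency tables used as CDRs for factor variables). Counts over the
# same category set; normalized to probabilities internally.
import math

def hellinger(p, q):
    sp, sq = sum(p), sum(q)
    return math.sqrt(0.5 * sum(
        (math.sqrt(a / sp) - math.sqrt(b / sq)) ** 2
        for a, b in zip(p, q)))
```

The distance is 0 for identical distributions and 1 for disjoint support, which makes it convenient to combine with the other per-type distances under user-set weights.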

Design and Implementation of a Web-based Document Management System

By Samuel M. Alade

DOI: https://doi.org/10.5815/ijitcs.2023.02.04, Pub. Date: 8 Apr. 2023

One area that has seen rapid growth and differing perspectives from many developers in recent years is document management. The field has advanced to the point where developers have made it simple for anyone to access documents in a matter of seconds. It is impossible to overstate the importance of document management systems as a necessity in an organization's workplace environment. Interviews, scenario creation using participants' and stakeholders' first-hand accounts, and examination of current procedures and structures were all used to collect data. Development followed the Object-Oriented Hypermedia Design Methodology. With the help of Unified Modeling Language (UML) tools, a web-based electronic document management system (WBEDMS) was created. Its database was created using MySQL, and the system was constructed using web technologies including XAMPP, HTML, and the PHP programming language. The system evaluation showed a successful outcome: after using the system, respondents' satisfaction was 96.60%, indicating that the document system was regarded as adequate and good enough to meet the specified requirements when used by secretaries and departmental personnel. Results also showed that the developed system yielded an accuracy of 95% and usability of 99.20%. The report concluded that the proposed electronic document management system would improve user satisfaction, boost productivity, and guarantee time and data efficiency. It follows that well-known document management systems assist in holding and managing a substantial portion of an organization's knowledge assets, including documents and other associated items.

Cardiotocography Data Analysis to Predict Fetal Health Risks with Tree-Based Ensemble Learning

By Pankaj Bhowmik Pulak Chandra Bhowmik U. A. Md. Ehsan Ali Md. Sohrawordi

DOI: https://doi.org/10.5815/ijitcs.2021.05.03, Pub. Date: 8 Oct. 2021

A sizeable number of women face difficulties during pregnancy, which can eventually lead the fetus towards serious health problems. However, early detection of these risks can save the invaluable lives of both infants and mothers. Cardiotocography (CTG) data provides sophisticated information by monitoring the fetal heart rate signal, and is used to predict potential risks to fetal wellbeing and to make clinical conclusions. This paper proposes to analyze antepartum CTG data (available on the UCI Machine Learning Repository) and develop an efficient tree-based ensemble learning (EL) classifier model to predict fetal health status. In this study, EL follows the Stacking approach, and a concise overview of this approach is discussed and developed accordingly. The study also applies distinct machine learning algorithms to the CTG dataset and determines their performance. The Stacking EL technique involves four tree-based machine learning algorithms as base learners: the Random Forest, Decision Tree, Extra Trees, and Deep Forest classifiers. The CTG dataset contains 21 features, but only the 10 most important features are selected with the Chi-square method for this experiment, and the features are then normalized with Min-Max scaling. Following that, Grid Search is applied to tune the hyperparameters of the base algorithms, and 10-fold cross-validation is performed to select the meta learner of the EL classifier model. A comparative assessment between the individual base learners and the EL classifier model shows the EL classifier's superiority in fetal health risk prediction, securing an accuracy of about 96.05%. Eventually, this study concludes that the Stacking EL approach can be a substantial paradigm in machine learning studies to improve model accuracy and reduce the error rate.
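Two of the preprocessing steps named above, Chi-square feature scoring and Min-Max scaling, can be sketched in plain Python. The data shown is illustrative; the study's exact tooling is not specified here.

```python
# Sketch of two preprocessing steps: a Pearson chi-square statistic for
# ranking a (discretized) feature against the class label, and Min-Max
# scaling of the selected features.
from collections import Counter

def chi2_score(feature, labels):
    """Chi-square statistic of a categorical feature vs. the label:
    larger values indicate stronger association, hence higher importance."""
    n = len(labels)
    f_counts, l_counts = Counter(feature), Counter(labels)
    joint = Counter(zip(feature, labels))
    stat = 0.0
    for f in f_counts:
        for l in l_counts:
            expected = f_counts[f] * l_counts[l] / n
            stat += (joint[(f, l)] - expected) ** 2 / expected
    return stat

def min_max(values):
    """Scale a numeric feature column to [0, 1]."""
    lo, hi = min(values), max(values)
    return [0.0 if hi == lo else (v - lo) / (hi - lo) for v in values]
```

Ranking all 21 CTG features by `chi2_score` and keeping the top 10, then applying `min_max` column-wise, mirrors the selection-then-normalization order described in the abstract.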

Advanced Applications of Neural Networks and Artificial Intelligence: A Review

By Koushal Kumar Gour Sundar Mitra Thakur

DOI: https://doi.org/10.5815/ijitcs.2012.06.08, Pub. Date: 8 Jun. 2012

Artificial Neural Networks are a branch of artificial intelligence and have been accepted as a new computing technology in computer science. This paper reviews the field of artificial intelligence, focusing on recent applications that use Artificial Neural Networks (ANNs) and Artificial Intelligence (AI). It also considers the integration of neural networks with other computing methods, such as fuzzy logic, to enhance the interpretation of data. Artificial Neural Networks are considered a major soft-computing technology and have been extensively studied and applied during the last two decades. The most common applications where neural networks are used for problem solving are pattern recognition, data analysis, control, and clustering. Artificial Neural Networks offer abundant features, including high processing speeds and the ability to learn the solution to a problem from a set of examples. The main aim of this paper is to explore recent applications of neural networks and artificial intelligence, provide an overview of the field, identify where AI and ANNs are used, and discuss the critical role they play in different areas.

Performance of Machine Learning Algorithms with Different K Values in K-fold Cross-Validation

By Isaac Kofi Nti Owusu Nyarko-Boateng Justice Aning

DOI: https://doi.org/10.5815/ijitcs.2021.06.05, Pub. Date: 8 Dec. 2021

The numerical value of k in a k-fold cross-validation training technique of machine learning predictive models is an essential element that impacts the model’s performance. A right choice of k results in better accuracy, while a poorly chosen value might hurt the model’s performance. In the literature, the most commonly used values of k are five (5) or ten (10), as these two values are believed to give test error rate estimates that suffer neither from extremely high bias nor very high variance; however, there is no formal rule. To the best of our knowledge, few experimental studies have attempted to investigate the effect of diverse k values in training different machine learning models. This paper empirically analyses the prevalence and effect of distinct k values (3, 5, 7, 10, 15 and 20) on the validation performance of four well-known machine learning algorithms (Gradient Boosting Machine (GBM), Logistic Regression (LR), Decision Tree (DT) and K-Nearest Neighbours (KNN)). It was observed that the value of k and model validation performance differ from one machine-learning algorithm to another for the same classification task. However, our empirical results suggest that k = 7 offers a slight increase in validation accuracy and area under the curve with lower computational complexity than k = 10 across most of the algorithms. We discuss the study outcomes in detail and outline some guidelines for beginners in the machine learning field on selecting the best k value and machine learning algorithm for a given task.
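What varying k actually changes can be made concrete with a plain index-splitting sketch: larger k means smaller validation folds, more training data per fold, and more model fits. This is illustrative code, not the study's implementation.

```python
# Pure-Python k-fold index splitting. Fold sizes differ by at most one,
# so every sample appears in exactly one validation fold.
def k_fold_indices(n, k):
    sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    start = 0
    for size in sizes:
        val = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n))
        yield train, val
        start += size

# k = 7 on 10 samples: three folds of 2 and four folds of 1.
folds = list(k_fold_indices(10, 7))
```

Each of the k fits trains on roughly (k-1)/k of the data, which is why moving from k = 10 to k = 7 reduces compute while only slightly shrinking the training portion per fold.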

PDF Marksheet Generator

By Srushti Shimpi Sanket Mandare Tyagraj Sonawane Aman Trivedi K. T. V. Reddy

DOI: https://doi.org/10.5815/ijitcs.2014.11.05, Pub. Date: 8 Oct. 2014

The Marksheet Generator is a flexible system for generating student progress mark sheets. It is based on database technology and the credit-based grading system (CBGS), and is targeted at small enterprises, schools, colleges, and universities. It can produce sophisticated, ready-to-print mark sheets. Development of the mark sheet and gadget sheet focuses on describing tables with columns/rows and sub-columns/sub-rows, rules for selecting and summarizing data for a report or for a particular table, column, or row, and formatting the report in the destination document. The adjustable data interface supports popular data sources (SQL Server) and report destinations (PDF files), and the gadget sheet keeps track of student information in a properly listed manner. The project aims at a mark sheet generation system that universities can use to automate the distribution of digitally verifiable student result mark sheets: the system accesses students' results from the institute's student database and generates the mark sheets in Portable Document Format, which is tamper-proof and preserves the authenticity of the document. Authenticity of the document can also be verified easily.

Machine Learning based Wildfire Area Estimation Leveraging Weather Forecast Data

By Saket Sultania Rohit Sonawane Prashasti Kanikar

DOI: https://doi.org/10.5815/ijitcs.2025.01.01, Pub. Date: 8 Feb. 2025

Wildfires are increasingly destructive natural disasters, annually consuming millions of acres of forests and vegetation globally. The complex interactions among fuels, topography, and meteorological factors, including temperature, precipitation, humidity, and wind, govern wildfire ignition and spread. This research presents a framework that integrates satellite remote sensing and numerical weather prediction model data to refine estimations of final wildfire sizes. A key strength of our approach is the use of comprehensive geospatial datasets from the IBM PAIRS platform, which provides a robust foundation for our predictions. We implement machine learning techniques through the AutoGluon automated machine learning toolkit to determine the optimal model for burned area prediction. AutoGluon automates the process of feature engineering, model selection, and hyperparameter tuning, evaluating a diverse range of algorithms, including neural networks, gradient boosting, and ensemble methods, to identify the most effective predictor for wildfire area estimation. The system features an intuitive interface developed in Gradio, which allows the incorporation of key input parameters, such as vegetation indices and weather variables, to customize wildfire projections. Interactive Plotly visualizations categorize the predicted fire severity levels across regions. This study demonstrates the value of synergizing Earth observations from spaceborne instruments and forecast data from numerical models to strengthen real-time wildfire monitoring and postfire impact assessment capabilities for improved disaster management. We optimize an ensemble model by comparing various algorithms to minimize the root mean squared error between the predicted and actual burned areas, achieving improved predictive performance over any individual model. 
The final metric reveals that our optimized WeightedEnsemble model achieved a root mean squared error (RMSE) of 1.564 km² on the test data, indicating an average deviation of approximately 1.2 km² in the predictions.
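The weighted-ensemble idea and the RMSE metric above can be sketched as follows; the model outputs and weights are illustrative, not the paper's models or data.

```python
# Sketch of a weighted ensemble: combine several models' predicted burned
# areas with fixed weights and score the combination with RMSE.
import math

def rmse(y_true, y_pred):
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
                     / len(y_true))

def weighted_ensemble(predictions, weights):
    """predictions: list of per-model prediction lists; weights sum to 1."""
    return [sum(w * p[i] for w, p in zip(weights, predictions))
            for i in range(len(predictions[0]))]

# Toy example: two models with symmetric, opposite-signed errors.
y = [2.0, 5.0, 1.0]
m1, m2 = [2.5, 4.0, 1.5], [1.5, 6.0, 0.5]
ens = weighted_ensemble([m1, m2], [0.5, 0.5])
```

When individual models err in different directions, the weighted average cancels part of the error, which is why the ensemble can beat any single model on RMSE; AutoGluon searches for such weights automatically.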

Accident Response Time Enhancement Using Drones: A Case Study in Najm for Insurance Services

By Salma M. Elhag Ghadi H. Shaheen Fatmah H. Alahmadi

DOI: https://doi.org/10.5815/ijitcs.2023.06.01, Pub. Date: 8 Dec. 2023

Traffic accidents are one of the main causes of mortality. By 2020, traffic accidents had risen to become the third most common expected cause of death worldwide. In Saudi Arabia, there are more than 460,000 car accidents every year, and the number is rising, especially during busy periods such as Ramadan and the Hajj season. The Saudi government is making the required efforts to lower the nation's car accident rate. This paper suggests a business process improvement for car accident reports handled by Najm, in accordance with Saudi Vision 2030. Given the success of drones in many fields (e.g., entertainment, monitoring, and photography), the paper proposes using drones to respond to accident reports, which will help expedite the process and minimize turnaround time; in addition, a drone provides quick accident response and records the scene with accurate results. The Business Process Management (BPM) methodology is followed in this proposal. The model was validated by comparing before-and-after simulation results, which show a significant performance impact of about 40% in turnaround time. Therefore, using drones can enhance the accident-response process with Najm in Saudi Arabia.

A Systematic Review of Natural Language Processing in Healthcare

By Olaronke G. Iroju Janet O. Olaleke

DOI: https://doi.org/10.5815/ijitcs.2015.08.07, Pub. Date: 8 Jul. 2015

The healthcare system is a knowledge-driven industry that contains vast and growing volumes of narrative information obtained from discharge summaries/reports, physicians' case notes, and pathologists' as well as radiologists' reports. This information is usually stored in unstructured and non-standardized formats in electronic healthcare systems, which makes it difficult for the systems to understand the information content of the narratives. Thus, access to valuable and meaningful healthcare information for decision making is a challenge. Nevertheless, Natural Language Processing (NLP) techniques have been used to structure narrative information in healthcare: they can capture unstructured healthcare information, analyze its grammatical structure, determine the meaning of the information, and translate it so that it can be easily understood by electronic healthcare systems. Consequently, NLP techniques reduce cost as well as improve the quality of healthcare. It is against this background that this paper reviews the NLP techniques used in healthcare, their applications, and their limitations.

Markov Models Applications in Natural Language Processing: A Survey

By Talal Almutiri Farrukh Nadeem

DOI: https://doi.org/10.5815/ijitcs.2022.02.01, Pub. Date: 8 Apr. 2022

Markov models are one of the most widely used techniques in machine learning for processing natural language. Markov Chains and Hidden Markov Models are stochastic techniques for modeling dynamic systems in which the future state depends on the current state. The Markov chain, which generates a sequence of words to create a complete sentence, is frequently used in natural language generation. The hidden Markov model is employed in named-entity recognition and part-of-speech tagging, which aims to predict hidden tags from observed words. This paper reviews Markov models' use in three applications of natural language processing (NLP): natural language generation, named-entity recognition, and part-of-speech tagging. Nowadays, researchers try to reduce dependence on lexicons or annotation tasks in NLP. In this paper, we focus on Markov models as a stochastic approach to NLP. A literature review was conducted to summarize research attempts, focusing on the methods and techniques that used Markov models for NLP, together with their advantages and disadvantages. Most NLP research studies apply supervised models, improved by the use of Markov models to decrease the dependency on annotation tasks; some others employ unsupervised solutions to reduce dependence on lexicons or labeled datasets.
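A word-level Markov chain for language generation, as described above, can be sketched in a few lines; the corpus and fixed seed below are toy choices for reproducibility.

```python
# Minimal word-level Markov chain: transition lists record, for each word,
# the words that followed it in the corpus (a multiset encoding P(next | current)).
import random
from collections import defaultdict

def build_chain(words):
    chain = defaultdict(list)
    for cur, nxt in zip(words, words[1:]):
        chain[cur].append(nxt)
    return chain

def generate(chain, start, length, seed=0):
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        nxts = chain.get(out[-1])
        if not nxts:            # dead end: word never had a successor
            break
        out.append(rng.choice(nxts))
    return out

corpus = "the cat sat on the mat the cat ran".split()
chain = build_chain(corpus)
sentence = generate(chain, "the", 5)
```

Because repeated successors appear multiple times in each transition list, sampling uniformly from the list reproduces the empirical next-word probabilities without storing them explicitly.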

Cloud Computing: A review of the Concepts and Deployment Models

By Tinankoria Diaby Babak Bashari Rad

DOI: https://doi.org/10.5815/ijitcs.2017.06.07, Pub. Date: 8 Jun. 2017

This paper presents a short selected review of cloud computing, explaining its evolution, history, and definition. Cloud computing is not a brand-new technology, but today it is one of the most rapidly emerging technologies, owing to its powerful potential to change the way data and services are managed. Beyond the evolution, history, and definition of cloud computing, the paper also presents the characteristics, service models, deployment models, and roots of the cloud.

Design and Implementation of a Web-based Document Management System

By Samuel M. Alade

DOI: https://doi.org/10.5815/ijitcs.2023.02.04, Pub. Date: 8 Apr. 2023

One area that has seen rapid growth and differing perspectives from many developers in recent years is document management. This idea has advanced beyond some of the steps where developers have made it simple for anyone to access papers in a matter of seconds. It is impossible to overstate the importance of document management systems as a necessity in the workplace environment of an organization. Interviews, scenario creation using participants' and stakeholders' first-hand accounts, and examination of current procedures and structures were all used to collect data. The development approach followed a software development methodology called Object-Oriented Hypermedia Design Methodology. With the help of Unified Modeling Language (UML) tools, a web-based electronic document management system (WBEDMS) was created. Its database was created using MySQL, and the system was constructed using web technologies including XAMPP, HTML, and PHP Programming language. The results of the system evaluation showed a successful outcome. After using the system that was created, respondents' satisfaction with it was 96.60%. This shows that the document system was regarded as adequate and excellent enough to achieve or meet the specified requirement when users (secretaries and departmental personnel) used it. Result showed that the system developed yielded an accuracy of 95% and usability of 99.20%. The report came to the conclusion that a suggested electronic document management system would improve user happiness, boost productivity, and guarantee time and data efficiency. It follows that well-known document management systems undoubtedly assist in holding and managing a substantial portion of the knowledge assets, which include documents and other associated items, of Organizations.

[...] Read more.
Cardiotocography Data Analysis to Predict Fetal Health Risks with Tree-Based Ensemble Learning

By Pankaj Bhowmik Pulak Chandra Bhowmik U. A. Md. Ehsan Ali Md. Sohrawordi

DOI: https://doi.org/10.5815/ijitcs.2021.05.03, Pub. Date: 8 Oct. 2021

A sizeable number of women face difficulties during pregnancy, which eventually can lead the fetus towards serious health problems. However, early detection of these risks can save both the invaluable life of infants and mothers. Cardiotocography (CTG) data provides sophisticated information by monitoring the heart rate signal of the fetus, is used to predict the potential risks of fetal wellbeing and for making clinical conclusions. This paper proposed to analyze the antepartum CTG data (available on UCI Machine Learning Repository) and develop an efficient tree-based ensemble learning (EL) classifier model to predict fetal health status. In this study, EL considers the Stacking approach, and a concise overview of this approach is discussed and developed accordingly. The study also endeavors to apply distinct machine learning algorithmic techniques on the CTG dataset and determine their performances. The Stacking EL technique, in this paper, involves four tree-based machine learning algorithms, namely, Random Forest classifier, Decision Tree classifier, Extra Trees classifier, and Deep Forest classifier as base learners. The CTG dataset contains 21 features, but only 10 most important features are selected from the dataset with the Chi-square method for this experiment, and then the features are normalized with Min-Max scaling. Following that, Grid Search is applied for tuning the hyperparameters of the base algorithms. Subsequently, 10-folds cross validation is performed to select the meta learner of the EL classifier model. However, a comparative model assessment is made between the individual base learning algorithms and the EL classifier model; and the finding depicts EL classifiers’ superiority in fetal health risks prediction with securing the accuracy of about 96.05%. Eventually, this study concludes that the Stacking EL approach can be a substantial paradigm in machine learning studies to improve models’ accuracy and reduce the error rate.

[...] Read more.
Performance of Machine Learning Algorithms with Different K Values in K-fold Cross-Validation

By Isaac Kofi Nti Owusu Nyarko-Boateng Justice Aning

DOI: https://doi.org/10.5815/ijitcs.2021.06.05, Pub. Date: 8 Dec. 2021

The numerical value of k in the k-fold cross-validation technique used to train machine learning predictive models is an essential element that impacts the model’s performance. A right choice of k results in better accuracy, while a poorly chosen value can degrade the model’s performance. In the literature, the most commonly used values of k are five (5) or ten (10), as these two values are believed to give test error rate estimates that suffer neither from extremely high bias nor from very high variance; however, there is no formal rule. To the best of our knowledge, few experimental studies have investigated the effect of diverse k values on training different machine learning models. This paper empirically analyses the prevalence and effect of distinct k values (3, 5, 7, 10, 15 and 20) on the validation performance of four well-known machine learning algorithms: Gradient Boosting Machine (GBM), Logistic Regression (LR), Decision Tree (DT) and K-Nearest Neighbours (KNN). It was observed that the value of k and the model validation performance differ from one machine learning algorithm to another for the same classification task. However, our empirical results suggest that k = 7 offers a slight increase in validation accuracy and area-under-the-curve measure with lower computational cost than k = 10 across most of the algorithms. We discuss the study outcomes in detail and outline guidelines for beginners in the machine learning field in selecting the best k value and machine learning algorithm for a given task.
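The experiment's core loop can be sketched in a few lines of scikit-learn. This is an illustrative stand-in using synthetic data and a single model, not the paper's full four-algorithm study.

```python
# Compare mean cross-validation accuracy across the paper's k values
# for one model on synthetic stand-in data.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=10, random_state=42)
model = LogisticRegression(max_iter=1000)

scores_by_k = {}
for k in (3, 5, 7, 10, 15, 20):
    scores = cross_val_score(model, X, y, cv=k, scoring="accuracy")
    scores_by_k[k] = scores.mean()
    print(f"k={k:2d}: mean accuracy = {scores.mean():.4f}")
```

Note that larger k means more training folds per evaluation, so the cost of the loop grows roughly linearly with k, which is the computational trade-off the paper weighs against the small accuracy differences.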

[...] Read more.
Machine Learning based Wildfire Area Estimation Leveraging Weather Forecast Data

By Saket Sultania Rohit Sonawane Prashasti Kanikar

DOI: https://doi.org/10.5815/ijitcs.2025.01.01, Pub. Date: 8 Feb. 2025

Wildfires are increasingly destructive natural disasters, annually consuming millions of acres of forests and vegetation globally. The complex interactions among fuels, topography, and meteorological factors, including temperature, precipitation, humidity, and wind, govern wildfire ignition and spread. This research presents a framework that integrates satellite remote sensing and numerical weather prediction model data to refine estimations of final wildfire sizes. A key strength of our approach is the use of comprehensive geospatial datasets from the IBM PAIRS platform, which provides a robust foundation for our predictions. We implement machine learning techniques through the AutoGluon automated machine learning toolkit to determine the optimal model for burned area prediction. AutoGluon automates the process of feature engineering, model selection, and hyperparameter tuning, evaluating a diverse range of algorithms, including neural networks, gradient boosting, and ensemble methods, to identify the most effective predictor for wildfire area estimation. The system features an intuitive interface developed in Gradio, which allows the incorporation of key input parameters, such as vegetation indices and weather variables, to customize wildfire projections. Interactive Plotly visualizations categorize the predicted fire severity levels across regions. This study demonstrates the value of synergizing Earth observations from spaceborne instruments and forecast data from numerical models to strengthen real-time wildfire monitoring and postfire impact assessment capabilities for improved disaster management. We optimize an ensemble model by comparing various algorithms to minimize the root mean squared error between the predicted and actual burned areas, achieving improved predictive performance over any individual model. The final metric reveals that our optimized WeightedEnsemble model achieved a root mean squared error (RMSE) of 1.564 km² on the test data, indicating an average deviation of approximately 1.2 km² in the predictions.
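The weighted-ensemble idea behind AutoGluon's final model can be shown in miniature with scikit-learn. This is a hedged stand-in: synthetic data replaces the burned-area dataset, an equal-weight blend replaces AutoGluon's learned weights, and the model list is illustrative.

```python
# Blend several regressors and compare the blend's RMSE against each
# single model's RMSE (synthetic stand-in for the burned-area data).
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=400, n_features=8, noise=10.0, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

models = [GradientBoostingRegressor(random_state=1),
          RandomForestRegressor(random_state=1),
          Ridge()]
preds = [m.fit(X_tr, y_tr).predict(X_te) for m in models]
rmses = [mean_squared_error(y_te, p) ** 0.5 for p in preds]

# Equal-weight blend: the simplest form of a weighted ensemble.
blend = np.mean(preds, axis=0)
blend_rmse = mean_squared_error(y_te, blend) ** 0.5
print(f"single-model RMSEs: {[round(r, 2) for r in rmses]}")
print(f"blended RMSE:       {blend_rmse:.2f}")
```

By Jensen's inequality, the MSE of an averaged prediction never exceeds the average of the individual MSEs, which is why a well-weighted blend can beat every single model; AutoGluon searches over the weights rather than fixing them equally.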

[...] Read more.
Accident Response Time Enhancement Using Drones: A Case Study in Najm for Insurance Services

By Salma M. Elhag Ghadi H. Shaheen Fatmah H. Alahmadi

DOI: https://doi.org/10.5815/ijitcs.2023.06.01, Pub. Date: 8 Dec. 2023

One of the main causes of mortality is traffic accidents, which were projected to become the third leading cause of death worldwide by 2020. In Saudi Arabia, there are more than 460,000 car accidents every year, and the number is rising, especially during busy periods such as Ramadan and the Hajj season. The government of Saudi Arabia is making efforts to lower the nation's car accident rate. This paper suggests a business process improvement for the car accident reports handled by Najm, in accordance with Saudi Vision 2030. Given the success of drones in many fields (e.g., entertainment, monitoring, and photography), the paper proposes using drones to respond to accident reports, which will help expedite the process and minimize turnaround time. In addition, drones provide a quick accident response and record scenes with accurate results. The Business Process Management (BPM) methodology is followed in this proposal. The model was validated by comparing before-and-after simulation results, which showed a significant performance impact of about 40% on turnaround time. Therefore, using drones can enhance Najm's accident response process in Saudi Arabia.
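A before/after turnaround comparison of the kind the BPM simulation performs can be sketched as follows. All timing figures here are hypothetical placeholders, not Najm's actual process data; the point is only how a roughly 40% reduction falls out of replacing surveyor travel with drone dispatch.

```python
# Toy before/after simulation of accident-report turnaround time.
# The minute values are illustrative assumptions, not measured data.
import random

random.seed(0)

def turnaround(travel_min, report_min, n=10_000):
    """Average turnaround over n simulated accidents with +/-25% jitter."""
    total = 0.0
    for _ in range(n):
        jitter = random.uniform(0.75, 1.25)
        total += (travel_min + report_min) * jitter
    return total / n

before = turnaround(travel_min=35, report_min=20)  # surveyor drives to the scene
after = turnaround(travel_min=12, report_min=20)   # drone dispatched instead
improvement = (before - after) / before
print(f"before: {before:.1f} min, after: {after:.1f} min, "
      f"improvement: {improvement:.0%}")
```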

[...] Read more.
Multi-Factor Authentication for Improved Enterprise Resource Planning Systems Security

By Carolyne Kimani James I. Obuhuma Emily Roche

DOI: https://doi.org/10.5815/ijitcs.2023.03.04, Pub. Date: 8 Jun. 2023

Universities across the globe have increasingly adopted Enterprise Resource Planning (ERP) systems, software that provides integrated, real-time management of processes and transactions. These systems hold large amounts of information and hence require secure authentication. Authentication in this case refers to the process of verifying an entity’s or device’s identity in order to grant it access to specific resources upon request. However, there have been security and privacy concerns around ERP systems, where only the traditional authentication method of a username and password is commonly used. Password-based authentication has weaknesses that can easily be compromised. Cyber-attacks targeting these ERP systems have become common at institutions of higher learning and cannot be underestimated as they evolve with emerging technologies. Some universities worldwide have been victims of cyber-attacks that targeted authentication vulnerabilities, damaging the institutions’ reputations and credibility. This research therefore aimed at establishing the authentication methods used for ERPs in Kenyan universities and their vulnerabilities, and at proposing a solution to improve ERP system authentication. The study developed and validated a multi-factor authentication prototype to improve ERP system security. Multi-factor authentication, which combines several authentication factors such as something the user has, knows, or is, is a state-of-the-art approach being adopted to strengthen systems’ authentication security. This research used an exploratory sequential design that involved a survey of chartered Kenyan universities, where questionnaires were used to collect data that was later analyzed using descriptive and inferential statistics. Stratified, random, and purposive sampling techniques were used to establish the sample size and the target group. The dependent variable for the study was limited to the security rating with respect to the realization of confidentiality, integrity, availability, and usability, while the independent variables were limited to the adequacy of security, authentication mechanisms, infrastructure, information security policies, vulnerabilities, and user training. Correlation and regression analysis established that vulnerabilities, information security policies, and user training have the highest impact on system security. These three variables therefore acted as the basis for the proposed multi-factor authentication framework for improved ERP system security.
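A common "something the user has" factor is a time-based one-time password (TOTP, RFC 6238) generated by an authenticator app. The sketch below shows how such a second factor could be verified after the usual password step; the secret, time step, and drift window are illustrative choices, not the prototype described in the paper.

```python
# Minimal TOTP (RFC 6238) second-factor check using only the standard library.
import base64, hashlib, hmac, struct, time

def totp(secret_b32, for_time, step=30, digits=6):
    """Time-based one-time password (HMAC-SHA1, per RFC 6238)."""
    key = base64.b32decode(secret_b32)
    counter = struct.pack(">Q", for_time // step)       # 8-byte big-endian counter
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                          # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify_second_factor(secret_b32, submitted, now=None):
    """Accept the current 30 s window plus one adjacent window for clock drift."""
    now = int(time.time()) if now is None else now
    return any(hmac.compare_digest(totp(secret_b32, now + drift), submitted)
               for drift in (-30, 0, 30))

secret = base64.b32encode(b"erp-demo-secret!").decode()  # provisioning-time secret
now = int(time.time())
code = totp(secret, now)                                 # what the user's app shows
print("second factor accepted:", verify_second_factor(secret, code, now))
```

In a real deployment the secret would be provisioned per user at enrollment and stored server-side, and the OTP check would run only after the password factor succeeds.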

[...] Read more.
Advanced Applications of Neural Networks and Artificial Intelligence: A Review

By Koushal Kumar Gour Sundar Mitra Thakur

DOI: https://doi.org/10.5815/ijitcs.2012.06.08, Pub. Date: 8 Jun. 2012

Artificial Neural Networks are a branch of Artificial Intelligence and have been accepted as a new computing technology in computer science. This paper reviews the field of Artificial Intelligence, focusing on recent applications that use Artificial Neural Networks (ANNs) and AI. It also considers the integration of neural networks with other computing methods, such as fuzzy logic, to enhance the interpretability of data. Artificial Neural Networks are considered a major soft-computing technology and have been extensively studied and applied during the last two decades. The most common applications where neural networks are used for problem solving are pattern recognition, data analysis, control, and clustering. Artificial Neural Networks have abundant features, including high processing speed and the ability to learn the solution to a problem from a set of examples. The main aim of this paper is to explore recent applications of Neural Networks and Artificial Intelligence, provide an overview of the areas where AI and ANNs are used, and discuss the critical role they play in these areas.

[...] Read more.
Detecting and Preventing Common Web Application Vulnerabilities: A Comprehensive Approach

By Najla Odeh Sherin Hijazi

DOI: https://doi.org/10.5815/ijitcs.2023.03.03, Pub. Date: 8 Jun. 2023

Web applications are becoming very important in our lives, as many sensitive processes depend on them. Their safety and invulnerability against malicious attacks are therefore critical. Most studies focus on ways to detect these attacks individually. In this study, we develop a new system to detect and prevent vulnerabilities in web applications, with multiple functions for dealing with recurring vulnerabilities. The proposed system detects and prevents four types of vulnerabilities: SQL injection, cross-site scripting attacks, remote code execution, and fingerprinting of backend technologies. For each type of vulnerability, we investigated how it works, the process of detecting it, and how to prevent it. This achieved three goals: reduced testing costs, increased efficiency, and improved safety. The proposed system has been validated through practical application on a website, and experimental results demonstrate its effectiveness in detecting and preventing security threats. Our study contributes to the field of security by presenting an innovative approach to addressing security concerns, and our results highlight the importance of implementing advanced detection and prevention methods to protect against potential cyberattacks. The significance and research value of this work lies in its potential to enhance the security of online systems and reduce the risk of data breaches.
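The detection step for SQL injection and cross-site scripting can be illustrated with a much-simplified signature check. The patterns below are illustrative examples only, not the paper's actual rules; real systems rely on parameterized queries, output encoding, and vetted WAF rule sets rather than ad-hoc regexes.

```python
# Toy signature-based detector for two of the four vulnerability classes
# the paper covers (SQL injection and XSS). Patterns are illustrative.
import re

SIGNATURES = {
    "sql_injection": re.compile(
        r"('\s*(or|and)\s+[\w']+\s*=\s*[\w']+)|(union\s+select)|(--|;)\s*$",
        re.IGNORECASE),
    "xss": re.compile(r"<\s*script|javascript\s*:|on\w+\s*=", re.IGNORECASE),
}

def detect(value):
    """Return the names of the vulnerability signatures the input matches."""
    return [name for name, pattern in SIGNATURES.items()
            if pattern.search(value)]

print(detect("' OR 1=1 --"))                 # classic SQL injection probe
print(detect("<script>alert(1)</script>"))   # reflected XSS probe
print(detect("plain search text"))           # benign input
```

A prevention layer would then reject or sanitize any request whose parameters trigger a signature, before the value ever reaches the database or the rendered page.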

[...] Read more.
A Systematic Literature Review of Studies Comparing Process Mining Tools

By Cuma Ali Kesici Necmettin Ozkan Sedat Taskesenlioglu Tugba Gurgen Erdogan

DOI: https://doi.org/10.5815/ijitcs.2022.05.01, Pub. Date: 8 Oct. 2022

Process Mining (PM) and the capabilities of PM tools play a significant role in meeting organizations' needs to benefit from their processes and event data, especially in this digital era. The success of PM initiatives in producing the effective and efficient outputs and outcomes that organizations desire is largely dependent on the capabilities of the PM tools, which makes selecting the right tool for a specific context critical. In the selection process, a comparison of the available tools can lead organizations to an effective result. To meet this need and to give insight to both practitioners and researchers, in our study we systematically reviewed the literature and elicited the papers that compare PM tools, yielding comprehensive results through a comparison of the available tools. The review specifically delivers the frequency with which tools are compared, the methods and criteria used to compare them, the strengths and weaknesses of the compared tools for selecting appropriate PM tools, and findings related to the identified papers' trends and demographics. Although some articles compare PM tools, there is a lack of literature reviews of the studies that compare the PM tools on the market. As far as we know, this paper presents the first example of such a review in the literature.

[...] Read more.
Early Formalization of AI-tools Usage in Software Engineering in Europe: Study of 2023

By Denis S. Pashchenko

DOI: https://doi.org/10.5815/ijitcs.2023.06.03, Pub. Date: 8 Dec. 2023

This scientific article presents the results of a study focused on the current practices and future prospects of AI-tools usage, specifically large language models (LLMs), in software development (SD) processes within European IT companies. The Pan-European study covers 35 SD teams from all regions of Europe and consists of three sections: the first section explores the current adoption of AI-tools in software production, the second section addresses common challenges in LLMs implementation, and the third section provides a forecast of the tech future in AI-tools development for SD.
The study reveals that AI-tools, particularly LLMs, have gained popularity and acceptance in European IT companies for tasks related to software design and construction, coding, and software documentation. However, their usage for business and system analysis remains limited, and challenges such as resource constraints and organizational resistance are evident.
The article also highlights the potential of AI-tools in the software development process, such as automating routine operations, speeding up work processes, and enhancing software product quality. Moreover, the research examines the transformation of IT paradigms driven by AI-tools, leading to changes in the skill sets of software developers. Although the current impact of LLMs on the software development industry is perceived as modest, experts anticipate significant changes over the next 10 years, including the integration of AI-tools into advanced IDEs, software project management systems, and product management tools.
Ethical concerns about data ownership, information security and legal aspects of AI-tools usage are also discussed, with experts emphasizing the need for legal formalization and regulation in the AI domain. Overall, the study highlights the growing importance and potential of AI-tools in software development, as well as the need for careful consideration of challenges and ethical implications to fully leverage their benefits.

[...] Read more.