IJITCS Vol. 16, No. 6, Dec. 2024
Cover page and Table of Contents: PDF (size: 244KB)
REGULAR PAPERS
The analysis of modern methodological systems of teaching mathematics shows that the use of interactive visual models can positively influence the mastery of mathematical knowledge and the level of students' research skills. The development of IT has made possible the emergence of many specialized environments and services focused on solving mathematical problems. Among such software tools, a separate group of programs should be highlighted that enable interactive manipulation of geometric objects. These computer programs allow students to independently "discover" geometric facts, which gives reason to consider such programs a tool for developing their research skills. The study aims to substantiate the positive impact of visual models (models created in interactive mathematical environments) on the development of students' research skills and general mastery of the school geometry course. We have presented a methodological scheme for developing students' research skills using GeoGebra (the Technique), conducted its expert evaluation, and experimentally tested its effectiveness. The experts noted the potential efficacy of the Technique in terms of the quality of students' geometric knowledge (91.7%) and improving their performance in geometry in general (79.2%). The statistical analysis of the results of the pedagogical experiment confirmed that students' research skills increased along with their semester grades in geometry. The results of the pedagogical experiment showed the effectiveness of the Technique we developed and provide grounds for recommending it for implementation.
Drug Recommender Systems (DRS) streamline the prescription process and contribute to better healthcare. Hence, this study developed a DRS that recommends appropriate drug(s) for the treatment of an ailment, using Peptic Ulcer Disease (PUD) as a case study. Patient and drug data were obtained from MIMIC-IV and Drugs.com, respectively. These data were analysed and used in the design of the DRS model, which was based on a hybrid recommendation approach combining a clustering algorithm, the Collaborative Filtering approach (CF), and the Knowledge-Based Filtering approach (KBF). The factors considered in recommending appropriate drugs were age, gender, body weight, allergies, and drug interactions. The model was implemented in the Python programming language with the Flask framework for web development and Visual Studio Code as the Integrated Development Environment. The performance of the system was evaluated using Precision, Recall, Accuracy, Root Mean Squared Error (RMSE) and a usability test. The evaluation was carried out in two phases. First, the CF component was evaluated by splitting the MIMIC-IV dataset into a 70% (60,018) training set and a 30% (25,722) test set. This resulted in a precision score of 85.48%, a recall score of 85.58%, and an RMSE score of 0.74. Second, the KBF component was evaluated using 30 different cases; the evaluation was computed manually by comparing the recommendation results from the system with those of an expert. This resulted in a precision of 77%, a recall of 83%, an accuracy of 81% and an RMSE of 0.24. The usability test showed high user-rated performance of the system. The addition of the KBF reduced the error rate between actual and predicted recommendations, so the system had a high ability to recommend appropriate drug(s) for PUD.
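To make the hybrid scoring idea in this abstract concrete, the following is a minimal Python sketch of combining a collaborative-filtering score with knowledge-based safety filters over patient attributes (age, allergies, drug interactions). The toy knowledge base, drug names, weights, and helper names are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch: CF-predicted drug scores are re-ranked after a
# knowledge-based filter (KBF) removes drugs that violate safety rules.
# The toy data below is illustrative only.
from dataclasses import dataclass, field

@dataclass
class Patient:
    age: int
    allergies: set = field(default_factory=set)
    current_drugs: set = field(default_factory=set)

# Toy drug knowledge base: contraindicated allergen classes, interacting
# drugs, and a minimum age (all values are made up for illustration).
DRUG_KB = {
    "omeprazole":     {"allergens": {"benzimidazole"}, "interacts": {"clopidogrel"}, "min_age": 1},
    "amoxicillin":    {"allergens": {"penicillin"},    "interacts": set(),           "min_age": 0},
    "clarithromycin": {"allergens": {"macrolide"},     "interacts": {"simvastatin"}, "min_age": 6},
}

def kbf_safe(drug: str, patient: Patient) -> bool:
    """Knowledge-based filter: reject drugs that violate any safety rule."""
    kb = DRUG_KB[drug]
    if patient.age < kb["min_age"]:
        return False
    if kb["allergens"] & patient.allergies:
        return False
    if kb["interacts"] & patient.current_drugs:
        return False
    return True

def recommend(cf_scores: dict, patient: Patient, top_n: int = 2) -> list:
    """Rank CF-predicted drugs, keeping only those the KBF deems safe."""
    safe = {d: s for d, s in cf_scores.items() if kbf_safe(d, patient)}
    return sorted(safe, key=safe.get, reverse=True)[:top_n]

if __name__ == "__main__":
    # In the paper these scores would come from the clustering + CF stage.
    cf_scores = {"omeprazole": 0.92, "amoxicillin": 0.85, "clarithromycin": 0.78}
    patient = Patient(age=45, allergies={"penicillin"}, current_drugs={"clopidogrel"})
    print(recommend(cf_scores, patient))  # -> ['clarithromycin']
```

In this example the KBF discards omeprazole (interaction with clopidogrel) and amoxicillin (penicillin allergy), which mirrors how the knowledge-based stage can lower the error rate of the CF stage alone.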
The most popular way for people to share information is through social media. Several studies have been conducted using ML approaches such as LSTM, SVM, BERT, GA, hybrid LSTM-SVM and Multi-View Attention Networks (MVAN) to recognize bogus news. Most traditional systems only identify whether news is false or true, but discovering the kind of false information and prioritizing it is more difficult, and traditional algorithms offer poor textual classification accuracy. As a result, this study focuses on predicting COVID-19-related false information on Twitter and prioritizing the types of false information. The proposed lightweight recommendation system consists of three phases: preprocessing, feature extraction and classification. The preprocessing phase removes unwanted data. After preprocessing, the BERT model is used to convert the words into binary vectors. These binary features are then taken as the input of the classification phase, in which a 4CL time-distributed layer is introduced for effective feature selection to reduce the detection burden, and a Bi-GRU model performs the classification. The proposed method is implemented in MATLAB, evaluated on several performance metrics, and validated on three different datasets. The proposed model's total accuracy is 97%, specificity is 98%, precision is 95%, and the error value is 0.02, demonstrating its effectiveness over current methods. The proposed social media research system can accurately predict false information, and the recognized news may be offered to users so that they can learn the truth about news on social media.
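The pipeline above can be sketched in Keras as BERT token features feeding a stack of convolutional layers (reading "4CL" as four convolutional layers, an assumption on our part) followed by a bidirectional GRU classifier. The paper's implementation is in MATLAB; the layer sizes and hyperparameters below are illustrative, not the authors' values.

```python
# Illustrative Keras analogue of the BERT -> 4 conv layers -> Bi-GRU pipeline.
import tensorflow as tf
from tensorflow.keras import layers, models

SEQ_LEN, BERT_DIM, NUM_CLASSES = 128, 768, 3  # e.g. true / false / false-priority

def build_model() -> tf.keras.Model:
    inp = layers.Input(shape=(SEQ_LEN, BERT_DIM))  # precomputed BERT features
    x = inp
    for filters in (256, 128, 64, 32):             # four stacked Conv1D layers
        x = layers.Conv1D(filters, kernel_size=3, padding="same", activation="relu")(x)
        x = layers.MaxPooling1D(pool_size=2)(x)
    x = layers.Bidirectional(layers.GRU(64))(x)    # Bi-GRU sequence classifier
    out = layers.Dense(NUM_CLASSES, activation="softmax")(x)
    model = models.Model(inp, out)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

build_model().summary()
```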
This research delves into the intricate realm of time series prediction using stock market datasets from the NSE (National Stock Exchange) of India. The supremacy of the LSTM architecture for time series forecasting is initially affirmed, only for a paradigm shift to be encountered when exploring various LSTM variants across distinct sectors of the NSE. Prices of various stocks in five different sectors have been predicted using multiple LSTM model variants. Contrary to the assumption that a specific variant would excel in a particular sector, the Gated Recurrent Unit (GRU) emerged as the top performer, prompting a closer examination of its limitations and its subsequent enhancement using technical indicators. The ultimate objective is to unveil the most effective model for predicting stock prices in the dynamic landscape of NSE India.
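The kind of windowed GRU forecaster this abstract describes can be sketched as follows: past closing prices over a sliding window predict the next price. Synthetic prices stand in for NSE data, and the window size and layer widths are illustrative assumptions.

```python
# Sketch of a windowed GRU price forecaster: `window` past closes -> next close.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

def make_windows(prices: np.ndarray, window: int = 20):
    """Slice a price series into (samples, window, 1) inputs and next-step targets."""
    X = np.stack([prices[i:i + window] for i in range(len(prices) - window)])
    y = prices[window:]
    return X[..., None], y

prices = np.cumsum(np.random.randn(500)) + 100.0   # synthetic stand-in series
X, y = make_windows(prices)

model = models.Sequential([
    layers.Input(shape=(20, 1)),
    layers.GRU(64),     # the GRU variant that emerged as the top performer
    layers.Dense(1),    # next-day price
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
```

Technical indicators (e.g. moving averages or RSI) would enter this sketch as additional feature channels alongside the raw price, widening the input from one channel to several.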
Diabetic retinopathy stands as a significant concern for individuals managing diabetes. It is a severe eye condition that targets the delicate blood vessels within the retina, and as it advances it can inflict severe vision impairment or, in extreme cases, complete blindness. Regular eye examinations are vital for individuals with diabetes so that abnormalities are detected early. Detection of diabetic retinopathy is a challenging and time-consuming process, but deep learning and transfer learning techniques offer vital support by automating the process, providing accurate predictions, and simplifying diagnostic procedures for healthcare professionals. This study introduces a multi-classification framework for grading diabetic retinopathy into five classes using transfer learning and data fusion. The objective is to develop a robust, automated model for diabetic retinopathy detection that enhances the diagnostic process for healthcare professionals. We fused two distinct datasets, APTOS and IDRiD, resulting in a total of 4178 fundus images. The merged dataset underwent preprocessing to enhance image quality and to remove unwanted regions, noise and artifacts from the fundus images. The pre-processed dataset was then resized, and a balancing technique called SMOTE was applied to correct the uneven class distribution. To increase the diversity and size of the dataset, data augmentation techniques including flipping, brightness adjustment and contrast adjustment were applied. The dataset was split in an 80:10:10 ratio for training, validation, and testing. Two pre-trained models, EfficientNetB5 and DenseNet121, were fine-tuned, with training parameters such as batch size, number of epochs and learning rate adjusted accordingly. The results demonstrate that the highest test accuracy of 96.06% is achieved by the EfficientNetB5 model, followed by a 91.40% test accuracy using the DenseNet121 model. The performance of our best model, EfficientNetB5, is compared with several state-of-the-art approaches, including DenseNet-169, hybrid models and ResNet-50, and our model outperformed these methodologies in terms of test accuracy.
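A minimal transfer-learning sketch matching this description would attach a new five-class head to a pre-trained EfficientNetB5 backbone. The input size, head width, dropout rate, and learning rate below are illustrative assumptions; SMOTE and augmentation would be applied to the data beforehand, as the abstract describes.

```python
# Illustrative fine-tuning sketch: EfficientNetB5 backbone + 5-class DR head.
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.EfficientNetB5(
    include_top=False, weights="imagenet", input_shape=(456, 456, 3))
base.trainable = True  # fine-tune the whole backbone

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.3),
    layers.Dense(5, activation="softmax"),  # five DR severity grades
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
```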
There have been remarkable improvements in the healthcare sector, particularly in patient care, maintaining and protecting data, and saving administrative and operating costs. Among the various functions in the healthcare sector, disease diagnosis is considered the foremost because a timely diagnosis can save a life. Early detection of diseases helps in disease prevention, allowing patients to receive timely and effective treatment. Researchers have suggested several techniques for disease prediction, and a substantial body of literature exists on the topic. This article systematically reviews several articles and compares various machine learning (ML) algorithms for disease prediction, including the Random Forest (RF), Naive Bayes (NB), Decision Tree (DT), Support Vector Machine (SVM), and Logistic Regression (LR) algorithms. A thorough analysis is presented based on the number of publications year-wise, disease-wise, and on the performance metrics. This review thoroughly analyzes and compares various ML techniques applied in disease prediction, focusing on classification algorithms commonly employed in healthcare applications. From the systematic review, a multi-objective optimization method named Grey Relational Analysis (GRA) is used to rank the ML algorithms by their performance metrics. The results of this paper give researchers insight into the disease prediction domain, and the comparison of the various ML algorithms helps them choose a better methodology for predicting a disease.
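The GRA ranking step mentioned here has a compact worked form: normalise each higher-is-better metric, measure each algorithm's deviation from an ideal reference, compute grey relational coefficients, and average them into a grade. The metric values below are made up for illustration; only the procedure reflects standard GRA.

```python
# Compact Grey Relational Analysis (GRA) sketch for ranking ML algorithms.
import numpy as np

algorithms = ["RF", "NB", "DT", "SVM", "LR"]
# Columns: accuracy, precision, recall (illustrative values, not the paper's).
scores = np.array([
    [0.92, 0.91, 0.90],
    [0.85, 0.83, 0.86],
    [0.88, 0.87, 0.85],
    [0.90, 0.89, 0.88],
    [0.86, 0.85, 0.84],
])

# 1. Normalise each higher-is-better column to [0, 1].
norm = (scores - scores.min(0)) / (scores.max(0) - scores.min(0))

# 2. Deviation from the ideal reference sequence (all ones after normalising).
delta = np.abs(1.0 - norm)

# 3. Grey relational coefficients, distinguishing coefficient zeta = 0.5.
zeta = 0.5
coef = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())

# 4. Grey relational grade = mean coefficient per algorithm; higher ranks first.
grade = coef.mean(axis=1)
for name, g in sorted(zip(algorithms, grade), key=lambda p: -p[1]):
    print(f"{name}: {g:.3f}")
```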
The rapid spread of misinformation on social media platforms, especially Twitter, presents a challenge in the digital age. Traditional fact-checking struggles with the volume and speed of misinformation, while existing detection systems often focus solely on linguistic features, ignoring factors like source credibility, user interactions, and context. Current automated systems also lack the accuracy to differentiate between genuine and fake news, resulting in high rates of false positives and negatives. This study investigates the creation of a Twitter bot for detecting fake news using deep learning methodologies. The research assessed the performance of BERT, CNN, and Bi-LSTM models, along with an ensemble model combining their strengths. The TruthSeeker dataset was used for training and testing. The ensemble model leverages BERT's contextual understanding, CNN's feature extraction, and Bi-LSTM's sequence learning to improve detection accuracy. The Twitter bot integrates this ensemble model via the Twitter API for real-time detection of fake news. Results show the ensemble model significantly outperformed the individual models and existing systems, achieving an accuracy of 98.24%, recall of 98.14%, precision of 98.42%, and an F1-score of 98.24%. These findings highlight that combining multiple models can offer an effective solution for real-time detection of misinformation, contributing to efforts to combat fake news on social media platforms.
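One common way to combine the three models this abstract names is a weighted soft vote over their class probabilities; whether the authors used soft voting or another combination scheme is not stated, so the sketch below, including its weights and probability values, is an illustrative assumption.

```python
# Illustrative weighted soft-voting ensemble over BERT, CNN, and Bi-LSTM outputs.
import numpy as np

def ensemble_predict(probs_bert, probs_cnn, probs_bilstm, weights=(0.4, 0.3, 0.3)):
    """Weighted average of per-model class probabilities (fake, real)."""
    stacked = np.stack([probs_bert, probs_cnn, probs_bilstm])   # (3, n, 2)
    avg = np.tensordot(weights, stacked, axes=1)                # (n, 2)
    return avg.argmax(axis=1)                                   # 0 = fake, 1 = real

# Example: two tweets scored by each model (made-up probabilities).
p_bert   = np.array([[0.9, 0.1], [0.3, 0.7]])
p_cnn    = np.array([[0.8, 0.2], [0.4, 0.6]])
p_bilstm = np.array([[0.7, 0.3], [0.2, 0.8]])
print(ensemble_predict(p_bert, p_cnn, p_bilstm))  # -> [0 1]
```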