ISSN: 2075-0161 (Print)
ISSN: 2075-017X (Online)
Published By: MECS Press
Frequency: 6 issues per year
Number(s) Available: 127
ICV 2014: 8.09
SJR 2021: 0.37
IJMECS is committed to bridging the theory and practice of modern education and computer science. From innovative ideas to specific algorithms and full system implementations, IJMECS publishes original, peer-reviewed, high-quality articles in the areas of modern education and computer science. IJMECS is a well-indexed scholarly journal and indispensable reading for those working at the cutting edge of computer science, modern education, and their applications.
IJMECS has been abstracted or indexed by several world-class databases: Scopus, SCImago, Google Scholar, Microsoft Academic Search, CrossRef, Baidu Wenku, IndexCopernicus, IET Inspec, EBSCO, JournalSeek, ULRICH's Periodicals Directory, WorldCat, Scirus, Academic Journals Database, Stanford University Libraries, Cornell University Library, UniSA Library, CNKI Scholar, ProQuest, J-Gate, ZDB, BASE, OhioLINK, iThenticate, Open Access Articles, Open Science Directory, National Science Library of Chinese Academy of Sciences, The HKU Scholars Hub, etc.
Predicting college placements from academic performance is critical to helping educational institutions and students make informed decisions about future career paths. This research investigates the use of Machine Learning (ML) algorithms to predict college students' placements from academic performance data. The study uses a dataset that includes a variety of academic markers, such as grades, test scores, and extracurricular activities, obtained from a varied sample of college students. To create predictive models, the study analyses several ML algorithms, including Logistic Regression, Gaussian Naive Bayes, Random Forest, Support Vector Machine, and K-Nearest Neighbour. The predictive models are evaluated using performance criteria such as accuracy, precision, recall, and F1-score, and a comparative study identifies the most effective ML method for forecasting students' placements based on academic achievement. The findings show that the Random Forest approach can effectively forecast college student placements, and that academic factors such as grades and test scores have a considerable impact on prediction accuracy. These results could benefit educational institutions, students, and career counsellors.
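The evaluation criteria named in this abstract (accuracy, precision, recall, F1-score) can be illustrated with a minimal sketch for a binary placed/not-placed task; the label vectors below are toy values, not the study's dataset.

```python
# Hedged sketch: the evaluation metrics named in the abstract, computed
# from a binary confusion matrix. Labels are illustrative, not the
# study's placement data (1 = placed, 0 = not placed).

def confusion(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return tp, fp, fn, tn

def metrics(y_true, y_pred):
    tp, fp, fn, tn = confusion(y_true, y_pred)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return accuracy, precision, recall, f1

y_true = [1, 1, 0, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
acc, prec, rec, f1 = metrics(y_true, y_pred)
print(round(acc, 3), round(prec, 3), round(rec, 3), round(f1, 3))
```

In practice these metrics would come straight from a library (e.g. scikit-learn's `classification_report`); the hand-rolled version just makes the definitions explicit.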
Large Language Models (LLMs) have received significant attention for their potential to transform education and assessment by providing automated responses to a diverse range of inquiries. This research examines the efficacy of three LLMs (ChatGPT, BingChat, and Bard) on the Vietnamese High School Biology Examination dataset, which consists of a wide range of biology questions varying in difficulty and context. A thorough analysis reveals the merits and drawbacks of each LLM, providing valuable insights for their successful incorporation into educational platforms. The study examines the proficiency of the LLMs at various levels of questioning, namely Knowledge, Comprehension, Application, and High Application, and the findings reveal complex and subtle patterns in performance. ChatGPT is versatile, showing potential across multiple levels, but it has difficulty maintaining consistency and effectively addressing complex application queries. BingChat and Bard perform strongly on tasks involving factual recall, comprehension, and interpretation, indicating their effectiveness in supporting fundamental learning. Further investigation of educational settings indicates that BingChat and Bard can augment factual and comprehension learning experiences, although human expertise remains indispensable for tackling complex application inquiries. The research emphasizes the importance of a well-rounded approach to LLM integration that takes their capabilities into account while recognizing their limitations.
Collaboration among educators, developers, and AI researchers can refine LLM capabilities and resolve the challenges of advanced application scenarios.
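The level-wise comparison this abstract describes boils down to aggregating per-question correctness into an accuracy per cognitive level; a minimal sketch, where the question records are illustrative rather than drawn from the actual examination dataset:

```python
from collections import defaultdict

# Hedged sketch: accuracy per cognitive level (Knowledge, Comprehension,
# Application, High Application), as in the abstract's level-wise
# comparison of LLMs. The records below are illustrative.

def accuracy_by_level(records):
    """records: iterable of (level, correct) pairs -> {level: accuracy}."""
    totals, hits = defaultdict(int), defaultdict(int)
    for level, correct in records:
        totals[level] += 1
        hits[level] += int(correct)
    return {lvl: hits[lvl] / totals[lvl] for lvl in totals}

# Toy per-question results for one model (not real evaluation data).
model_results = [
    ("Knowledge", True), ("Knowledge", True), ("Comprehension", True),
    ("Comprehension", False), ("Application", True), ("Application", False),
    ("High Application", False), ("High Application", False),
]
print(accuracy_by_level(model_results))
```

Running the same aggregation per model is what enables the kind of per-level comparison (strong factual recall, weaker high-application performance) the study reports.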
Software for clustering students according to their educational achievements using fuzzy logic was developed in Python using the Google Colab cloud service. Analyzing educational data involves solving Data Mining problems, since only some characteristics of the educational process are extracted from a large sample of data. Data clustering was performed with the classic K-Means method, which is simple and fast; cluster analysis was carried out in a space of two features using the machine learning library scikit-learn (Python). The obtained clusters are described by fuzzy triangular membership functions, which made it possible to correctly determine each student's membership in a particular cluster. The fuzzy membership functions were created with the scikit-fuzzy library. Developing fuzzy membership functions for objects belonging to clusters is also useful for educational purposes, as it helps learners better understand the principles of fuzzy logic. Processing test educational data with the developed software produced correct results. It is shown that fuzzy membership functions make it possible to correctly determine students' membership in particular clusters even when those clusters are not clearly separated. As a result, the recommended difficulty level of tasks can be determined more accurately for each student, depending on their previous assessments.
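The triangular membership functions mentioned above have a simple closed form (the same shape that scikit-fuzzy's `skfuzzy.trimf` evaluates over an array). A minimal pure-Python sketch, with illustrative cluster centres on a 0-100 grade scale rather than the paper's fitted clusters:

```python
def trimf(x, a, b, c):
    """Triangular membership function with feet a, c and peak b
    (the same shape scikit-fuzzy's skfuzzy.trimf provides)."""
    if x <= a or x >= c:
        return 0.0
    if x == b:
        return 1.0
    if x < b:
        return (x - a) / (b - a)   # rising edge
    return (c - x) / (c - b)       # falling edge

# Illustrative cluster centres on a 0-100 grade scale. A mark between
# two overlapping triangles belongs partially to both clusters, which
# is exactly how fuzzy membership handles clusters that are not
# clearly separated.
clusters = {"low": (0, 25, 55), "medium": (25, 55, 80), "high": (55, 80, 100)}
mark = 65
membership = {name: round(trimf(mark, *abc), 3) for name, abc in clusters.items()}
print(membership)
```

A mark of 65 here gets graded membership in both the "medium" and "high" clusters, so task difficulty can be recommended proportionally rather than by a hard cluster assignment.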
In recent years, demand for cloud computing has surged due to its versatile applications in real-time situations. Cloud computing efficiently tackles extensive computing challenges, providing a cost-effective and energy-efficient solution for cloud service providers (CSPs). However, the surge in task requests has overloaded cloud servers, degrading performance. To address this problem, load balancing has emerged as a favorable approach, wherein incoming tasks are allocated to the most appropriate virtual machine (VM) according to their specific needs. Finding the optimal VM, however, is an NP-hard problem, and current research has widely adopted meta-heuristic approaches for solving such problems. This research introduces a novel hybrid optimization approach that integrates the particle swarm optimization (PSO) algorithm to handle optimization, the gravitational search algorithm (GSA) to improve the search process, and fuzzy logic to create an effective rule for selecting VMs efficiently. The integration of PSO and GSA yields a streamlined process for updating particle velocity and position, while fuzzy logic helps discern the optimal solution for individual tasks. We assess the efficacy of the suggested method through various metrics, including throughput, makespan, and execution time. The method performs well, with average load, turnaround time, and response time of 0.168, 18.20 ms, and 11.26 ms, respectively, an average makespan of 92.5 ms, and an average throughput of 85.75.
The performance of the proposed method improves on the existing techniques by 90.5%, 64.9%, 36.11%, 24.72%, 18.27%, 11.36%, and 5.21%, respectively. The results demonstrate the efficacy of this approach through significant improvements in execution time, CPU utilization, makespan, and throughput, providing a valuable contribution to the field of cloud computing load balancing.
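The hybrid velocity/position update described above can be sketched following the common PSO-GSA formulation, where GSA-derived accelerations are folded into the PSO velocity rule. This is a toy 1-D minimisation, not the paper's method: the sphere objective stands in for the actual load-balancing cost, the coefficients are illustrative, and the fuzzy VM-selection rule is omitted.

```python
import random

# Hedged sketch of a PSO-GSA hybrid update:
#   v = w*v + c1*r1*a_gsa + c2*r2*(gbest - x)
# where a_gsa is a GSA-style acceleration from fitness-derived masses.

def gsa_accelerations(xs, fits, G):
    best, worst = min(fits), max(fits)
    span = (worst - best) or 1.0
    m = [(worst - f) / span for f in fits]      # raw masses (minimisation)
    total = sum(m) or 1.0
    M = [mi / total for mi in m]                # normalised masses
    acc = []
    for i, xi in enumerate(xs):
        a = 0.0
        for j, xj in enumerate(xs):
            if i != j:
                r = abs(xj - xi) + 1e-9         # distance, avoid div by 0
                a += random.random() * G * M[j] * (xj - xi) / r
        acc.append(a)
    return acc

def psogsa(f, n=10, iters=50, w=0.6, c1=1.5, c2=1.5, G0=1.0, seed=1):
    random.seed(seed)
    xs = [random.uniform(-10, 10) for _ in range(n)]
    vs = [0.0] * n
    gbest = min(xs, key=f)
    for t in range(iters):
        fits = [f(x) for x in xs]
        G = G0 * (1 - t / iters)                # decaying gravitational constant
        acc = gsa_accelerations(xs, fits, G)
        for i in range(n):
            vs[i] = (w * vs[i]
                     + c1 * random.random() * acc[i]
                     + c2 * random.random() * (gbest - xs[i]))
            xs[i] += vs[i]
        cand = min(xs, key=f)
        if f(cand) < f(gbest):                  # gbest only ever improves
            gbest = cand
    return gbest

best = psogsa(lambda x: x * x)
print(round(best, 4))
```

In the paper's setting, a particle position would encode a task-to-VM mapping and the objective would reflect makespan/load, with the fuzzy rule arbitrating the final VM choice.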
Text summarization is the process of creating a shorter version of a longer text document while retaining its most important information. A number of methods have been proposed for text summarization, but existing methods do not provide strong results and struggle with sequence classification. To overcome these limitations, this manuscript proposes a tangent search long short-term memory with adaptive reinforcement transient learning for extractive and abstractive document summarization. In the abstractive phase, features of the extractive summary are extracted and the optimal features are selected by Adaptive Flamingo Optimization (AFO); the abstractive summary is then generated from these optimal features. The proposed method is implemented in Python. For extractive text summarization, it attains 42.11% ROUGE-1, 23.55% ROUGE-2, and 41.05% ROUGE-L scores on Gigaword, and 57.13% ROUGE-1, 28.35% ROUGE-2, and 52.85% ROUGE-L on the DUC-2004 dataset. For abstractive text summarization, it attains 47.05% ROUGE-1, 22.02% ROUGE-2, and 48.96% ROUGE-L on Gigaword, and 35.13% ROUGE-1, 20.35% ROUGE-2, and 35.25% ROUGE-L on DUC-2004.
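The ROUGE-1 and ROUGE-2 scores reported above can be illustrated with the core n-gram recall computation; production ROUGE (as used in such evaluations) adds stemming and F-measure variants, so this sketch only shows the underlying idea, on a toy sentence pair.

```python
from collections import Counter

# Hedged sketch of the core ROUGE-N recall computation behind scores
# like those reported above: the fraction of the reference's n-grams
# that also appear in the candidate summary.

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def rouge_n_recall(reference, candidate, n):
    ref = ngrams(reference.split(), n)
    cand = ngrams(candidate.split(), n)
    overlap = sum(min(count, ref[gram]) for gram, count in cand.items())
    return overlap / max(sum(ref.values()), 1)

reference = "the cat sat on the mat"
candidate = "the cat lay on the mat"
print(round(rouge_n_recall(reference, candidate, 1), 3))  # unigram overlap
print(round(rouge_n_recall(reference, candidate, 2), 3))  # bigram overlap
```

ROUGE-L, also reported in the abstract, instead scores the longest common subsequence between reference and candidate rather than fixed-length n-grams.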
Autism Spectrum Disorder (ASD) is a neurodevelopmental syndrome that cannot be cured but can be predicted at an early stage, and early prediction supports timely diagnosis and intervention. Existing methods do not identify the best feature for detecting autism early. In the proposed research, ASD is predicted by identifying the best feature transformation technique together with the best ML classifier, and by finding the most significant feature for diagnosing autism at an early age. Early-detected ASD datasets for toddlers and children were collected, and several feature transformation techniques, comprising the log, power (Box-Cox), and Yeo-Johnson transformations, were applied to them. Several classification approaches were then applied to these datasets and their efficiency evaluated. AdaBoost gave 100% accuracy on the toddler dataset, whereas Random Forest achieved 98.3% accuracy on the child dataset. The feature transformations yielding the best predictions were the log, power (Box-Cox), and Yeo-Johnson transformations for the toddler dataset and the log transformation for the child dataset. After this exploration, feature selection techniques such as univariate selection (UNI) and recursive feature elimination (RFE) were applied to the transformed datasets to identify the most significant ASD risk feature for early prediction in toddler and child data. Based on univariate selection and RFE, feature A5 is the most significant for toddlers and feature A4 for children. This helps clinicians provide a suitable diagnosis early in life. The results of these analytical methodologies show that, when accurately optimised, ML methods can yield precise predictions of ASD, suggesting that these models may be feasible for early ASD detection.
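The three transformations named in this abstract have compact closed forms; a minimal sketch follows. The λ values and feature values are illustrative only; in practice λ is fitted to the data (e.g. scikit-learn's `PowerTransformer` estimates it by maximum likelihood, with `method='box-cox'` or `method='yeo-johnson'`).

```python
import math

# Hedged sketch of the feature transformations named in the abstract:
# log, Box-Cox (power), and Yeo-Johnson. Lambda values are illustrative.

def log_transform(x):
    return math.log1p(x)                 # log(1 + x), defined for x > -1

def box_cox(x, lam):
    """Box-Cox power transform; requires x > 0."""
    return math.log(x) if lam == 0 else (x ** lam - 1) / lam

def yeo_johnson(x, lam):
    """Yeo-Johnson transform; defined for any real x, unlike Box-Cox."""
    if x >= 0:
        return math.log1p(x) if lam == 0 else ((x + 1) ** lam - 1) / lam
    return (-math.log1p(-x) if lam == 2
            else -(((-x + 1) ** (2 - lam)) - 1) / (2 - lam))

scores = [0.0, 2.5, 7.0]                 # illustrative feature values
print([round(yeo_johnson(s, 0.5), 3) for s in scores])
```

Yeo-Johnson's handling of zero and negative inputs is exactly why it is paired with Box-Cox in studies like this one: screening-questionnaire features often contain zeros, which Box-Cox cannot accept directly.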