Wireless Mesh Networks (WMNs) have gained prominence in modern communication technology due to their flexibility and ease of deployment, which are advantageous in scenarios such as disaster management and rescue operations. However, existing methods for enhancing WMN performance, such as increasing the number of gateways, are costly, introduce interference, and complicate deployment. Moreover, current routing protocols often suffer from suboptimal packet delivery due to inadequate traffic-flow management and packet loss. This research addresses these gaps by proposing a novel optimization model that integrates Artificial Bee Colony (ABC) and Particle Swarm Optimization (PSO) techniques to enhance the packet delivery ratio of Voice over Internet Protocol (VoIP) traffic in WMNs. Unlike traditional approaches that overlook efficient traffic management, the proposed model optimizes packet transmission by selecting efficient routes and minimizing packet loss. The novelty of the solution lies in its hybrid use of ABC and PSO for dynamic node and route selection, which significantly improves network performance, reduces control overhead, and minimizes packet loss. Experimental results demonstrate that the proposed model outperforms existing protocols, making it a promising approach for enhancing network reliability and efficiency in WMNs.
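As an illustration of how such a hybrid search can be wired together, the sketch below couples PSO velocity updates with an ABC-style scout phase that re-seeds stagnant particles. The cost function `route_cost` and all constants are illustrative stand-ins, not the authors' objective or parameters:

```python
import numpy as np

rng = np.random.default_rng(42)

def route_cost(x):
    """Illustrative stand-in for a route-quality cost: in a real WMN this
    would combine hop count, expected packet loss, and queuing delay."""
    return np.sum(x**2 - 10 * np.cos(2 * np.pi * x) + 10)  # Rastrigin

DIM, SWARM, ITERS, LIMIT = 4, 20, 200, 15
pos = rng.uniform(-5, 5, (SWARM, DIM))
vel = np.zeros((SWARM, DIM))
pbest, pcost = pos.copy(), np.array([route_cost(p) for p in pos])
stall = np.zeros(SWARM, dtype=int)          # ABC-style abandonment counters

for _ in range(ITERS):
    gbest = pbest[np.argmin(pcost)]
    r1, r2 = rng.random((2, SWARM, DIM))
    # PSO velocity/position update (inertia 0.7, cognitive/social 1.5)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, -5, 5)
    cost = np.array([route_cost(p) for p in pos])
    improved = cost < pcost
    pbest[improved], pcost[improved] = pos[improved], cost[improved]
    stall = np.where(improved, 0, stall + 1)
    # ABC scout phase: particles stagnating past LIMIT are re-seeded,
    # which is what lets the hybrid escape poor local route choices
    scouts = stall > LIMIT
    pos[scouts] = rng.uniform(-5, 5, (scouts.sum(), DIM))
    vel[scouts], stall[scouts] = 0, 0

print("best cost:", pcost.min())
```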
Over the years, learning has shifted from the conventional classroom to a digital space, driven by growing interest in e-learning and rapid innovations in information technology. This has prompted many individuals and institutions to build approaches for adaptive e-learning technologies. Most existing e-learning systems are teacher-centred, time-consuming, and do not monitor learners' progress. This paper presents a type-1 rule-based fuzzy logic model that implements an adaptive e-learning system by identifying students' prior knowledge, learning style, and learning pace. The system was designed with Object-Oriented Analysis and Design Methodology and implemented using PHP, JavaScript, and MySQL technologies. A total of 31 first-year students of the University of Nigeria, Nsukka, participated in the evaluation of the software. A pre-test measured each student's prior knowledge, and each student's performance was mapped. The system monitors students' engagement levels and performance to improve learning outcomes. It also offers an 'Ask Teacher' feature, which allows a student to ask the teacher questions outside the forum, and a student feedback form. Each chapter includes a pre-test of the student's existing knowledge, well-explained chapter content in text and audio-visual formats, and a post-test of performance at the end of the chapter. After the experiment, a questionnaire collected the students' general views on online adaptive learning. The findings suggest that the system supports seamless learning for students, teachers, and universities, and offers a quick feedback mechanism for university decision-making.
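To make the type-1 inference step concrete, here is a minimal sketch in Python (the paper's own implementation is in PHP): triangular membership functions fuzzify a pre-test score and a per-chapter pace, a small rule base fires, and a weighted average defuzzifies to a recommended content level. All cut-points, rules, and level names are illustrative assumptions, not the paper's rule base:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzify_score(score):
    # Prior-knowledge pre-test score on a 0-100 scale (illustrative cut-points).
    return {"low": tri(score, -1, 0, 50),
            "medium": tri(score, 30, 50, 70),
            "high": tri(score, 50, 100, 101)}

def fuzzify_pace(minutes):
    # Time spent per chapter, in minutes (illustrative cut-points).
    return {"fast": tri(minutes, -1, 5, 20),
            "slow": tri(minutes, 10, 30, 61)}

# Rule base: (score, pace) -> content level (0=remedial, 1=standard, 2=advanced)
RULES = {("low", "slow"): 0, ("low", "fast"): 0,
         ("medium", "slow"): 0, ("medium", "fast"): 1,
         ("high", "slow"): 1, ("high", "fast"): 2}

def recommend_level(score, minutes):
    s, p = fuzzify_score(score), fuzzify_pace(minutes)
    num = den = 0.0
    for (sl, pl), level in RULES.items():
        w = min(s[sl], p[pl])       # rule firing strength (min t-norm)
        num += w * level
        den += w
    return num / den if den else 1  # weighted-average defuzzification

print(recommend_level(score=65, minutes=12))  # ~1.3: between standard and advanced
```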
Laser micromachining has become an essential tool in precision manufacturing due to its non-contact nature, high spatial resolution, and capability to produce intricate micro-features. However, identifying the optimal combination of process parameters remains challenging because of the nonlinear and interdependent effects of laser power, scanning speed, and pulse frequency on cut quality. In this study, a comparative framework is presented that benchmarks the Taguchi Design of Experiments (DoE) against a Deep Neural Network (DNN) model to predict and optimize the micromachining performance of stainless steel. A unified Cut Quality Index (CQI) was developed by combining three critical responses (kerf width, heat-affected zone (HAZ), and edge chipping) into a single measure of overall cut integrity. A physics-consistent dataset of 75 samples, comprising 20 literature-based and 55 synthetically generated data points, was constructed to ensure both experimental realism and statistical diversity. The Taguchi analysis, using an L18 orthogonal array, identified the optimal parameters as 80 W laser power, 250 mm/s scanning speed, and 60 kHz pulse frequency, corresponding to the highest signal-to-noise ratio and thermally balanced operation. The DNN model achieved strong predictive accuracy (R² ≈ 0.92–0.94), effectively capturing nonlinear parameter interactions without overfitting. The results demonstrate that while the Taguchi method efficiently identifies robust process windows with minimal experimentation, the DNN extends predictive capability across continuous, untested regions of the process space. Collectively, these findings establish a physics-informed, data-driven comparative framework for intelligent optimization of laser micromachining, with direct relevance to aerospace, biomedical, and precision micro-engineering applications.
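The abstract does not give the CQI formula, so the sketch below assumes an equal-weight, min-max-normalised composite of the three responses; the smaller-the-better signal-to-noise ratio, S/N = -10·log10(mean(y²)), is the standard Taguchi form. All numbers are illustrative:

```python
import numpy as np

# Three measured responses per trial (illustrative values):
kerf = np.array([42.0, 38.5, 45.2])   # kerf width, micrometres
haz  = np.array([55.0, 60.3, 48.0])   # heat-affected zone width, micrometres
chip = np.array([3.1, 2.4, 3.8])      # edge chipping, micrometres

def norm(x):
    """Min-max normalise so each response contributes on a common 0-1 scale."""
    return (x - x.min()) / (x.max() - x.min())

# Assumed equal-weight composite; the paper's actual CQI weighting may differ.
# Lower CQI = better cut integrity, since all three responses are defects.
cqi = (norm(kerf) + norm(haz) + norm(chip)) / 3

# Taguchi smaller-the-better signal-to-noise ratio per trial condition:
#   S/N = -10 * log10(mean(y^2))  (maximise S/N -> minimise the defect)
sn = -10 * np.log10(cqi**2 + 1e-12)   # epsilon guards against log10(0)

for i, (c, s) in enumerate(zip(cqi, sn), 1):
    print(f"trial {i}: CQI={c:.3f}  S/N={s:.2f} dB")
```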
Semantic understanding of camera-captured scene text images is an important problem in computer vision. Scene character recognition is the pivotal task in this problem, and deep learning is nowadays the most promising approach. However, the limited sample size of scene character datasets is a major hindrance to training deep networks. In this paper, we present (i) various augmentation techniques for increasing the sample size of such datasets, along with associated insights, (ii) an extended version of the popular Chars74k dataset (herein referred to as E-Chars74k), and (iii) benchmark performance on the developed E-Chars74k dataset. Experiments on various sets of data, such as digits, alphabets, and their combination, in both usual and wild scenarios, show a significant performance gain (a 20%–30% increase in scene character recognition accuracy). Notably, in all these experiments, a deep convolutional neural network with two conv-pool pairs is trained on a uniform training-test partition to allow comparison on an equal footing.
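A minimal sketch of label-preserving augmentations for character crops is shown below, using torchvision transforms; the specific transforms and magnitudes used in the paper may differ, and the input filename is hypothetical. Horizontal flips are deliberately avoided, since mirroring changes character identity (e.g. 'b' versus 'd'):

```python
from PIL import Image
import torchvision.transforms as T

# Label-preserving augmentations for scene character crops.
augment = T.Compose([
    T.RandomRotation(degrees=10),                      # small in-plane tilt
    T.RandomAffine(degrees=0, translate=(0.1, 0.1),
                   scale=(0.9, 1.1), shear=5),         # jitter placement/scale
    T.ColorJitter(brightness=0.3, contrast=0.3),       # lighting variation
    T.GaussianBlur(kernel_size=3),                     # defocus/motion proxy
])

img = Image.open("char_crop.png").convert("RGB")       # hypothetical input file
samples = [augment(img) for _ in range(10)]            # 10 augmented variants
```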
This study investigates the application of machine learning methods for the classification of fraudulent job postings on e-business platforms. Using the publicly available fake_job_postings.csv dataset, textual and categorical features of vacancies were processed and vectorised through TF-IDF, HashingVectorizer, and optimised TF-IDF. Eight machine learning algorithms were compared: Logistic Regression, Random Forest, Gradient Boosting, Decision Tree, Multinomial Naive Bayes, Linear SVC, K-Nearest Neighbours, and XGBoost. The experiments demonstrate that XGBoost achieved the best performance (Accuracy = 0.990, Precision = 0.982, Recall = 0.998, F1 = 0.990) across all feature representations. Its superior results can be attributed to the ability of boosted ensembles to capture complex non-linear relationships in high-dimensional feature spaces while maintaining robustness against noise and class imbalance.
However, it should be noted that the evaluation was performed on a single static dataset. While the high recall shows the model’s ability to reliably detect fraudulent ads in this context, questions remain about its generalisability. Fraud tactics evolve rapidly, and new job scams may significantly differ from patterns in the training data. This creates a potential risk of overfitting to dataset-specific features, which limits direct transfer to real-world scenarios without continuous retraining and monitoring. The practical contribution of the study is a reproducible framework that integrates text and categorical processing, vectorisation, hyperparameter optimisation, and comparative model benchmarking. Such a framework could be embedded into online job platforms to support automated filtering of suspicious ads. Still, its deployment requires additional measures: periodic retraining with updated data, integration with platform APIs, and the inclusion of explainability modules to ensure transparency and user trust. Overall, the research demonstrates that ensemble-based models, particularly XGBoost, offer strong potential for fraud detection in the e-business labour market. At the same time, further work is necessary to validate model robustness on unseen and evolving fraudulent job posting strategies, ensuring scalability and reliability in production environments.
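A minimal sketch of the vectorisation-plus-boosting pipeline is given below; the column names follow the public fake_job_postings.csv schema, while the hyperparameters and the choice of text fields are illustrative assumptions rather than the study's tuned configuration:

```python
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from xgboost import XGBClassifier

df = pd.read_csv("fake_job_postings.csv")
# Concatenate the main free-text fields; 'fraudulent' is the 0/1 target.
text = df["title"].fillna("") + " " + df["description"].fillna("")
y = df["fraudulent"]

X_train, X_test, y_train, y_test = train_test_split(
    text, y, test_size=0.2, stratify=y, random_state=0)

vec = TfidfVectorizer(max_features=20000, ngram_range=(1, 2), stop_words="english")
Xtr, Xte = vec.fit_transform(X_train), vec.transform(X_test)

# scale_pos_weight counters the heavy class imbalance (few fraudulent ads).
clf = XGBClassifier(n_estimators=300, max_depth=6, learning_rate=0.1,
                    scale_pos_weight=(y_train == 0).sum() / (y_train == 1).sum(),
                    eval_metric="logloss")
clf.fit(Xtr, y_train)
print(classification_report(y_test, clf.predict(Xte)))
```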
To improve the accuracy of time-series forecasting in the computational epidemiology domain of public health, and in particular to generate accurate alerts in a Real-time Outbreak and Disease Surveillance (RODS) system for predicting malaria incidence, this research studies an interdisciplinary approach to data analysis that combines statistical, machine learning (ML), and deep learning (DL) techniques. Two non-linear deep learning techniques, namely Long Short-Term Memory (LSTM), a subclass of Recurrent Neural Network (RNN), and Gated Recurrent Unit (GRU), and two non-linear machine learning techniques, namely Random Forest Regressor and non-linear Support Vector Machine Regressor, are applied and compared against the traditional linear statistical SARIMA model to forecast a longitudinal dataset of malaria incidences. While SARIMA and other traditional autoregressive (AR) models require fewer parameters but offer limited training capacity and predictive power, the ML and DL models show substantial and consistent performance improvements, handle noise and missing values better, and support multi-step forecasting. Moreover, overfitting can be combated by introducing densely connected residual links into the ML/DL networks.
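As an illustration of the windowed LSTM forecasting set-up, the sketch below trains a single-layer LSTM on a synthetic seasonal incidence series; the series, window length, and hyperparameters are stand-ins, not the study's data or configuration:

```python
import numpy as np
import tensorflow as tf

# Synthetic stand-in for a monthly malaria-incidence series (seasonal + noise).
rng = np.random.default_rng(0)
t = np.arange(240, dtype="float32")
series = 50 + 20 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 3, t.size).astype("float32")

def window(x, w=12):
    """Slide a w-step window over the series: X[i] = x[i:i+w], y[i] = x[i+w]."""
    X = np.stack([x[i:i + w] for i in range(len(x) - w)])
    return X[..., None], x[w:]          # (samples, w, 1 feature), next value

X, y = window(series)
split = int(0.8 * len(X))               # chronological train/test split

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(X.shape[1], 1)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X[:split], y[:split], epochs=30, batch_size=16, verbose=0)

mae = np.mean(np.abs(model.predict(X[split:], verbose=0).ravel() - y[split:]))
print(f"held-out MAE: {mae:.2f} cases")
```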
Predicting attitudes towards people with tuberculosis (TB) is a means of preserving public health and of strengthening the social ties that improve resilience to health threats. Assessing attitudes towards the sick in general reveals the educational level of a population and measures its capacity to contribute to collective health within community life. Tuberculosis is chosen as the case study here because of the disease's preponderance in Africa and the corresponding need for a change in attitudes. While attitudes clearly shape how individuals and communities organise themselves, putting in place an effective mechanism for evaluating the metrics that determine attitudes towards people with TB remains a challenge. Knowing a population's attitudes towards a disease is essential to understanding its collective values about that disease; predicting attitudes in the case of TB therefore supports health education across all social strata, while targeting training areas that are unexplored or require capacity building. Changing attitudes towards TB patients will help preserve public health, reduce stigma, improve understanding of the disease, and encourage supportive and preventive behaviours. Achieving these changes involves dismantling stereotypes, improving access to care, mobilising the media and social networks, including people with TB in society, and strengthening the commitment of public authorities. The approach adopted assesses the state of attitudes towards TB patients at a given time and in a specific area, based on the characteristics of the social strata living there. Analysing several metrics provided by machine learning algorithms makes it possible to identify differences in attitudes and serves as a decision-making aid for the strategies to be implemented. The work also investigates and analyses historical trends with machine learning algorithms to understand population attitudes towards TB patients.
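One way to read "metrics per social stratum" operationally is sketched below: a classifier is trained on survey-style features and its metrics are compared across strata. Everything here (features, labels, strata, and data) is synthetic and illustrative; it is not the study's dataset or model:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

# Synthetic survey stand-in: age bracket, education level, urban/rural stratum,
# prior TB contact -> binary attitude label (1 = supportive, 0 = stigmatising).
rng = np.random.default_rng(1)
n = 2000
X = np.column_stack([rng.integers(0, 5, n),    # age bracket
                     rng.integers(0, 4, n),    # education level
                     rng.integers(0, 2, n),    # stratum: 0=rural, 1=urban
                     rng.integers(0, 2, n)])   # prior contact with TB patient
logit = 0.5 * X[:, 1] + 0.8 * X[:, 3] - 1.0    # education and contact drive labels
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=1)
clf = RandomForestClassifier(n_estimators=200, random_state=1).fit(Xtr, ytr)

# Compare predicted supportive rates and metrics per stratum to locate
# differences in attitudes between population groups.
for stratum, name in [(0, "rural"), (1, "urban")]:
    m = Xte[:, 2] == stratum
    print(f"{name}: supportive rate={yte[m].mean():.2f}, "
          f"F1={f1_score(yte[m], clf.predict(Xte[m])):.2f}")
```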
Pseudo Random Number Generators (PRNGs) are deterministic and periodic in nature. Hybrid Pseudo Random Number Generators (HPRNGs) address some of these limitations by using time-based seeding with a modified Linear Congruential Generator (LCG). While HPRNGs mitigate determinism through dynamic time-based seeds, they still suffer from periodicity and potential seed-related issues. This study further addresses both the determinism and the periodicity of PRNGs by proposing an enhanced HPRNG that is more suitable for high-security applications.
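For orientation, here is a minimal sketch of the kind of time-seeded LCG that HPRNGs build on; the constants are the classic Numerical Recipes values, and the reseeding policy is an illustrative assumption. The paper's enhanced generator adds mechanisms beyond this baseline:

```python
import time

class TimeSeededLCG:
    """Baseline hybrid generator: a classic LCG whose state is periodically
    mixed with the nanosecond clock to disturb the fixed period m."""
    A, C, M = 1664525, 1013904223, 2**32   # Numerical Recipes LCG constants

    def __init__(self, reseed_every=10_000):
        self.state = time.time_ns() % self.M   # dynamic time-based seed
        self.reseed_every = reseed_every
        self.count = 0

    def next(self):
        self.count += 1
        if self.count % self.reseed_every == 0:
            # XOR fresh clock entropy into the state instead of replacing it,
            # so two reseeds in the same tick do not collapse to one stream.
            self.state ^= time.time_ns() % self.M
        self.state = (self.A * self.state + self.C) % self.M
        return self.state / self.M              # uniform float in [0, 1)

rng = TimeSeededLCG()
print([round(rng.next(), 4) for _ in range(5)])
```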
The integration of artificial intelligence (AI) in education is a promising transformation. Drawing on advanced technologies, AI enriches the learning experience through intelligent systems capable of analyzing, adapting, and personalizing teaching. Despite a growing volume of scientific publications, there remains a lack of critical synthesis on the real impact of AI on the role of teachers, student learning, and the transmission of knowledge. To fill this gap, this article proposes a systematic literature review, conducted using the PRISMA method, to identify the opportunities and limitations of AI in educational environments. From 1,248 publications extracted from the Scopus database between 2018 and 2024, 20 relevant studies were selected and analyzed after applying inclusion and exclusion criteria. The results show significant growth in research in this field and demonstrate that AI enables teachers to automate certain tasks, personalize teaching, and better meet learners' individual needs. However, significant obstacles remain, including a lack of digital skills, resistance to change, and ethical concerns. The study also points out that AI enhances learners' skills, promoting the personalization of learning pathways, the identification of struggling students, the adaptation of materials, and real-time engagement and monitoring. It also makes it possible to model and transmit knowledge through the creation and adaptation of digital educational resources. However, AI also presents certain limitations in the educational context, such as excessive dependence on technology, inequalities of access, automatic generation of answers without real learning, and issues relating to the confidentiality of personal data. AI is a powerful but complex lever in the field of education. Its effective integration requires targeted training for teachers, critical reflection on its uses, and a rigorous ethical framework. This review thus provides a solid basis for guiding future research towards complementary empirical studies, while supporting practitioners in a reasoned and beneficial adoption of AI in educational contexts.
The convergence of cyber threats and socio-political tensions has emerged as one of the most formidable challenges in the present global security landscape. This paper proposes a hybrid predictive model intended to counter these real-world, multidimensional attack vectors. The model integrates cyber threat hunting techniques with socio-political risk assessment methodologies to comprehensively forecast the cybersecurity threats that accompany social unrest scenarios. Cyber threat data is collected from sources such as the Offensive Defensive Intrusion Detection System (OD-IDS2022) and the Aegean Wi-Fi Intrusion Dataset (AWID3), while social terror attack information is gathered from the Global Database of Events, Language, and Tone (GDELT) Project and the Armed Conflict Location & Event Data (ACLED) project, yielding a bidirectional dataset that captures both the cyber and socio-political risk landscapes. The model's predictive robustness is established through k-fold cross-validation and feature importance evaluation. This multidisciplinary approach offers a synoptic understanding of emerging and future security threats and enables proactive measures to secure national and transnational borders.
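A minimal sketch of the validation step named in the abstract (k-fold cross-validation plus feature importance evaluation) is shown below, run on a synthetic stand-in for the fused cyber/socio-political feature table; the model choice, features, and class balance are illustrative assumptions:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the fused feature table (e.g. intrusion alert rates,
# protest event counts, conflict intensity); label = unrest-linked cyber attack.
X, y = make_classification(n_samples=3000, n_features=12, n_informative=6,
                           weights=[0.85, 0.15], random_state=7)

clf = RandomForestClassifier(n_estimators=300, random_state=7)

# k-fold cross-validation (k=5) gives a robustness estimate rather than a
# single train/test split figure.
scores = cross_val_score(clf, X, y, cv=5, scoring="f1")
print(f"5-fold F1: {scores.mean():.3f} +/- {scores.std():.3f}")

# Impurity-based feature importances after fitting on the full table.
clf.fit(X, y)
for i in np.argsort(clf.feature_importances_)[::-1][:5]:
    print(f"feature {i}: importance {clf.feature_importances_[i]:.3f}")
```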