An intrusion detection system (IDS) is a software or hardware component that monitors data and analyses it to identify attacks against a system or network. Traditional IDS approaches make the system more complicated and less efficient, because analysing the relevant properties is a complex and time-consuming process. Therefore, this research work focuses on a network intrusion detection and classification (NIDCS) system using a modified convolutional neural network (MCNN) with recursive feature elimination (RFE). Initially, the dataset is balanced with the help of the local outlier factor (LOF), which finds anomalies and outliers by comparing the deviation of a single data point with that of its neighbours. Then, a feature selection approach named RFE is applied to eliminate the weakest features until the desired number of features is reached. Finally, the optimal features are used to train the MCNN classifier, which classifies intrusions such as probe, denial-of-service (DoS), remote-to-user (R2U), and user-to-root (U2R) attacks, and identifies normal data. The proposed NIDCS system achieved higher performance, with 99.3% accuracy and a false alarm rate (FAR) of 3.02, compared to state-of-the-art NIDCS approaches such as deep neural networks (DNN), ResNet, and the gravitational search algorithm (GSA).
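The two preprocessing stages described above (LOF-based outlier removal followed by RFE) can be sketched with scikit-learn's `LocalOutlierFactor` and `RFE`. This is a minimal illustration on synthetic data; the base estimator, neighbourhood size, and feature counts are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))           # 200 samples, 10 features (synthetic)
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # labels depend on features 0 and 1

# Stage 1: flag outliers by comparing each point's local density with
# that of its neighbours (LOF); keep only the inliers (labelled +1).
lof = LocalOutlierFactor(n_neighbors=20)
inlier_mask = lof.fit_predict(X) == 1
X_clean, y_clean = X[inlier_mask], y[inlier_mask]

# Stage 2: recursively eliminate the weakest features until the
# desired number of features remains.
selector = RFE(LogisticRegression(max_iter=1000), n_features_to_select=4)
selector.fit(X_clean, y_clean)
X_reduced = selector.transform(X_clean)
print(X_reduced.shape)
```

In a full pipeline, `X_reduced` would then be fed to the classifier (the MCNN in the paper's case).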
Sleep is a critical biological process required for physical recovery, cognitive function, emotional regulation, and sound health. Conventional techniques for evaluating sleep quality are usually costly and intrusive, especially when they rely on sleep clinics and advanced sensors. Rather than using several factors to predict sleep quality, most earlier studies employed only a single factor and a small dataset, and their results were less accurate because they did not apply machine learning to investigate the causes of poor sleep quality. This paper introduces a machine-learning (ML) based method for assessing and predicting sleep quality using a larger dataset and the Pittsburgh Sleep Quality Index (PSQI). To find the best ML model for predicting sleep quality, the proposed system tests eight classifiers. The results show that the CatBoost classifier outperforms the other models, with an accuracy of 90.1%, precision of 87%, recall of 88%, and F1-score of 87%. The proposed prediction model also outperformed previous works in terms of accuracy, precision, and recall by 12%, 8%, and 11%, respectively. This paper also describes a web application with features such as personalized sleep quality prediction, result checking, improvement suggestions, and doctor consultation services. According to the review results, up to 65% of users agreed that the proposed sleep quality assistance web application features were appropriate and necessary.
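The model-selection step described above (train several classifiers, keep the most accurate) can be sketched as follows. The data here is synthetic rather than PSQI responses, and scikit-learn's `GradientBoostingClassifier` stands in for CatBoost so the snippet needs no extra packages; both substitutions are assumptions for illustration only.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Synthetic stand-in for tabular sleep-questionnaire features.
X, y = make_classification(n_samples=500, n_features=12, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "logreg": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(random_state=0),
    "boosting": GradientBoostingClassifier(random_state=0),  # CatBoost stand-in
}
scores = {name: accuracy_score(y_te, m.fit(X_tr, y_tr).predict(X_te))
          for name, m in models.items()}
best = max(scores, key=scores.get)
print(best, round(scores[best], 3))
```

The paper's study follows the same shape with eight candidate classifiers and real PSQI-labelled data.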
In recent years, the rising prevalence of chronic illness has led to an increase in disability among patients. Extensive research has been done to enhance both the functional abilities and the quality of life of affected individuals. Researchers have examined the contributions of numerous scholars, keywords, and countries in these specific fields. However, few state-of-the-art bibliometric analyses have been conducted to capture the quantitative aspects of the vast research field of rehabilitation. We have carefully selected 427 core papers from the Web of Science database, spanning 1999 to 2022, in which machine learning (ML) or deep learning (DL) is used in the rehabilitation field. Our analysis focuses on citation patterns, trend analysis, and collaborations between countries, as well as influential keywords, offering a detailed overview of global trends in this interdisciplinary domain. Additionally, we visualize the research trends of various authors and countries, which provides invaluable insights into research impact as well as collaboration networks. Overall, this paper aims to shape the evolving field of rehabilitation by providing an in-depth analysis of the citation landscape, key researchers, and international collaborations.
Autism spectrum disorder (ASD) is a neurological condition that affects brain function from an early stage. Individuals with autism experience several difficulties in communication and social interaction. Detecting ASD from face images is a challenging problem in computer vision. In this paper, a hybrid GEfficient-Net with a Grey Wolf Optimization (GWO) algorithm for detecting ASD from facial images is proposed. The proposed approach combines the advantages of both EfficientNet and GoogleNet. Initially, the face images from the dataset are pre-processed, and the facial features are extracted with the VGG-16 feature extraction technique, which learns the representation of each network layer to extract the most discriminative features. The hyperparameters of GoogleNet are optimally selected with the GWO algorithm. The proposed approach is uniformly scaled in all dimensions to enhance performance. The approach is evaluated on the Autistic children's face image dataset, and performance is computed in terms of accuracy, sensitivity, specificity, G-mean, etc. Moreover, the proposed approach improves accuracy to 0.9654 and reduces the error rate to 0.0512. The experimental outcomes demonstrate that the proposed ASD diagnosis approach achieves better performance.
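The GWO step used above for hyperparameter selection can be illustrated with a minimal Grey Wolf Optimizer. This sketch minimizes a toy quadratic objective standing in for validation loss over two hyperparameters; the population size, iteration count, and objective are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def gwo_minimize(f, dim, bounds, n_wolves=12, n_iter=60, seed=0):
    """Minimal Grey Wolf Optimizer: each wolf moves toward the three
    best solutions (alpha, beta, delta); the coefficient `a` decays
    from 2 to 0, shifting the pack from exploration to exploitation."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    wolves = rng.uniform(lo, hi, size=(n_wolves, dim))
    for t in range(n_iter):
        fitness = np.array([f(w) for w in wolves])
        order = np.argsort(fitness)
        alpha, beta, delta = wolves[order[:3]]  # fancy indexing copies
        a = 2 - 2 * t / n_iter
        for i in range(n_wolves):
            new_pos = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2 * a * r1 - a, 2 * r2
                D = np.abs(C * leader - wolves[i])
                new_pos += leader - A * D
            wolves[i] = np.clip(new_pos / 3, lo, hi)
    fitness = np.array([f(w) for w in wolves])
    return wolves[np.argmin(fitness)], fitness.min()

# Toy objective standing in for validation loss; optimum at (0.5, 0.5).
best_x, best_f = gwo_minimize(lambda x: np.sum((x - 0.5) ** 2),
                              dim=2, bounds=(0, 1))
print(best_x, best_f)
```

In the paper's setting, the objective would instead evaluate GoogleNet with a candidate hyperparameter vector and return its validation error.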
While creating the system for tonality recognition and text categorization in the news, a study of the subject area was conducted, which enriched the understanding of text-analysis processes in the mass media, and the necessary data for further processing was collected. The work resulted in a program that consists of an information parser, a data analyser and cleaner, a large language model (LLM), a neural network, and a database of vectorized data. These components were integrated into the user interface and implemented as a program window. The program can analyse news texts, determining their tone and categories, while providing the user with a convenient interface for entering text and receiving analysis results. The created system is therefore a powerful tool for the automated analysis of textual data in the mass media, which can be used for various purposes, including monitoring the news space and analysing public opinion. The developed information technology successfully meets the set tasks of tonality analysis and news categorization: it effectively collects, analyses, and classifies news materials, allowing users to receive timely and objective information. Its architecture and functionality allow for easy changes and additions in the future, making it a flexible and adaptable tool for news analytics and decision-making in various business sectors.
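The categorization component of such a pipeline can be sketched with a classic vectorize-then-classify approach. The headlines, category labels, and the TF-IDF-plus-Naive-Bayes model below are illustrative assumptions; the described system uses an LLM and a neural network over a vectorized database rather than this simple baseline.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny hypothetical corpus of labelled headlines.
texts = ["stocks rally as markets open higher",
         "team wins the championship final",
         "central bank raises interest rates",
         "star striker scores twice in derby"]
labels = ["economy", "sports", "economy", "sports"]

# Vectorize the text (TF-IDF) and train a simple category classifier.
model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(texts, labels)
print(model.predict(["bank cuts rates amid market fears"])[0])
```

A production system would replace the toy corpus with parsed and cleaned news items and persist the vectors in the database for reuse.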
Code-switching, the mixing of words or phrases from multiple, grammatically distinct languages, introduces semantic and syntactic complexities that complicate automated text classification. Although code-switching is a common occurrence in informal text-based communication among bilingual and multilingual users of digital spaces, its use to spread misinformation is relatively unexplored. In Kenya, for instance, code-switched Swahili-English is prevalent on social media. Our main objective in this paper was to systematically review code-switching, particularly the use of Swahili-English code-switching to spread misinformation on social media in the Kenyan context. Additionally, we aimed to pre-process a Swahili-English code-switched dataset and develop a misinformation classification model trained on it. We discuss the process we followed to develop the code-switched Swahili-English misinformation classification model. The model was trained and tested on the PolitiKweli dataset, the first Swahili-English code-switched dataset curated for misinformation classification. The dataset was collected from the Twitter (now X) social media platform, focusing on text posted during the electioneering period of the 2022 general elections in Kenya. The study experimented with two types of word embeddings, GloVe and FastText. FastText uses character n-gram representations that help generate meaningful vectors for rare and unseen words in the code-switched dataset. We experimented with both classical machine learning algorithms and deep learning algorithms. A bidirectional long short-term memory (BiLSTM) network showed the best performance, with an F-score of 0.89. The model was able to classify code-switched Swahili-English political misinformation text as fake, fact, or neutral. This study contributes to recent research efforts in developing language models for low-resource languages.
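The character n-gram idea that makes FastText robust to rare and unseen code-switched tokens can be shown in a few lines. The n-gram range below matches FastText's common default (3 to 5 characters), but the word and helper function are illustrative, not part of the paper's code.

```python
def char_ngrams(word, n_min=3, n_max=5):
    """FastText-style character n-grams: the word is wrapped in
    boundary markers '<' and '>' so prefixes and suffixes are
    distinguishable from word-internal substrings."""
    w = f"<{word}>"
    grams = []
    for n in range(n_min, n_max + 1):
        grams.extend(w[i:i + n] for i in range(len(w) - n + 1))
    return grams

# Even an unseen Swahili token decomposes into subword units that can
# overlap with n-grams observed during training.
print(char_ngrams("habari")[:4])
```

A word's FastText vector is the sum of the vectors of these n-grams (plus the whole word, if known), which is why out-of-vocabulary code-switched words still receive meaningful embeddings.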
About one person dies every minute from cardiovascular disease; consequently, it has almost surpassed war as the largest cause of death in the twenty-first century. In cardiology, early and accurate diagnosis of heart illness is a cornerstone of effective healthcare. Predictive analytics based on machine-learning algorithms can contribute greatly to the early detection of cardiovascular disease. This study evaluates the data preprocessing techniques involved in building machine learning models to predict cardiovascular disease and identifies the features contributing to cardiac attacks. A novel data transformation technique, the superlative boundary binning method, is proposed to enhance machine learning and ensemble learning classification models for predicting cardiac illness from independent physiological feature parameters. The results revealed that the ensemble learning classifier AdaBoost, using the superlative boundary binning method, performed best, with a classification accuracy of 93% compared with the other data transformation and machine learning classifier models.
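The abstract does not specify how the superlative boundary binning rule chooses its boundaries, so as a generic stand-in, here is plain equal-width binning of a physiological feature, the kind of discretization that such transformation methods refine. The cholesterol values and bin count are hypothetical.

```python
import numpy as np

def equal_width_bins(x, n_bins=4):
    """Generic equal-width binning (an illustrative stand-in; the
    paper's superlative boundary binning rule is not specified here).
    Returns an integer bin index in [0, n_bins-1] for each value."""
    edges = np.linspace(x.min(), x.max(), n_bins + 1)
    # Using only the interior edges maps values into 0..n_bins-1.
    return np.digitize(x, edges[1:-1])

# Hypothetical cholesterol readings (mg/dL) discretized into 4 bins.
chol = np.array([180.0, 200.0, 239.0, 240.0, 300.0, 400.0])
print(equal_width_bins(chol, n_bins=4))
```

Discretized features like these would then be fed to the AdaBoost and other classifiers compared in the study.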
This research paper introduces SecretCentric, an innovative automated hardware-based password management system addressing the challenges of widely used password authentication methods, which have long been criticized for their poor performance. Password management plays a crucial role in protecting users' digital security and privacy, with key factors including password generation, storage, renewal, and reuse mitigation. Although numerous password managers and solutions have been introduced to tackle these challenges, password management automation has never been thoroughly explored. This study aims to revolutionize the field by relieving users of the burden of manual password management through automating the entire process. A comprehensive survey yielded insights into user perceptions of password management and prevalent malpractices. SecretCentric was designed to maximize the security-usability trade-off while aligning with the identified user expectations. Preliminary evaluations indicate that SecretCentric offers significant improvements over existing options, highlighting the necessity for an automated solution that balances security and usability in an era of ever-increasing online services. The system's success demonstrates the importance of proper password management rather than replacement, contributing to research advancement in user authentication and credential management.
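One building block of automated password management, strong password generation, can be sketched with Python's standard library. This is a generic illustration, not SecretCentric's actual generation policy; the character classes and length are assumptions.

```python
import secrets
import string

def generate_password(length=16):
    """Generate a random password containing at least one character
    from each class, drawn from the OS CSPRNG via `secrets` (never
    use the non-cryptographic `random` module for this)."""
    classes = [string.ascii_lowercase, string.ascii_uppercase,
               string.digits, "!@#$%^&*"]
    alphabet = "".join(classes)
    while True:
        pwd = "".join(secrets.choice(alphabet) for _ in range(length))
        # Resample until every character class is represented.
        if all(any(c in cls for c in pwd) for cls in classes):
            return pwd

print(generate_password())
```

In a fully automated system, generation like this would be paired with per-site storage, scheduled renewal, and reuse detection so the user never handles the secret directly.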
In this work, an analysis of modern methods of Educational Data Mining (EDM) was carried out, on the basis of which a set of EDM methods was developed for the training of vocational education teachers. The basic methods of EDM are considered, namely prediction, clustering, relationship mining, distillation of data for human judgment, and discovery with models. The possibilities of using artificial neural networks, in particular long short-term memory (LSTM) networks, to predict the results of the educational process are described. The main methods of clustering and segmentation of educational data are considered. The basic EDM methods are complemented by specialized methods of digital image pre-processing and artificial intelligence, taking into account the peculiarities of training future specialists in engineering and pedagogical specialties. As specialized methods of digital image pre-processing, filtering, contrast enhancement, and contour selection are used. As specialized methods of artificial intelligence, image segmentation, object detection in images, and object detection using fuzzy logic are used. Methods of object detection in images using convolutional neural networks and the Viola-Jones method are described. To process data with a certain degree of uncertainty, it is proposed to apply EDM and fuzzy logic methods in an integrated manner. Ways of integrating fuzzy logic with methods of data clustering, image segmentation, and object detection in images are considered. The possibilities of applying the developed complex of specialized EDM methods in the educational process, in particular when carrying out STEM (science, technology, engineering, and mathematics) projects, are described.
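The fuzzy logic component mentioned above rests on membership functions that grade how strongly a value belongs to a set. Here is a minimal sketch using triangular membership functions over a hypothetical 0-100 assessment score; the set definitions are illustrative, not taken from the work.

```python
def tri_membership(x, a, b, c):
    """Triangular fuzzy membership: 0 outside (a, c), rising linearly
    to a peak of 1 at b, then falling linearly back to 0."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical fuzzy sets over a 0-100 assessment score.
sets = {"low": (0, 0, 50), "medium": (30, 50, 70), "high": (50, 100, 100)}

score = 60.0
degrees = {name: tri_membership(score, *abc) for name, abc in sets.items()}
print(degrees, max(degrees, key=degrees.get))
```

Uncertain educational data (e.g. a score of 60 that is partly "medium" and partly "high") keeps graded memberships in several sets rather than being forced into one crisp cluster, which is what makes the integration with clustering and segmentation useful.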
The 802.11ac protocol is widely utilized in wireless local area networks (WLANs) because of its effective 5 GHz networking technology. Several path-loss and link-speed (LS) prediction models have previously been employed to aid the effective design of 802.11 WLAN systems by predicting the received signal strength (RSS) and LS between the client and the access point (AP). However, the majority of them fail to account for the numerous indoor propagation phenomena that affect signal propagation in complex environments. These include shadowing, which influences RSS, especially in a network with many moving parts, and small-scale fading, where signal reflections, obstacles, and dispersion lead to RSS fluctuations. Therefore, accounting for shadow fading in the LS estimation model is critical for improving estimation accuracy. Previously, we proposed a modification of the simple log-distance model that takes shadowing variables into account, dynamically optimizing the RSS and LS estimation precision of the earlier model. Although our modified model outperforms the prior model, its accuracy has not been evaluated against a wide range of other mathematical models. In this paper, we present a performance investigation of various estimation models for RSS and LS estimation in 802.11ac WLANs under various scenarios and analyse their accuracy using several statistical error measures. To test its relative effectiveness, the proposed modified model's performance is also compared against two existing machine learning (ML) approaches. To calculate the models' parameters, including the shadowing factor, we first present experimental RSS and LS results for an 802.11ac MU-MIMO link. Then, we tune the path-loss exponent, shadowing factors, and other model parameters based on the experimental data. Our estimation results indicate that our modified model is more precise than the other mathematical estimation models, and its accuracy is very similar to that of the random forest (RF) ML model, across a wide variety of scenarios and with less error.
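The log-distance model with log-normal shadowing that the abstract builds on can be written down directly: PL(d) = PL(d0) + 10 n log10(d/d0) + X_sigma, with RSS = Ptx - PL(d). The sketch below assumes a 1 m reference distance; the reference loss, path-loss exponent, and transmit power are illustrative placeholders, not the paper's fitted values.

```python
import numpy as np

def rss_log_distance(d, pl_d0=40.0, n=3.0, tx_power=20.0,
                     sigma=0.0, rng=None):
    """Log-distance path-loss model with optional log-normal shadowing:
        PL(d) = PL(d0) + 10*n*log10(d/d0) + X_sigma
        RSS(d) = Ptx - PL(d)
    with d0 = 1 m. sigma is the shadowing standard deviation in dB;
    sigma=0 gives the deterministic simple log-distance model."""
    d = np.asarray(d, dtype=float)
    path_loss = pl_d0 + 10.0 * n * np.log10(d)
    if sigma > 0:
        rng = rng or np.random.default_rng(0)
        path_loss = path_loss + rng.normal(0.0, sigma, size=d.shape)
    return tx_power - path_loss

# Deterministic case at 10 m: PL = 40 + 30 = 70 dB, so RSS = -50 dBm.
print(rss_log_distance(10.0))
# Shadowed case: repeated draws scatter around the deterministic value.
print(rss_log_distance(10.0, sigma=4.0))
```

Fitting this model to measurements means tuning `pl_d0`, `n`, and `sigma` against the experimental RSS data, exactly the parameter-tuning step the abstract describes; an LS estimate then follows by mapping the predicted RSS to a modulation/coding rate.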