ISSN: 2074-904X (Print)
ISSN: 2074-9058 (Online)
DOI: https://doi.org/10.5815/ijisa
Website: https://www.mecs-press.org/ijisa
Published By: MECS Press
Frequency: 6 issues per year
Number(s) Available: 140
IJISA is committed to bridging the theory and practice of intelligent systems. From innovative ideas to specific algorithms and full system implementations, IJISA publishes original, peer-reviewed, high-quality articles in the areas of intelligent systems. IJISA is a well-indexed scholarly journal and indispensable reading and reference for people working at the cutting edge of intelligent systems and applications.
IJISA has been abstracted or indexed by several world-class databases: Scopus, Google Scholar, Microsoft Academic Search, CrossRef, Baidu Wenku, IndexCopernicus, IET Inspec, EBSCO, JournalSeek, ULRICH's Periodicals Directory, WorldCat, Scirus, Academic Journals Database, Stanford University Libraries, Cornell University Library, UniSA Library, CNKI Scholar, ProQuest, J-Gate, ZDB, BASE, OhioLINK, iThenticate, Open Access Articles, Open Science Directory, National Science Library of Chinese Academy of Sciences, The HKU Scholars Hub, etc.
IJISA Vol. 17, No. 6, Dec. 2025
REGULAR PAPERS
This paper explores the application of machine learning to enhance boiler efficiency and cost management at a uranium mine in Africa. The current steam control system relies on a feedforward loop, which adjusts based on slurry flow into the leach tank, and a feedback loop, which regulates steam to a setpoint. However, this method is inefficient, as it does not account for slurry temperature variations, leading to unstable control and suboptimal steam usage. To address these limitations, this study applies the Extra Trees algorithm to predict steam demand more accurately. The data-driven approach achieves a 6.6% reduction in steam consumption and a 2% decrease in heavy fuel oil (HFO) usage, resulting in cost savings and improved sustainability. Across multiple evaluation metrics, the Extra Trees model proved to be the most accurate and consistent algorithm, achieving a 96.67% R-squared score and a Root Mean Square Error (RMSE) of 1131.37 kg, indicating minimal deviation between actual and predicted values. The findings highlight the shortcomings of traditional control strategies under fluctuating conditions and demonstrate how advanced feature engineering enhances predictive accuracy. By integrating machine learning into operational workflows, this research provides actionable insights to improve boiler performance, process stability, and overall efficiency.
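A minimal sketch of the modelling approach the abstract describes, not the authors' code: the feature names and synthetic data below are illustrative stand-ins for the mine's engineered boiler features.

```python
# Hedged sketch of an Extra Trees steam-demand regressor; features and data
# are synthetic placeholders, not the paper's dataset.
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(42)
n = 2000
slurry_flow = rng.uniform(50, 150, n)    # assumed engineered features
slurry_temp = rng.uniform(20, 45, n)
ambient_temp = rng.uniform(10, 35, n)
X = np.column_stack([slurry_flow, slurry_temp, ambient_temp])
y = 80 * slurry_flow - 120 * slurry_temp + rng.normal(0, 500, n)  # toy steam demand (kg)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)
model = ExtraTreesRegressor(n_estimators=300, random_state=42).fit(X_tr, y_tr)
pred = model.predict(X_te)
print("R2:  ", r2_score(y_te, pred))
print("RMSE:", np.sqrt(mean_squared_error(y_te, pred)), "kg")
```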
Thunderstorms are weather disturbances that can cause lightning, stormy winds, dense clouds, tornadoes, and heavy rain. They can inflict extensive damage on people's lives, property, and economies, as well as on livestock and national infrastructure, so early warning can save lives and property. Previous thunderstorm prediction research did not deliver a high-accuracy daily prediction system for Bangladeshi citizens that assesses a wide range of meteorological variables. To address this gap, this work develops a high-accuracy, localized daily thunderstorm prediction system that analyzes various meteorological factors, dates, and specific location information. The dataset was analyzed using a variety of models, including traditional statistical models such as ARMA, ARIMA, and SARIMA, the XGBoost ensemble method, and deep learning models such as ANN, LSTM, and GRU. The results show that the recurrent neural network models, particularly GRU and LSTM, outperform the others in terms of RMSE, R², MAE, and MAPE. The GRU model outperformed all other schemes, with an RMSE of 0.794, R² of 0.998, MAE of 0.476, and MAPE of 3.544%. The accompanying mobile application provides users with accurate, localized thunderstorm forecasts, supporting better safety, event planning, and environmental preparedness. A user-feedback-based assessment of the mobile app confirms that more than 55% of users are highly satisfied with the thunderstorm assistance app's features and usefulness.
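A hedged sketch of the kind of GRU regressor the abstract reports as best-performing; the window length, feature count, and hyperparameters are assumptions, and the data is a random placeholder for the meteorological inputs.

```python
# Minimal GRU forecaster sketch for daily thunderstorm prediction; sizes and
# data are illustrative, not the paper's configuration or dataset.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

timesteps, n_features = 14, 8  # assumed: 14-day window of 8 weather variables
X = np.random.rand(1000, timesteps, n_features).astype("float32")  # placeholder
y = np.random.rand(1000, 1).astype("float32")                      # placeholder target

model = keras.Sequential([
    layers.Input(shape=(timesteps, n_features)),
    layers.GRU(64),
    layers.Dense(32, activation="relu"),
    layers.Dense(1),  # predicted thunderstorm indicator / intensity
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model.fit(X, y, epochs=5, batch_size=32, validation_split=0.2)
```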
Understanding the prevalence of genetic dwarfism and developing detection techniques are major challenges. Genetic dwarfism is defined by below-average stature resulting from genetic alterations. In addition to advances in detection through machine learning algorithms, this paper investigates the analytical interpretation and comparison of genetic dwarfism statistics. The first section explores the epidemiological context of genetic dwarfism, including prevalence rates, frequencies of genetic mutations, and the range of clinical presentations across various groups. The figures emphasize the intricacy of the genetic variants that lead to dwarfism and the necessity for rigorous analytical methods. Improving detection and diagnostic precision through machine learning appears to be a promising approach. Machine learning algorithms are trained to identify subtle patterns suggestive of genetic dwarfism by utilizing datasets that include genetic profiles, medical histories, and phenotypic features. Effective methods for determining genetic markers and forecasting clinical outcomes related to dwarfism include supervised learning algorithms (e.g., decision trees, support vector machines) and deep learning architectures (e.g., Convolutional Neural Networks (CNNs), Generative Adversarial Networks (GANs), Recurrent Neural Networks (RNNs), Autoencoders, Capsule Networks (CapsNets), Graph Convolutional Networks (GCNs), and Long Short-Term Memory (LSTM) networks). A side-by-side comparison highlights the benefits and drawbacks of machine learning techniques relative to conventional diagnostic techniques. Machine learning shines at large-scale genetic data processing and subtle pattern detection, but deciphering intricate genetic connections and guaranteeing model interpretability in clinical settings remain difficult tasks. Moreover, the interdisciplinary nature of tackling genetic dwarfism with modern computational tools is underscored by ethical problems pertaining to data privacy, informed consent, and equitable access to genetic testing. Ultimately, this paper summarizes the state of the art in genetic dwarfism statistics and machine learning applications, promoting ongoing multidisciplinary cooperation to maximize the effectiveness of therapeutic approaches and diagnosis for people with genetic dwarfism.
This paper presents an ensemble model for determining the manifestation of emotion intensities from an audio dataset. An emotion denotes a mental state of the human mind and/or its thought processes that exhibits a recognizable pattern, with emotional arousal closely mirrored in its vocal, facial, and/or bodily manifestations. We propose a stacking, late-fusion approach in which the best experimental outcomes from two base models, built with Random Forests and Extreme Gradient Boosting, are combined using simple majority voting. The RAVDESS audio dataset, a public, gender-balanced dataset built at Ryerson University, Canada, for the purpose of emotion study, was used; 80% of the dataset was used for training and 20% for testing. Two features, MFCC and Chroma, were supplied to the base models in a series of experimental setups, and the outcomes were evaluated using a confusion matrix, precision, recall, and F1-score, then compared to two state-of-the-art works on the KBES and RAVDESS datasets. This approach yielded an overall classification accuracy of 93%.
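A hedged late-fusion sketch in the spirit of the approach above: MFCC plus chroma features feeding Random Forest and XGBoost base models combined by majority vote. The feature matrix below is synthetic; `extract_features` only illustrates how such features could be computed.

```python
# Late-fusion ensemble sketch (not the authors' code); data is synthetic.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from xgboost import XGBClassifier

def extract_features(path):
    y, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=40).mean(axis=1)
    chroma = librosa.feature.chroma_stft(y=y, sr=sr).mean(axis=1)
    return np.concatenate([mfcc, chroma])     # 52-dim feature vector

rng = np.random.default_rng(0)
X = rng.standard_normal((160, 52))            # placeholder feature matrix
y = rng.integers(0, 8, 160)                   # 8 RAVDESS emotion classes

ensemble = VotingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
                ("xgb", XGBClassifier(n_estimators=200))],
    voting="hard",                            # simple majority voting, as above
)
ensemble.fit(X, y)
print(ensemble.predict(X[:5]))
```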
Multi-Focus Image Fusion (MFIF) plays an important role in the field of computer vision. It aims to merge multiple images with different focus depths into a single, all-in-focus image. Though deep learning-based methods have advanced the MFIF field, they vary significantly in fusion quality and robustness to different focus changes. This paper presents a performance analysis of three deep learning-based MFIF methods: ECNN (Ensemble-based Convolutional Neural Network), DRPL (Deep Regression Pair Learning), and SESF-Fuse. These techniques were selected for the public availability of their training and testing source code, which facilitates a thorough and reproducible analysis, and for their diverse architectural approaches to MFIF. For training, three datasets were used: ILSVRC2012, COCO2017, and DIV2K. Performance was evaluated on two publicly available MFIF datasets, Lytro and RealMFF, using four objective evaluation metrics, viz. Mutual Information, the gradient-based metric, the Piella metric, and the Chen-Varshney metric. Extensive qualitative and quantitative experiments analyze the effectiveness of each technique in terms of detail preservation, artifact reduction, consistency at boundary regions, texture fidelity, and other properties that jointly determine the feasibility of these methods for real-world applications. Ultimately, the findings illuminate the strengths and limitations of these deep learning approaches, providing valuable insights for future research and development of MFIF methodologies.
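Of the four metrics named above, mutual information is the simplest to illustrate. A hedged sketch of the MI fusion metric (MI between each source image and the fused result, summed), using placeholder images rather than the benchmark data:

```python
# Illustrative mutual-information fusion metric; images are random placeholders.
import numpy as np
from sklearn.metrics import mutual_info_score

def mi(a, b, bins=256):
    # MI computed from the joint histogram of two grayscale images
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    return mutual_info_score(None, None, contingency=hist)

src1 = np.random.randint(0, 256, (128, 128))  # placeholder source image A
src2 = np.random.randint(0, 256, (128, 128))  # placeholder source image B
fused = (src1 + src2) // 2                    # placeholder fused image

print("MI metric:", mi(src1, fused) + mi(src2, fused))
```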
Non-intrusive load monitoring (NILM) aims to estimate the operational states and power consumption of individual household appliances, providing real-time insights into energy usage for effective energy management and improved demand-side response strategies. This study addresses the challenge of accurately disaggregating household energy consumption data into individual appliances' consumption, an important requirement for effective energy management in smart homes. Traditional energy monitoring systems provide only aggregate data, limiting the ability to optimize energy consumption. To overcome these difficulties, this study proposes a Convolutional Neural Network (CNN)-based NILM model that disaggregates total energy usage into appliance-specific consumption for five key appliances: kettle, microwave, fridge, dishwasher, and washing machine. Unlike previous approaches, our model integrates a hybrid dataset from UK-DALE and REFIT, leveraging data fusion techniques to enhance generalization. The CNN architecture uses five convolutional layers for effective feature extraction, capturing temporal dependencies in appliance usage patterns, which yields improved MAE and SAE compared with similar published results. The preprocessing and hybridization stage involves missing-data imputation, appliance state labelling, feature normalization, and merging of the datasets. The developed model achieved an overall accuracy of 98.3% and an F1-score of 81.7% in seen scenarios, while in unseen environments it attained 96.5% accuracy and an F1-score of 58.1% when tested on the UK-DALE dataset. The seen scenario refers to testing on UK-DALE House 1 and REFIT House 2 data from the validation dataset, whereas the unseen scenario involves entirely new house data not used during training and validation. Post-processing techniques are shown to reduce errors and enhance the model's predictive accuracy. This study contributes to the advancement of NILM technologies by combining datasets, offering a robust and scalable solution for individual appliance energy monitoring, with significant implications for energy conservation and smart home efficiency.
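A hedged sketch of a five-convolutional-layer NILM network in the spirit of the model above; the window length, filter sizes, and sequence-to-point output are assumptions, not the paper's exact architecture.

```python
# Five-conv-layer CNN sketch for NILM disaggregation of one target appliance.
from tensorflow import keras
from tensorflow.keras import layers

window = 599  # assumed input window of aggregate-power samples
model = keras.Sequential([
    layers.Input(shape=(window, 1)),
    layers.Conv1D(30, 10, activation="relu", padding="same"),
    layers.Conv1D(30, 8, activation="relu", padding="same"),
    layers.Conv1D(40, 6, activation="relu", padding="same"),
    layers.Conv1D(50, 5, activation="relu", padding="same"),
    layers.Conv1D(50, 5, activation="relu", padding="same"),
    layers.Flatten(),
    layers.Dense(1024, activation="relu"),
    layers.Dense(1),  # predicted power of the target appliance
])
model.compile(optimizer="adam", loss="mae")
model.summary()
```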
Accurate histopathological image classification plays a crucial role in cancer detection and diagnosis. In automated cancer detection, extracting the histological features of malignant and benign tissues is a challenging task. This paper presents a modified octave convolution approach that extracts both high- and low-frequency features, providing a comprehensive representation of histopathological images. The proposed octave convolution model performs histopathological image classification using three optimization strategies. First, an alpha value of 0.5 is used to give equal importance to the high-frequency and low-frequency feature maps; this balanced approach ensures that the model effectively considers critical high-frequency features as well as low-frequency features of cancerous tissues. Second, the high-frequency and low-frequency feature maps are extracted and downsampled to half the spatial dimensions, reducing the computational cost compared to a standard CNN. Third, training and validation were conducted with the ReLU, PReLU, LeakyReLU, ELU, GELU, and Swish activation functions; the experiments showed PReLU to be the best activation function for capturing the intricate patterns inherent in cancer-related histopathological images. Combining all these optimization strategies, the proposed method achieved a classification accuracy of 93% while reducing the computational cost by 50%. Validation against pre-trained models, CNN variants, and vision transformer-based models confirmed the superior performance of the proposed model.
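For intuition, a heavily simplified octave-convolution block with alpha = 0.5, as discussed above: the input is split into high- and low-frequency branches with cross-frequency exchange via pooling and upsampling. This is a sketch of the general OctConv idea, not the authors' modified model.

```python
# Minimal octave-convolution block (alpha = 0.5); sizes are illustrative.
from tensorflow import keras
from tensorflow.keras import layers

def octave_block(x_high, x_low, filters, alpha=0.5):
    f_low = int(filters * alpha)
    f_high = filters - f_low
    # high -> high, and high -> low (downsample before conv)
    h2h = layers.Conv2D(f_high, 3, padding="same")(x_high)
    h2l = layers.Conv2D(f_low, 3, padding="same")(layers.AveragePooling2D(2)(x_high))
    # low -> low, and low -> high (upsample after conv)
    l2l = layers.Conv2D(f_low, 3, padding="same")(x_low)
    l2h = layers.UpSampling2D(2)(layers.Conv2D(f_high, 3, padding="same")(x_low))
    return layers.Add()([h2h, l2h]), layers.Add()([l2l, h2l])

inp = keras.Input(shape=(64, 64, 3))
high, low = inp, layers.AveragePooling2D(2)(inp)   # low branch at half resolution
high, low = octave_block(high, low, filters=32)    # PReLU etc. would follow here
model = keras.Model(inp, [high, low])
model.summary()
```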
Multi-objective optimization problems are crucial in real-world scenarios, where multiple solutions exist rather than a single one. Traditional methods like PERT/CPM often struggle to address such problems effectively. Meta-heuristic techniques, such as genetic algorithms and non-dominated sorting genetic algorithms (NSGA-II), are well-suited for finding true Pareto-optimal solutions. This paper introduces an enhanced NSGA-II algorithm, which utilizes Sobol sequences for initial population generation, ensuring uniform search space coverage and faster convergence. The proposed algorithm is validated using benchmark problems from the ZDT test suite and compared with state-of-the-art algorithms. Additionally, real-world optimization problems in project management, particularly the time-cost trade-off (TCT) problem, are solved using the enhanced NSGA-II. The performance evaluation includes key metrics such as standard deviation, providing a comprehensive assessment of the algorithm's efficiency. Experimental results confirm that the proposed method outperforms traditional NSGA-II and other meta-heuristic algorithms in maintaining a well-distributed Pareto front while ensuring computational efficiency.
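A hedged sketch of the Sobol-based initialization described above, using SciPy's quasi-random Sobol engine to spread the initial NSGA-II population uniformly over the decision space; the bounds and sizes are illustrative.

```python
# Sobol-sequence population initialization sketch for NSGA-II.
import numpy as np
from scipy.stats import qmc

n_var, pop_size = 10, 128        # a power-of-two population suits Sobol best
lower, upper = np.zeros(n_var), np.ones(n_var)   # assumed box bounds

sampler = qmc.Sobol(d=n_var, scramble=True, seed=1)
unit_pop = sampler.random(n=pop_size)            # points in [0, 1)^d
population = qmc.scale(unit_pop, lower, upper)   # mapped to the real bounds
print(population.shape)                          # (128, 10) initial population
```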
This study investigates logistic regression, a linear support vector machine, multinomial Naive Bayes, and Bernoulli Naive Bayes for classifying Libyan-dialect utterances gathered from Twitter. The dataset used is the QADI corpus, which consists of 540,000 sentences across 18 Arabic dialects. Preprocessing challenges include handling the inconsistent orthographic variations and non-standard spellings typical of the Libyan dialect. Chi-square analysis revealed that certain features, such as email mentions and emotion indicators, were not significantly associated with dialect classification and were thus excluded from further analysis. Two main experiments were conducted: (1) evaluating the significance of meta-features extracted from the corpus using the chi-square test, and (2) assessing classifier performance under different word and character n-gram representations. The classification experiments showed that Multinomial Naive Bayes (MNB) achieved the highest accuracy of 85.89% and an F1-score of 0.85741 when using a (1,2) word n-gram and (1,5) character n-gram representation. In contrast, logistic regression and the linear SVM exhibited slightly lower performance, with maximum accuracies of 84.41% and 84.73%, respectively. Additional evaluation metrics, including log loss, Cohen's kappa, and the Matthews correlation coefficient, further supported the effectiveness of MNB in this task. The results indicate that carefully selected n-gram representations and classification models play a crucial role in improving the accuracy of Libyan dialect identification. This study provides empirical benchmarks and insights for future research in Arabic dialect NLP applications.
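A minimal sketch of the best-performing configuration reported above: word (1,2) and character (1,5) n-grams feeding Multinomial Naive Bayes. The utterances and labels are placeholders, and the choice of raw counts (rather than, say, TF-IDF weighting) is an assumption.

```python
# Word + character n-gram MNB pipeline sketch; texts/labels are placeholders.
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

texts = ["شن الاخبار", "كيف حالك"]   # placeholder utterances
labels = ["LY", "EG"]                # placeholder dialect labels

clf = Pipeline([
    ("features", FeatureUnion([
        ("word", CountVectorizer(analyzer="word", ngram_range=(1, 2))),
        ("char", CountVectorizer(analyzer="char_wb", ngram_range=(1, 5))),
    ])),
    ("nb", MultinomialNB()),
])
clf.fit(texts, labels)
print(clf.predict(["شن الاخبار"]))
```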
Cervical cancer remains one of the biggest causes of cancer-related fatalities among women, especially in low- and middle-income nations where access to broad screening and early detection may be limited. Cervical cancer is curable if detected in its early stages, but asymptomatic progression frequently results in late diagnosis, which makes treatment more difficult and lowers survival chances. Even though they work well, current screening methods, including liquid-based cytology and Pap smears, have drawbacks in terms of consistency, sensitivity, and specificity. Recent developments in deep learning and artificial intelligence have shown promise for greatly improving cervical cancer detection and diagnosis. In this work, we introduce CervixCan-Net, a novel deep learning model created for the precise classification of cervical cancer from histopathology images. Our approach offers a solid and dependable classification solution by addressing common problems such as overfitting and computational inefficiency. A comparative investigation shows that CervixCan-Net performs better than many state-of-the-art models. With an impressive test accuracy of 99.83%, CervixCan-Net provides a scalable, automated cervical cancer classification solution that holds great promise for improving patient outcomes and diagnostic accuracy.
Cyberbullying is an intentional act of harassment across the complex domain of social media, carried out online using information technology. This research applied an unsupervised associative text-mining approach to automatically find cyberbullying words and patterns and to extract association rules from a collection of tweets based on the domain's frequent words. Furthermore, this research identifies the relationships between cyberbullying keywords and other cyberbullying words, generating knowledge discovery of different cyberbullying word patterns from unstructured tweets. The study revealed that the dominant frequent cyberbullying words are intelligence, personality, and insulting words that describe the behavior and appearance of female victims, together with sex-related words that humiliate female victims. The results suggest that an unsupervised associative approach to text mining can extract important information from unstructured text. Further, applying association rules can help in recognizing the relationships and meaning between keywords and other words, thereby generating knowledge discovery across different datasets of unstructured text.
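A hedged sketch of association-rule mining over tokenized tweets in the spirit of the approach above, using mlxtend's Apriori; the tweets, support, and confidence thresholds are placeholders.

```python
# Association-rule mining sketch over placeholder tokenized tweets.
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori, association_rules

tweets = [["ugly", "stupid", "girl"],
          ["stupid", "dumb", "girl"],
          ["ugly", "dumb"]]          # placeholder tokenized tweets

te = TransactionEncoder()
onehot = pd.DataFrame(te.fit(tweets).transform(tweets), columns=te.columns_)
frequent = apriori(onehot, min_support=0.5, use_colnames=True)
rules = association_rules(frequent, metric="confidence", min_threshold=0.6)
print(rules[["antecedents", "consequents", "support", "confidence"]])
```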
The Internet of Things (IoT) has extended internet connectivity to reach not just computers and humans, but most of the things in our environment. The IoT has the potential to connect billions of objects simultaneously, improving information sharing in ways that improve our lives. Although the IoT's benefits are vast, there are many challenges to adopting it in the real world due to its centralized server/client model: for instance, the scalability and security issues that arise from the excessive numbers of IoT objects in the network. The server/client model requires all devices to be connected and authenticated through the server, which creates a single point of failure. Therefore, moving IoT systems down a decentralized path may be the right decision. One of the popular decentralization systems is blockchain, a powerful technology that decentralizes computation and management processes and can solve many IoT issues, especially security. This paper provides an overview of the integration of blockchain with the IoT, highlighting the integration's benefits and challenges. Future research directions for blockchain with IoT are also discussed. We conclude that the combination of blockchain and IoT can provide a powerful approach that can significantly pave the way for new business models and distributed applications.
Stock market prediction has become an attractive research topic due to its important role in the economy and the benefits it offers. There is a pressing need to uncover the stock market's future behavior in order to avoid investment risks. The large amount of data generated by the stock market is considered a treasure of knowledge for investors. This study aims to construct an effective model that predicts stock market trends with a small error ratio and improved prediction accuracy. The model is based on sentiment analysis of financial news combined with historical stock prices, and it achieves better accuracy than previous studies by considering multiple types of news, related to both the market and the company, alongside historical prices. A dataset containing stock prices from three companies is used. The first step analyzes news sentiment to obtain text polarity using a naive Bayes algorithm; this step achieved prediction accuracies ranging from 72.73% to 86.21%. The second step combines the news polarities and historical stock prices to predict future stock prices, improving the prediction accuracy to up to 89.80%.
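A hedged sketch of the two-step pipeline above: (1) score news polarity with naive Bayes, then (2) combine polarity with historical prices to predict the trend. All data is placeholder, and the second-step classifier (logistic regression here) is an assumption, since the abstract does not name it.

```python
# Two-step sentiment-plus-prices sketch; data and step-2 model are assumptions.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.linear_model import LogisticRegression

news = ["profits surge on strong demand", "company faces lawsuit"]
polarity_labels = [1, 0]                        # placeholder: 1=positive, 0=negative
vec = TfidfVectorizer()
nb = MultinomialNB().fit(vec.fit_transform(news), polarity_labels)
polarity = nb.predict(vec.transform(news))      # step 1: news polarity

prices = np.array([[101.2], [99.8]])            # placeholder closing prices
X = np.hstack([prices, polarity.reshape(-1, 1)])  # step 2: fuse both signals
trend = np.array([1, 0])                        # placeholder next-day up/down
clf = LogisticRegression().fit(X, trend)
print(clf.predict(X))
```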
Artificial neural networks have been used in different fields of artificial intelligence, and more specifically in machine learning. Although other machine learning options are feasible in most situations, the ease with which neural networks lend themselves to problems including pattern recognition, image compression, classification, computer vision, and regression has earned them a remarkable place in the machine learning field. This research exploits neural networks as a data mining tool for predicting the number of times a student repeats a course, considering attributes relating to the course itself, the teacher, and the particular student. Neural networks were used to map the relationship between attributes of students' course assessments and the number of times a student will likely repeat a course before passing. The hope is that the ability to predict students' performance from such complex relationships can help fine-tune academic systems and policies in learning environments. To validate the power of neural networks in data mining, a Turkish students' performance database was used; feedforward and radial basis function networks were trained for this task. The networks' performance was evaluated in terms of achieved recognition rates and training time.
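A minimal feedforward-network sketch for the course-repeat prediction task above, using scikit-learn's MLP as a stand-in (the paper also trains a radial basis function network, not shown). Features and labels are random placeholders for the Turkish student dataset.

```python
# Feedforward network sketch for predicting course repeats; data is synthetic.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

X = np.random.rand(500, 6)          # placeholder course/teacher/student attributes
y = np.random.randint(0, 3, 500)    # placeholder: number of repeats (0, 1, 2)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
net = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
net.fit(X_tr, y_tr)
print("recognition rate:", net.score(X_te, y_te))
```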
The proliferation of Web-enabled devices, including desktops, laptops, tablets, and mobile phones, enables people to communicate, participate, and collaborate with each other in various Web communities, viz., forums, social networks, and blogs. Simultaneously, the enormous amount of heterogeneous data generated by the users of these communities offers an unprecedented opportunity to create and employ theories and technologies that search and retrieve relevant data from the huge quantity of information available and then mine it for opinions. Consequently, sentiment analysis, which automatically extracts and analyses the subjectivities and sentiments (or polarities) in written text, has emerged as an active area of research. This paper previews and reviews the substantial research on sentiment analysis, expounding its basic terminology, tasks, and granularity levels. It further gives an overview of the state of the art, describing some previous attempts to study sentiment analysis. Its practical and potential applications are also discussed, followed by the issues and challenges that will keep the field dynamic and lively for years to come.
Addressing scheduling problems with the best graph coloring algorithm has always been very challenging. The university timetable scheduling problem can be formulated as a graph coloring problem in which courses are represented as vertices and the presence of common students or teachers between courses is represented as edges; the problem then becomes coloring the vertices with the fewest possible colors. To accomplish this task, the paper presents a comparative study of graph coloring in university timetable scheduling using five algorithms: First Fit, Welsh-Powell, Largest Degree Ordering, Incidence Degree Ordering, and DSATUR. We take the Military Institute of Science and Technology, Bangladesh as a test case. The results show that the Welsh-Powell and DSATUR algorithms are the most effective at generating optimal schedules. The study also provides insights into the limitations and advantages of using graph coloring in timetable scheduling and suggests directions for future research with these algorithms.
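A small sketch of the timetable-as-graph-coloring formulation above, using networkx's greedy coloring with two strategies that correspond to the named algorithms (largest-first is Welsh-Powell-like; saturation-largest-first is DSATUR). The course/conflict graph is a placeholder.

```python
# Graph-coloring timetable sketch; the conflict graph is a placeholder.
import networkx as nx

G = nx.Graph()
# Edges join courses sharing students or teachers (placeholder conflicts).
G.add_edges_from([("Math", "Physics"), ("Math", "CS"), ("Physics", "Chem")])

for strategy in ["largest_first", "saturation_largest_first"]:
    coloring = nx.coloring.greedy_color(G, strategy=strategy)
    slots = max(coloring.values()) + 1   # colors used = timeslots needed
    print(strategy, "->", slots, "timeslots:", coloring)
```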
Climate change, a significant and lasting alteration in global weather patterns, is profoundly impacting the stability and predictability of global temperature regimes. As the world continues to grapple with the far-reaching effects of climate change, accurate and timely temperature predictions have become pivotal to various sectors, including agriculture, energy, and public health. Crucially, precise temperature forecasting assists in developing effective climate change mitigation and adaptation strategies. With the advent of machine learning techniques, we now have powerful tools that can learn from vast climatic datasets and provide improved predictive performance. This study compares three advanced machine learning models, XGBoost, Support Vector Machine (SVM), and Random Forest, in predicting daily maximum and minimum temperatures using a 45-year dataset from Visakhapatnam airport. Each model was rigorously trained and evaluated on key performance metrics including training loss, Mean Absolute Error (MAE), Mean Squared Error (MSE), Root Mean Squared Error (RMSE), R² score, Mean Absolute Percentage Error (MAPE), and Explained Variance Score. Although no single model dominated across all metrics, SVM and Random Forest showed slightly superior performance on several measures. These findings not only highlight the potential of machine learning techniques to enhance the accuracy of temperature forecasting but also stress the importance of selecting an appropriate model and performance metrics aligned with the task at hand. This research accomplishes a thorough comparative analysis, conducts a rigorous evaluation of the models, and highlights the significance of model selection.
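A hedged sketch of the three-model comparison above, evaluated on shared metrics; the data is a random placeholder for the 45-year weather features, and default hyperparameters are assumed.

```python
# Three-regressor comparison sketch on placeholder temperature data.
import numpy as np
from sklearn.svm import SVR
from sklearn.ensemble import RandomForestRegressor
from xgboost import XGBRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

X = np.random.rand(1000, 5)          # placeholder meteorological features
y = np.random.rand(1000) * 15 + 25   # placeholder daily max temperature (deg C)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for name, model in [("XGBoost", XGBRegressor()), ("SVM", SVR()),
                    ("RandomForest", RandomForestRegressor())]:
    pred = model.fit(X_tr, y_tr).predict(X_te)
    print(f"{name}: MAE={mean_absolute_error(y_te, pred):.2f} "
          f"RMSE={np.sqrt(mean_squared_error(y_te, pred)):.2f} "
          f"R2={r2_score(y_te, pred):.3f}")
```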
Non-functional requirements define the quality attributes of a software application, and it is necessary to identify them in the early stages of the software development life cycle. Researchers have proposed automatic classification of non-functional requirements using several Machine Learning (ML) algorithms combined with various vectorization techniques; however, the best combination for non-functional requirement classification remained unclear. In this paper, we examine whether different combinations of feature extraction techniques and ML algorithms vary in non-functional requirement classification performance, and we report the best approach. We conducted a comparative analysis on the publicly available PROMISE_exp dataset, which contains labelled functional and non-functional requirements. We first normalized the textual requirements from the dataset, then extracted features through Bag of Words (BoW), Term Frequency-Inverse Document Frequency (TF-IDF), Hashing, and Chi-Squared vectorization methods, and finally ran the 15 most popular ML algorithms to classify the requirements. The novelty of this work is the empirical analysis identifying the best combination of ML classifier and vectorization technique, which helps developers detect non-functional requirements early and take precise steps. We found that the combination of a linear support vector classifier and TF-IDF outperforms all other combinations, with an F1-score of 81.5%.
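A minimal sketch of the winning combination reported above, TF-IDF features with a linear support vector classifier; the requirements and labels are placeholders for the PROMISE_exp dataset.

```python
# TF-IDF + LinearSVC requirement-classification sketch; data is placeholder.
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

reqs = ["The system shall respond within 2 seconds.",
        "The user shall be able to export reports."]
labels = ["NFR", "FR"]              # placeholder classes

clf = Pipeline([("tfidf", TfidfVectorizer()), ("svc", LinearSVC())])
clf.fit(reqs, labels)
print(clf.predict(["The application shall encrypt all stored data."]))
```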
Along with the growth of the Internet, social media usage has drastically expanded. As people share their opinions and ideas more frequently on the Internet and across various social media platforms, there has been a notable rise in the number of consumer phrases that contain sentiment data. According to reports, cyberbullying frequently leads to severe emotional and physical suffering, especially in women and young children, and in certain instances victims have even attempted suicide. The bully may occasionally attempt to destroy the evidence, and even when the victim secures it, justice can still be a long way off. This work used OCR, NLP, and machine learning to design and implement a practical method for recognizing cyberbullying in images. Eight classifier techniques are compared for accuracy against two key feature representations, the BoW model and TF-IDF. These classifiers are used to understand and recognize bullying behaviors. Testing the suggested method on a cyberbullying dataset showed that linear SVC after OCR and logistic regression perform best, achieving an accuracy of 96 percent. This study helps provide a solid outline, with design and implementation details, of methods for detecting online bullying from a screenshot.
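A hedged sketch of the OCR-then-classify pipeline above: pytesseract extracts the text from a screenshot, then a TF-IDF plus linear SVC model flags bullying. The image path and training texts are placeholders, and a local Tesseract installation is assumed.

```python
# OCR + text-classification sketch; screenshot path and texts are placeholders.
import pytesseract
from PIL import Image
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

train_texts = ["you are worthless", "see you at practice tomorrow"]
train_labels = [1, 0]               # placeholder: 1 = bullying, 0 = benign
clf = Pipeline([("tfidf", TfidfVectorizer()), ("svc", LinearSVC())])
clf.fit(train_texts, train_labels)

text = pytesseract.image_to_string(Image.open("screenshot.png"))  # OCR step
print("bullying" if clf.predict([text])[0] == 1 else "benign")
```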
In this paper, a new acquisition protocol is adopted for identifying individuals from electroencephalogram signals based on eye-blinking waveforms. For this purpose, a database of 10 subjects was collected using a Neurosky Mindwave headset. The eye-blinking signal is extracted from the brain wave recordings and used for the identification task. The feature extraction stage fits the extracted eye blinks to an auto-regressive model; two algorithms are implemented for the auto-regressive modeling, namely the Levinson-Durbin and Burg algorithms. Discriminant analysis is then adopted for classification, with linear and quadratic discriminant functions tested and compared. Using the Burg algorithm with linear discriminant analysis, the proposed system identifies subjects with a best accuracy of 99.8%. The obtained results confirm that the eye-blinking waveform carries discriminant information and is therefore an appropriate basis for person identification methods.
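A hedged sketch of the feature/classification chain above: Burg auto-regressive coefficients extracted from an eye-blink segment, classified with linear discriminant analysis. The signals and subject labels are synthetic placeholders, not the Mindwave recordings, and the AR order is an assumption.

```python
# Burg AR features + LDA identification sketch; signals are synthetic.
import numpy as np
from statsmodels.regression.linear_model import burg
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def ar_features(segment, order=8):
    coeffs, _sigma2 = burg(segment, order=order)   # Burg AR model fit
    return coeffs

rng = np.random.default_rng(0)
X = np.vstack([ar_features(rng.standard_normal(256)) for _ in range(40)])
y = np.repeat(np.arange(10), 4)     # placeholder: 10 subjects, 4 blinks each

lda = LinearDiscriminantAnalysis().fit(X, y)
print("training accuracy:", lda.score(X, y))
```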
Cloud computing refers to a sophisticated technology that manipulates data on internet-based servers dynamically and efficiently. Cloud computing utilization has increased rapidly because of its scalability, accessibility, and incredible flexibility. Dynamic usage and process-sharing facilities require task scheduling, a prominent issue that plays a significant role in building an optimal cloud computing environment. Round robin is generally an efficient task scheduling algorithm with a powerful impact on the performance of the cloud computing environment. This paper introduces a new round robin-based task scheduling approach suitable for the cloud. The proposed algorithm determines the time quantum dynamically, based on the differences among the three maximum burst times of the tasks in the ready queue in each round. A distinctive aspect of the proposed method is combining those differences and the burst times of the processes additively when determining the time quantum. The experimental results show that the proposed approach enhances round robin task scheduling by reducing average turnaround time, diminishing average waiting time, and minimizing the number of context switches. Moreover, a comparative study showed that the proposed approach outperforms several similar existing round robin approaches. Based on the experiments and the comparative study, it can be concluded that the proposed dynamic round robin scheduling algorithm is comparatively better, acceptable, and well suited for the cloud environment.
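A heavily hedged sketch of a dynamic-quantum round robin in the spirit of the description above: each round, the quantum is derived from the three largest remaining burst times in the ready queue. The exact additive formula belongs to the paper; the combination below is an illustrative stand-in.

```python
# Dynamic-quantum round robin sketch; the quantum formula is an assumption.
from collections import deque

def dynamic_round_robin(bursts):
    queue = deque(enumerate(bursts))
    remaining = list(bursts)
    time, turnaround = 0, {}
    while queue:
        # Three maximum remaining burst times in the ready queue this round.
        top3 = sorted((remaining[p] for p, _ in queue), reverse=True)[:3]
        diffs = [top3[i] - top3[i + 1] for i in range(len(top3) - 1)]
        quantum = max(1, sum(diffs) + min(top3))   # assumed additive combination
        for _ in range(len(queue)):                # one full round at this quantum
            pid, _ = queue.popleft()
            run = min(quantum, remaining[pid])
            time += run
            remaining[pid] -= run
            if remaining[pid] > 0:
                queue.append((pid, remaining[pid]))
            else:
                turnaround[pid] = time
    return turnaround

print(dynamic_round_robin([24, 3, 17, 9]))  # pid -> completion time
```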
Alzheimer's disease is an ailment of the mind that results in mental confusion, forgetfulness, and many other cognitive problems, and it affects a person's physical health too. When treating a patient with Alzheimer's disease, a proper diagnosis is crucial, especially in the earlier phases of the condition: when patients are informed of their risk, they can take preventative steps before irreparable brain damage occurs. The majority of machine detection techniques are constrained by congenital (present at birth) data, though numerous recent studies have used computers for Alzheimer's disease diagnosis. The first stages of Alzheimer's disease can be diagnosed, but the illness itself cannot be predicted, since prediction is only helpful before the disease actually manifests. Alzheimer's has high-risk symptoms that affect both the physical and mental health of a patient, including confusion and concentration difficulties, so it is important to detect the disease at its early stages; early detection gives the patient a better chance of treatment and medication, and our research aims to support exactly that. Deep learning, particularly when applied to brain MRI scans, has emerged as a popular tool for the early identification of AD. Here we use a 12-layer CNN comprising four convolutional layers, two pooling layers, two flatten layers, one dense layer, and three activation functions. As CNNs are well known for pattern detection and image processing, our model achieves an accuracy of 97.80%.
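A hedged sketch approximating the CNN layout described above (four convolutional and two pooling layers, followed by flatten and dense stages); the filter counts, kernel sizes, input shape, and number of output classes are assumptions, not the paper's exact configuration.

```python
# Approximate CNN sketch for MRI-based AD classification; sizes are assumed.
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(128, 128, 1)),       # assumed grayscale MRI slices
    layers.Conv2D(32, 3, activation="relu"),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(4, activation="softmax"),   # assumed 4 AD-stage classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```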
Timing-critical path analysis is one of the most significant tasks for the VLSI designer. For the formal verification of any kind of digital chip, static timing analysis (STA) plays a vital role in checking the potential and viability of the design procedures. It indicates the timing status between the setup and hold times required with respect to the active edge of the clock. STA can also be used to identify time-sensitive paths, simulate path delays, and assess register-transfer-level (RTL) dependability. In this paper, four types of Static Random Access Memory (SRAM) controllers are used to handle the complexities of digital circuit timing analysis at the logic level. Different STA parameters, such as slack, clock skew, data latency, and multiple clock frequencies, are investigated in node-to-node path analysis for the diverse SRAM controllers. Using a phase-locked loop (ALTPLL), single-clock and dual-clock configurations are used to obtain the response of these controllers. For the four SRAM controllers, the timing analysis shows that no data violation exists for single and dual clocks at 50 MHz and 100 MHz. The results also show that the slack at 100 MHz is greater than at 50 MHz. Moreover, the clock skew in our proposed design is lower than in the other three controllers because the numbers of paths and states are reduced, and the slack value is higher than in the first and second controllers. In timing path analysis, the slack time determines whether the design works at the desired frequency. Although 100 MHz is faster than 50 MHz, our proposed SRAM controller meets the timing requirements at 100 MHz, including a reduction in node-to-node data delay. For these reasons, the proposed controller outperforms the others in terms of slack and clock skew.
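A tiny illustration of the slack check at the heart of STA as discussed above: setup slack is the data required time minus the data arrival time, and a negative value is a timing violation. The delay numbers are illustrative, not the paper's measurements.

```python
# Setup-slack illustration; all delay values are placeholders.
def setup_slack(clock_period_ns, clock_skew_ns, data_path_delay_ns, setup_time_ns):
    required = clock_period_ns + clock_skew_ns - setup_time_ns
    arrival = data_path_delay_ns
    return required - arrival           # positive slack: the path meets timing

for freq_mhz in (50, 100):
    period = 1000.0 / freq_mhz          # clock period in ns
    slack = setup_slack(period, clock_skew_ns=0.1,
                        data_path_delay_ns=7.5, setup_time_ns=0.2)
    print(f"{freq_mhz} MHz: slack = {slack:.2f} ns")
```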