ISSN: 2074-904X (Print)
ISSN: 2074-9058 (Online)
DOI: https://doi.org/10.5815/ijisa
Website: https://www.mecs-press.org/ijisa
Published By: MECS Press
Frequency: 6 issues per year
Number(s) Available: 139
IJISA is committed to bridging the theory and practice of intelligent systems. From innovative ideas to specific algorithms and full system implementations, IJISA publishes original, peer-reviewed, high-quality articles in the areas of intelligent systems. IJISA is a well-indexed scholarly journal and is indispensable reading and reference for people working at the cutting edge of intelligent systems and applications.
IJISA has been abstracted or indexed by several world-class databases: Scopus, Google Scholar, Microsoft Academic Search, CrossRef, Baidu Wenku, IndexCopernicus, IET Inspec, EBSCO, JournalSeek, ULRICH's Periodicals Directory, WorldCat, Scirus, Academic Journals Database, Stanford University Libraries, Cornell University Library, UniSA Library, CNKI Scholar, ProQuest, J-Gate, ZDB, BASE, OhioLINK, iThenticate, Open Access Articles, Open Science Directory, National Science Library of Chinese Academy of Sciences, The HKU Scholars Hub, among others.
IJISA Vol. 17, No. 5, Oct. 2025
REGULAR PAPERS
Kidney stones are solid mineral and salt deposits formed within the kidneys, causing excruciating discomfort and pain when they obstruct the urinary tract. The presence of speckle noise in CT-scan images, coupled with the limitations of manual interpretation, makes kidney stone detection challenging and highlights the need for precise and efficient diagnosis. This research investigates the efficacy of YOLOv8 models for kidney stone detection, aiming to strike a balance between computational efficiency and detection accuracy. The study's novel evaluation framework and practical deployment considerations underscore its contributions to advancing kidney stone detection technologies. It evaluates five YOLOv8 variants (nano, small, medium, large, and extra-large) using standard metrics such as precision, recall, F1-score, and mAP@50, alongside computational resources like training time, power consumption, and memory usage. The comprehensive evaluation reveals that while YOLOv8s and YOLOv8x demonstrate superior performance on traditional metrics, YOLOv8s emerges as the optimal model, offering a harmonious balance with its high precision (0.917), highest mAP@50 (0.918), moderate power consumption (150W), and efficient memory usage. Graphical analyses further elucidate the behaviour of each model across different confidence thresholds, confirming the robustness of YOLOv8s. Additionally, this research explores the impact of model size and complexity on inference speed, demonstrating that smaller YOLOv8 variants achieve real-time performance with minimal latency. The study also introduces a method for model scalability, allowing accuracy and computational demand to be adjusted to specific clinical or resource constraints. These contributions further emphasize the importance of holistic model assessment for real-world medical applications.
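The detection metrics this abstract reports can be sketched in a few lines of stdlib Python; the detection counts below are hypothetical illustrations, not the study's data:

```python
def detection_metrics(tp, fp, fn):
    """Return (precision, recall, f1) from true-positive, false-positive
    and false-negative detection counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Hypothetical counts: 89 stones correctly detected, 8 false alarms, 11 missed.
p, r, f = detection_metrics(89, 8, 11)
```

mAP@50 extends this by averaging precision over recall levels at an IoU threshold of 0.5; the per-class precision/recall pairs above are its building blocks.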
In this article, novel mixtures of conditional volatility models of Generalized Autoregressive Conditional Heteroscedasticity (GARCH); Exponential GARCH (EGARCH); Glosten, Jagannathan, and Runkle GARCH (GJR-GARCH); and Threshold GARCH (TGARCH) were thoroughly expounded in a Bayesian paradigm. The Expectation-Maximization (EM) algorithm was employed as the parameter estimation technique to work out the posterior distributions of the involved hyper-parameters after setting up their corresponding prior distributions. The mode was considered as the stable location parameter instead of the mean because it can robustly adapt, simultaneously, to the symmetry, skewness, heteroscedasticity and multimodality effects needed to redefine the switching conditional variance processes, conceived as mixture components based on the shifting number of modes in the marginal density of the Skewed Generalized Error Distribution (SGED) set as the prior random noise.
The models were applied to the ten (10) most used cryptocurrency coins and tokens via their daily open, high, low, close and volume figures, converted and transacted in USD from their dates of inception. Binance Coin (BNB), via its daily low units transacted in USD (that is, low-BNB), yielded the smallest Deviance Information Criterion (DIC) of 3651.1935. The low-BNB process followed a two-regime TGARCH process, that is, a mixture TGARCH (2: 2, 2), with stable probabilities of 33% and 66% respectively. The first regime had a low unconditional volatility of 16.96664, while the second regime had a high unconditional volatility of 585.6190. In summary, Binance Coin (BNB) was a mixture of tranquil and stormy market conditions. This implies that the first regime of low-BNB was characterized by a strong fluctuating reaction to past negative daily returns of low-BNB converted to USD, while the second regime was characterized by a weak fluctuating reaction. Additionally, the first regime exhibited a low-persistence volatility process, while the second regime exhibited a highly persistent fluctuating process. For financial and economic decision-making, cryptocurrency users and financial bodies should look out for financial and economic sabotage agents, such as war, exchange rate instability, political crises, inflation, network fluctuation, etc., that arose, declined or fluctuated during the ten (10) years of study of the coins and tokens, to ascertain which of these agents contributed to the volatility process.
Mixture models from a Bayesian perspective were of interest because some of the classical (traditional) models can neither accommodate regime-switching traits nor incorporate prior information known about cryptocurrency coins and tokens. For model performance, DIC values were compared from best- to worst-performing via lower to higher DIC values.
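A stdlib-only sketch of one TGARCH(1,1) regime of the kind described above, under a common threshold-GARCH specification (the leverage term reacts only to negative past shocks); the parameter values are hypothetical, not the fitted BNB estimates:

```python
import random

def simulate_tgarch(omega, alpha, gamma, beta, n, seed=1):
    """Simulate returns from one TGARCH(1,1) regime:
    sigma2_t = omega + (alpha + gamma*1[eps<0]) * eps_{t-1}^2 + beta * sigma2_{t-1}."""
    rng = random.Random(seed)
    sigma2 = omega / (1.0 - alpha - gamma / 2.0 - beta)  # start at unconditional variance
    eps, out = 0.0, []
    for _ in range(n):
        sigma2 = omega + (alpha + (gamma if eps < 0 else 0.0)) * eps ** 2 + beta * sigma2
        eps = rng.gauss(0.0, 1.0) * sigma2 ** 0.5
        out.append(eps)
    return out

# Under a symmetric shock density, half the leverage term loads on average:
omega, alpha, gamma, beta = 0.1, 0.05, 0.1, 0.8
uncond = omega / (1.0 - alpha - gamma / 2.0 - beta)
```

The two fitted regimes in the abstract differ precisely in this unconditional level (16.97 versus 585.62) and in their persistence (alpha + gamma/2 + beta).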
Predicting attitudes towards people with tuberculosis is a solution for preserving public health and a means of strengthening social ties to improve resilience to health threats. The assessment of attitudes towards the sick in general is essential to understand the educational level of a given population and to measure its resilience in contributing to the health of all within the framework of community life. The case of tuberculosis is chosen in this study to highlight the need for a change in attitudes, particularly due to the preponderance of this disease in Africa. While it is clear that attitudes influence the organization of individuals and community life, it remains a challenge to put in place an effective mechanism for evaluating the metrics that contribute to determining the attitude towards people with tuberculosis. Knowledge of attitudes towards any disease is very important to understanding collective values on this disease, hence the need to predict attitudes in the case of tuberculosis in favor of health education for all social strata while targeting areas of training not yet explored or requiring capacity building among populations. Changing attitudes towards tuberculosis patients will contribute to preserving public health and will help reduce stigma, improve understanding of the disease and encourage supportive and preventive behaviors. Achieving these changes involves dismantling stereotypes, improving access to care, mobilizing the media and social networks, including people with TB in society and strengthening the commitment of public authorities. The approach adopted consists of assessing the state of attitude towards tuberculosis patients at a given time and in a specific space based on the characteristics of the different social strata living there. 
An analysis of several metrics provided by machine learning algorithms makes it possible to identify differences in attitudes and serve as a decision-making aid on the strategies to be implemented. This work also relies on the investigation and analysis of historical trends using machine learning algorithms to understand population attitudes towards tuberculosis patients.
Heart attacks continue to be one of the primary causes of death globally, highlighting the critical need for advanced predictive models to improve early diagnosis and timely intervention. This study presents a comprehensive machine learning (ML) approach to heart attack prediction, integrating multiple datasets from diverse sources to construct a robust and accurate predictive model. The research employs a stacking ensemble model, which combines the strengths of individual ML algorithms to improve overall performance. Extensive data preprocessing steps were carefully undertaken to preserve the dataset's integrity and maintain its quality. The results demonstrate a superior accuracy of 97.48%, significantly outperforming state-of-the-art approaches. This high level of accuracy indicates the model's potential effectiveness in clinical settings for the early detection and prevention of heart attacks. However, the proposed model is influenced by the quality and diversity of the integrated datasets, which could affect its generalizability across broader populations. Challenges encountered during the model's development include optimizing hyperparameters for multiple classifiers, ensuring data preprocessing consistency, and balancing computational efficiency with model interpretability. The results underscore the pivotal contribution of advanced ML approaches in revolutionizing heart attack management. By addressing the complexities and variabilities inherent in heart attack prediction, the work provides a pathway towards more effective and personalized cardiovascular disease management strategies, demonstrating the transformative potential of ML in healthcare.
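The stacking idea can be illustrated with a deliberately tiny stdlib-only sketch (not the paper's implementation): base learners emit predictions, and a meta-learner is trained on those predictions rather than on the raw features. Here the base learners are single-feature threshold rules and the meta-learner is a perceptron, both toy stand-ins for the stronger classifiers a real stack would use:

```python
def threshold_learner(X, y, feat):
    """Base learner: best single-feature threshold rule (predict 1 if x[feat] >= t)."""
    best_acc, best_t = -1, None
    for t in sorted({x[feat] for x in X}):
        acc = sum(int(x[feat] >= t) == label for x, label in zip(X, y))
        if acc > best_acc:
            best_acc, best_t = acc, t
    return lambda x, t=best_t: int(x[feat] >= t)

def train_stacking(X, y, feats, epochs=100, lr=0.1):
    bases = [threshold_learner(X, y, f) for f in feats]
    Z = [[b(x) for b in bases] for x in X]          # meta-features: base predictions
    w, bias = [0.0] * len(bases), 0.0
    for _ in range(epochs):                          # perceptron meta-learner
        for z, label in zip(Z, y):
            pred = int(sum(wi * zi for wi, zi in zip(w, z)) + bias > 0)
            err = label - pred
            w = [wi + lr * err * zi for wi, zi in zip(w, z)]
            bias += lr * err
    return lambda x: int(sum(wi * b(x) for wi, b in zip(w, bases)) + bias > 0)

# Toy, made-up "patient feature" rows with binary labels.
X = [[0.1, 0.9], [0.2, 0.8], [0.9, 0.1], [0.8, 0.3]]
y = [0, 0, 1, 1]
clf = train_stacking(X, y, feats=[0, 1])
```

The meta-learner learns which base learners to trust, which is the mechanism behind the ensemble's performance gain; production stacks would also use out-of-fold base predictions to avoid leakage.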
Arrhythmias are irregularities in heartbeats, and hence accurate classification of arrhythmias is of great importance for directing patients to the right cardiac care. This paper presents a five-class arrhythmia classification framework that applies an Encoded Transformer (ET) based Convolutional Neural Network and Long Short-Term Memory (CNN-ET-LSTM) hybrid model to ECG signals. The dataset used in this research is the widely used MIT-BIH arrhythmia database, which covers five distinct types of arrhythmia: non-ectopic beats (N), ventricular ectopic beats (V), supraventricular ectopic beats (S), fusion beats (F), and unknown beats (Q). The class imbalance problem is dealt with by utilizing the Synthetic Minority Oversampling Technique (SMOTE), which improves performance, especially on the minority classes. In the proposed CNN-ET-LSTM model, the CNN is used as a feature extractor, and the long-range dependencies in the ECG waveform are captured by the encoded transformer module. The LSTM layers process features sequentially and feed them to the fully connected layers for classification. Experimental results showed that the proposed system achieved an accuracy of 97.52%, precision of 97.80%, recall of 97.52% and F1-score of 97.62% on raw blind test data. The performance of our model is also compared to other existing methods that use the same dataset and is found useful for clinical applications.
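SMOTE's core mechanism, used above to rebalance the minority beat classes, can be sketched with stdlib Python (a simplified illustration, not the paper's pipeline): each synthetic sample is an interpolation between a minority sample and one of its nearest minority-class neighbours.

```python
import random

def smote(minority, n_new, k=3, seed=0):
    """Generate n_new synthetic minority samples by interpolating each chosen
    sample towards one of its k nearest minority-class neighbours."""
    rng = random.Random(seed)

    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5

    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        neighbours = sorted((m for m in minority if m is not x),
                            key=lambda m: dist(x, m))[:k]
        nb = rng.choice(neighbours)
        lam = rng.random()  # interpolation factor in [0, 1)
        synthetic.append([xi + lam * (ni - xi) for xi, ni in zip(x, nb)])
    return synthetic

# Toy 2-D minority class (made-up points, not ECG features).
pts = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
new = smote(pts, 5)
```

Because every synthetic point lies on a segment between two real minority samples, the oversampled class stays inside its own region of feature space instead of simply duplicating records.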
Early detection of diseases and pests is vital for the proper cultivation of in-demand, high-quality crops such as cacao. With the emergence of recent trends in technology, artificial intelligence has made disease diagnosis and classification more convenient and non-invasive through the application of image processing, machine learning, and neural networks. This study presents an alternative approach to developing a cacao pod disease classifier using hybrid machine learning models, integrating the capabilities of convolutional neural networks and support vector machines. Convolutional neural networks were employed to extract complex and high-level features from images, breaking the restrictions of conventional image processing techniques in capturing intricate patterns and details. Support vector machines, on the other hand, excel at differentiating between classes by effectively utilizing distinctive numerical parameters representing datasets with clear interclass differences. Raw images of cacao pods were utilized as inputs for extracting relevant parameters to distinguish three distinct classes: healthy, black pod rot diseased, and pod borer infested. For visual feature extraction, four convolutional neural network architectures were considered: AlexNet, ResNet50, DenseNet201, and MobileNetV2. The outputs of the fully connected layers of the neural networks were used as references for training the support vector machine classifier, considering linear, quadratic, cubic, and Gaussian kernel functions. Among all hybrid pairs, the DenseNet201 – Cubic Kernel Support Vector Machine attained the highest testing accuracy of 98.4%. The model even outperformed two pre-existing systems focused on the same application, which achieved accuracies of 91.79% and 94%, respectively. Thus, it offers an improved, non-invasive method for detecting black pod rot disease or pod borer infestation in cacao pods.
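The hybrid's classification stage can be sketched in stdlib Python. This is an illustration under assumptions: a degree-3 polynomial form (x·z + 1)^3 stands in for the cubic kernel, and the 2-D vectors, support vectors and multipliers are made up rather than taken from the trained DenseNet201 model:

```python
def cubic_kernel(x, z):
    """Degree-3 polynomial kernel k(x, z) = (x . z + 1)^3 (assumed form)."""
    return (sum(xi * zi for xi, zi in zip(x, z)) + 1.0) ** 3

def svm_decision(x, support_vecs, alphas, labels, bias=0.0):
    """Sign of the kernel expansion sum_i alpha_i * y_i * k(sv_i, x) + b."""
    score = sum(a * yl * cubic_kernel(sv, x)
                for sv, a, yl in zip(support_vecs, alphas, labels)) + bias
    return 1 if score >= 0 else -1

# Hypothetical 2-D "CNN features" standing in for fully-connected-layer outputs.
svs, alphas, labels = [[1.0, 0.0], [0.0, 1.0]], [1.0, 1.0], [1, -1]
```

The CNN supplies the feature vector x; the SVM only ever sees those vectors through the kernel, which is why the two models compose cleanly.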
This study presents a deep learning-based approach to automated resume and job matching that uses semantic similarity between texts. The solution is based on SimCSE RoBERTa transformer embeddings and a Siamese neural architecture trained using the MSELoss loss function. Unlike traditional keyword- or attribute-based filtering systems, the proposed model learns to place semantically compatible resume-vacancy pairs in a common vector space, capturing the deep semantic alignment between resumes and job descriptions. To evaluate the effectiveness of this architecture, we conducted extensive experiments on a labelled dataset of over 7,000 resume–vacancy pairs obtained from the HuggingFace repository. The dataset includes three classes (Good Fit, Potential Fit, No Fit), which we restructured into a binary classification task. Annotation labels reflect textual compatibility based on skills, responsibilities, and experience, ensuring task relevance.
The restructuring resulted in a moderately imbalanced dataset with approximately 66% positive and 34% negative examples. Labels were assigned based on semantic compatibility, including skill match, job responsibilities, and experience alignment. Our model achieved accuracy = 72%, precision = 70%, recall = 74%, F1-score = 72%, and Precision@10 = 75%. To justify the use of a complex Siamese architecture, the system was compared to two baselines: (1) a classical TF-IDF + cosine similarity method, and (2) a pretrained Sentence-BERT model without task-specific fine-tuning. The proposed model significantly outperformed both baselines across all evaluation metrics, confirming that its complexity translates into meaningful performance gains and validating the architecture for candidate ranking and selection. A basic self-learning mechanism is implemented and functional: recruiters can provide binary feedback (Fit / No Fit) for each recommended candidate, which is stored in a feedback table and can be used to retrain or fine-tune the model periodically, enabling adaptive behaviour over time. While initial retraining experiments were conducted offline, full automation and continuous integration of feedback into training pipelines remain a goal for future development. The system offers sub-5-second response times, integration with vector databases, and a web-based user interface. It is designed for use in HR departments, recruiting agencies, and employment platforms, with potential for broader commercial deployment and domain adaptation.
While UI and vector retrieval infrastructure were developed to support prototyping and deployment, the primary research innovation centres on the modelling framework, learning setup, and comparative evaluation methodology. This work contributes to the advancement of semantically-aware intelligent recruiting systems and offers a replicable baseline for future studies in neural recommendation for HR applications. The risks of algorithmic bias are emphasised separately: even in the absence of obvious demographic characteristics in the input data, the model can implicitly reproduce social or historical inequalities inherent in the data. In this regard, the study outlines areas for further development, in particular equity auditing, bias reduction techniques, and the integration of human validation in decision-making.
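The classical TF-IDF + cosine similarity baseline the study compares against can be sketched entirely with the stdlib (the toy documents below are illustrative, not drawn from the dataset):

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Sparse TF-IDF vectors (term -> weight dicts) for whitespace-tokenized docs."""
    df = Counter(t for d in docs for t in set(d.split()))  # document frequency
    n = len(docs)
    return [{t: c * math.log(n / df[t]) for t, c in Counter(d.split()).items()}
            for d in docs]

def cosine(u, v):
    """Cosine similarity between two sparse vectors."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Toy resume and two toy vacancies.
docs = ["python machine learning",
        "python machine learning engineer",
        "chef pastry kitchen"]
resume, vac_it, vac_cook = tfidf_vectors(docs)
```

The baseline's weakness is visible here: it only rewards exact token overlap, whereas the Siamese embedding model can score paraphrased skills as similar.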
Cyberbullying is an intentional act of harassment, carried out online via information technology, across the complex domain of social media. This research experimented with an unsupervised associative approach to text mining to automatically find cyberbullying words and patterns and to extract association rules from a collection of tweets based on the domain's frequent words. Furthermore, this research identifies the relationships between cyberbullying keywords and other cyberbullying words, thus generating knowledge discovery of different cyberbullying word patterns from unstructured tweets. The study revealed that the dominant frequent cyberbullying words are intelligence, personality, and insulting words that describe the behavior and appearance of female victims, and sex-related words that humiliate female victims. The results of the study suggest that an unsupervised associative approach in text mining can be utilized to extract important information from unstructured text. Further, applying association rules can be helpful in recognizing the relationships and meaning between keywords and other words, thereby generating knowledge discovery of different datasets from unstructured text.
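The associative approach can be sketched with a minimal stdlib rule miner over word pairs (the sample "tweets" below are sanitized placeholders, not the study's corpus): support counts co-occurrence, and confidence measures how often the consequent appears given the antecedent.

```python
from itertools import combinations
from collections import Counter

def association_rules(tweets, min_support=2, min_conf=0.5):
    """Mine word-pair rules (lhs -> rhs) with support and confidence."""
    word_count, pair_count = Counter(), Counter()
    for t in tweets:
        words = set(t.split())
        word_count.update(words)
        pair_count.update(combinations(sorted(words), 2))
    rules = []
    for (a, b), sup in pair_count.items():
        if sup < min_support:
            continue
        for lhs, rhs in ((a, b), (b, a)):
            conf = sup / word_count[lhs]       # P(rhs | lhs)
            if conf >= min_conf:
                rules.append((lhs, rhs, sup, conf))
    return rules

tweets = ["ugly stupid", "ugly stupid", "ugly nice"]
rules = association_rules(tweets)
```

Real Apriori-style miners extend this to itemsets of any size, but the support/confidence mechanics are exactly these.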
The Internet of Things (IoT) has extended internet connectivity to reach not just computers and humans, but most of the things in our environment. The IoT has the potential to connect billions of objects simultaneously, which can improve information sharing and, in turn, our lives. Although the benefits of the IoT are unlimited, there are many challenges to adopting it in the real world due to its centralized server/client model. For instance, scalability and security issues arise due to the excessive number of IoT objects in the network. The server/client model requires all devices to be connected and authenticated through the server, which creates a single point of failure. Therefore, moving the IoT system onto a decentralized path may be the right decision. One of the popular decentralization systems is blockchain. Blockchain is a powerful technology that decentralizes computation and management processes and can solve many IoT issues, especially security. This paper provides an overview of the integration of blockchain with the IoT, highlighting the benefits and challenges of the integration. Future research directions for blockchain with the IoT are also discussed. We conclude that the combination of blockchain and the IoT can provide a powerful approach that can significantly pave the way for new business models and distributed applications.
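The tamper-evidence property that makes blockchain attractive for IoT records can be shown with a toy stdlib sketch (not a production design, and omitting consensus entirely): each block commits to its predecessor's hash, so altering any stored sensor reading invalidates every later block.

```python
import hashlib, json

def make_block(index, data, prev_hash):
    """Build a block whose hash covers its index, payload and predecessor hash."""
    block = {"index": index, "data": data, "prev": prev_hash}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

def valid_chain(chain):
    """Re-hash every block and check each link to its predecessor."""
    for i, b in enumerate(chain):
        body = {k: b[k] for k in ("index", "data", "prev")}
        if b["hash"] != hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest():
            return False
        if i and b["prev"] != chain[i - 1]["hash"]:
            return False
    return True

chain = [make_block(0, "genesis", "0")]
chain.append(make_block(1, {"sensor": "temp", "value": 21.5}, chain[0]["hash"]))
```

A real deployment adds distributed consensus and signatures on top; the hash chaining above is only the integrity layer.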
Stock market prediction has become an attractive research topic due to its important role in the economy and the benefits it offers. There is an imminent need to uncover future stock market behavior in order to avoid investment risks. The large amount of data generated by the stock market is considered a treasure of knowledge for investors. This study aims at constructing an effective model to predict future stock market trends with a small error ratio and improved prediction accuracy. The prediction model is based on sentiment analysis of financial news and historical stock market prices. The model provides better accuracy results than all previous studies by considering multiple types of news, related to both the market and the company, together with historical stock prices. A dataset containing stock prices from three companies is used. The first step is to analyze news sentiment to get the text polarity using the naïve Bayes algorithm. This step achieved prediction accuracy results ranging from 72.73% to 86.21%. The second step combines news polarities and historical stock prices together to predict future stock prices. This improved the prediction accuracy up to 89.80%.
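The first step, naïve Bayes polarity classification, can be sketched with the stdlib (the toy headlines below are invented for illustration, not the study's news data):

```python
import math
from collections import Counter, defaultdict

def train_nb(docs):
    """Train a multinomial naive Bayes model from (text, label) pairs."""
    word_counts, class_counts, vocab = defaultdict(Counter), Counter(), set()
    for text, label in docs:
        words = text.split()
        word_counts[label].update(words)
        class_counts[label] += 1
        vocab.update(words)
    return word_counts, class_counts, vocab

def predict_nb(model, text):
    """Return the label with the highest smoothed log-posterior."""
    word_counts, class_counts, vocab = model
    total = sum(class_counts.values())
    best, best_lp = None, float("-inf")
    for label in class_counts:
        lp = math.log(class_counts[label] / total)
        denom = sum(word_counts[label].values()) + len(vocab)
        for w in text.split():
            lp += math.log((word_counts[label][w] + 1) / denom)  # Laplace smoothing
        if lp > best_lp:
            best, best_lp = label, lp
    return best

docs = [("profits rise strongly", "pos"), ("shares surge higher", "pos"),
        ("losses deepen sharply", "neg"), ("stock plunges lower", "neg")]
model = train_nb(docs)
```

The predicted polarity then becomes one input feature alongside historical prices in the second step.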
Artificial neural networks have been used in different fields of artificial intelligence, and more specifically in machine learning. Although other machine learning options are feasible in most situations, the ease with which neural networks lend themselves to different problems, including pattern recognition, image compression, classification, computer vision, regression, etc., has earned them a remarkable place in the machine learning field. This research exploits neural networks as a data mining tool for predicting the number of times a student repeats a course, considering some attributes relating to the course itself, the teacher, and the particular student. Neural networks were used in this work to map the relationship between attributes related to students' course assessment and the number of times a student will possibly repeat a course before passing. It is hoped that the ability to predict students' performance from such complex relationships can help facilitate the fine-tuning of academic systems and policies implemented in learning environments. To validate the power of neural networks in data mining, a Turkish students' performance database has been used; feedforward and radial basis function networks were trained for this task. The performances obtained from these networks were evaluated with respect to the achieved recognition rates and training time.
The proliferation of Web-enabled devices, including desktops, laptops, tablets, and mobile phones, enables people to communicate, participate and collaborate with each other in various Web communities, viz., forums, social networks, and blogs. Simultaneously, the enormous amount of heterogeneous data generated by the users of these communities offers an unprecedented opportunity to create and employ theories and technologies that search and retrieve relevant data from the huge quantity of information available and subsequently mine it for opinions. Consequently, Sentiment Analysis, which automatically extracts and analyses the subjectivities and sentiments (or polarities) in written text, has emerged as an active area of research. This paper previews and reviews the substantial research on the subject of sentiment analysis, expounding its basic terminology, tasks and granularity levels. It further gives an overview of the state of the art, depicting some previous attempts to study sentiment analysis. Its practical and potential applications are also discussed, followed by the issues and challenges that will keep the field dynamic and lively for years to come.
Non-functional requirements define the quality attributes of a software application, which are necessary to identify in the early stages of the software development life cycle. Researchers have proposed automatic software Non-functional requirement classification using several Machine Learning (ML) algorithms combined with various vectorization techniques. However, which combination performs best for Non-functional requirement classification remains unclear. In this paper, we examined whether different combinations of feature extraction techniques and ML algorithms varied in Non-functional requirements classification performance, and we report the best approach for classifying Non-functional requirements. We conducted the comparative analysis on the publicly available PROMISE_exp dataset containing labelled functional and Non-functional requirements. Initially, we normalized the textual requirements from the dataset; then we extracted features through Bag of Words (BoW), Term Frequency and Inverse Document Frequency (TF-IDF), Hashing and Chi-Squared vectorization methods. Finally, we executed the 15 most popular ML algorithms to classify the requirements. The novelty of this work is the empirical analysis to find the best combination of ML classifier and vectorization technique, which helps developers detect Non-functional requirements early and take precise steps. We found that the linear support vector classifier and TF-IDF combination outperforms all other combinations with an F1-score of 81.5%.
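One ingredient of the vectorization comparison, chi-squared term scoring, can be sketched from first principles (the counts below are toy numbers): the statistic is computed from a 2x2 contingency table of how often a term co-occurs with a requirement class.

```python
def chi2_term(n11, n10, n01, n00):
    """Chi-squared score of a term against a class from document counts:
    n11 = docs with term and class, n10 = term without class,
    n01 = class without term,       n00 = neither."""
    n = n11 + n10 + n01 + n00
    num = n * (n11 * n00 - n10 * n01) ** 2
    den = (n11 + n10) * (n01 + n00) * (n11 + n01) * (n10 + n00)
    return num / den if den else 0.0
```

A term occurring only in one class scores high (it is informative for that class), while a term distributed independently of the class scores zero, which is why chi-squared works as a feature-selection signal alongside BoW and TF-IDF weighting.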
Addressing scheduling problems with the best graph coloring algorithm has always been very challenging. However, the university timetable scheduling problem can be formulated as a graph coloring problem in which courses are represented as vertices and the presence of common students or teachers between the corresponding courses is represented as edges. The problem then reduces to coloring the vertices with the lowest possible number of colors. To accomplish this task, the paper presents a comparative study of the use of graph coloring in university timetable scheduling, where five graph coloring algorithms were used: First Fit, Welsh Powell, Largest Degree Ordering, Incidence Degree Ordering, and DSATUR. We have taken the Military Institute of Science and Technology, Bangladesh as a test case. The results show that the Welsh-Powell algorithm and the DSATUR algorithm are the most effective in generating optimal schedules. The study also provides insights into the limitations and advantages of using graph coloring in timetable scheduling and suggests directions for future research with these algorithms.
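A greedy sketch of the Welsh-Powell heuristic named above (toy course graph, not the institute's data): order vertices by decreasing degree, then give each the lowest colour unused by its neighbours; colours correspond to timeslots.

```python
def welsh_powell(adj):
    """Greedy colouring in decreasing-degree order; returns vertex -> colour."""
    order = sorted(adj, key=lambda v: len(adj[v]), reverse=True)
    colour = {}
    for v in order:
        used = {colour[u] for u in adj[v] if u in colour}
        colour[v] = next(c for c in range(len(adj)) if c not in used)
    return colour

# Courses A-D; an edge means a shared student or teacher.
adj = {"A": {"B", "C", "D"}, "B": {"A", "C"}, "C": {"A", "B"}, "D": {"A"}}
colours = welsh_powell(adj)
```

Since A, B and C form a triangle, three timeslots are unavoidable here, and the heuristic attains that minimum; in general it gives an upper bound rather than the chromatic number.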
Along with the growth of the Internet, social media usage has drastically expanded. As people share their opinions and ideas more frequently on the Internet and through various social media platforms, there has been a notable rise in the number of consumer phrases that contain sentiment data. According to reports, cyberbullying frequently leads to severe emotional and physical suffering, especially in women and young children. In certain instances, it has even been reported that sufferers attempt suicide. The bully may occasionally attempt to destroy any evidence they believe could be used against them. Even if the victim obtains the evidence, it may still take a long time before they get justice. This work used OCR, NLP, and machine learning to design and implement a practical method for recognizing cyberbullying from images. Eight classifier techniques are compared for accuracy using two key feature representations, the BoW model and TF-IDF. These classifiers are used to understand and recognize bullying behaviors. Testing the suggested method on the cyberbullying dataset showed that linear SVC after OCR and logistic regression perform better, achieving the best accuracy of 96 percent. This study helps provide a good outline that shapes the methods for detecting online bullying from a screenshot, with design and implementation details.
In this paper, a new acquisition protocol is adopted for identifying individuals from electroencephalogram signals based on eye blinking waveforms. For this purpose, a database of 10 subjects is collected using the Neurosky Mindwave headset. Then, the eye blinking signal is extracted from the brain wave recordings and used for the identification task. The feature extraction stage includes fitting the extracted eye blinks to an auto-regressive model. Two algorithms are implemented for auto-regressive modeling, namely the Levinson-Durbin and Burg algorithms. Then, discriminant analysis is adopted as the classification scheme. Linear and quadratic discriminant functions are tested and compared in this paper. Using the Burg algorithm with linear discriminant analysis, the proposed system can identify subjects with a best accuracy of 99.8%. The results obtained in this paper confirm that the eye blinking waveform carries discriminant information and is therefore appropriate as a basis for person identification methods.
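The Levinson-Durbin recursion used in the feature-extraction stage can be sketched with the stdlib: it solves the Yule-Walker equations, turning an autocorrelation sequence (here a toy AR(1) sequence, not eye-blink data) into AR prediction coefficients.

```python
def levinson_durbin(r, order):
    """Solve the Yule-Walker equations for AR prediction coefficients.

    r: autocorrelation sequence r[0..order].  Returns (coeffs, final error),
    with sign convention x_t + a1*x_{t-1} + ... + ap*x_{t-p} = e_t.
    """
    a = [1.0] + [0.0] * order
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + sum(a[j] * r[i - j] for j in range(1, i))
        k = -acc / err                                        # reflection coefficient
        a[1:i + 1] = [a[j] + k * a[i - j] for j in range(1, i)] + [k]
        err *= (1.0 - k * k)
    return a[1:], err

# AR(1) with coefficient 0.5 has normalized autocorrelation r[k] = 0.5**k.
coeffs, err = levinson_durbin([1.0, 0.5, 0.25], 2)
```

For an exact AR(1) autocorrelation the recursion recovers the coefficient (as -0.5 under this sign convention) and assigns the extra lag a coefficient of zero, which is exactly why the fitted coefficients work as compact identification features.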
Climate change, a significant and lasting alteration in global weather patterns, is profoundly impacting the stability and predictability of global temperature regimes. As the world continues to grapple with the far-reaching effects of climate change, accurate and timely temperature predictions have become pivotal to various sectors, including agriculture, energy, public health and many more. Crucially, precise temperature forecasting assists in developing effective climate change mitigation and adaptation strategies. With the advent of machine learning techniques, we now have powerful tools that can learn from vast climatic datasets and provide improved predictive performance. This study delves into a comparison of three such advanced machine learning models, XGBoost, Support Vector Machine (SVM), and Random Forest, in predicting daily maximum and minimum temperatures using a 45-year dataset of Visakhapatnam airport. Each model was rigorously trained and evaluated based on key performance metrics including training loss, Mean Absolute Error (MAE), Mean Squared Error (MSE), Root Mean Squared Error (RMSE), R2 score, Mean Absolute Percentage Error (MAPE), and Explained Variance Score. Although there was no clear dominance of a single model across all metrics, SVM and Random Forest showed slightly superior performance on several measures. These findings not only highlight the potential of machine learning techniques in enhancing the accuracy of temperature forecasting but also stress the importance of selecting an appropriate model and performance metrics aligned with the requirements of the task at hand. This research accomplishes a thorough comparative analysis, conducts a rigorous evaluation of the models, and highlights the significance of model selection.
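The regression metrics compared in that study can be computed with a short stdlib sketch; the observed/predicted temperatures below are toy values for illustration only:

```python
import math

def regression_metrics(y_true, y_pred):
    """MAE, MSE, RMSE and R2 for paired observed/predicted values."""
    n = len(y_true)
    errors = [t - p for t, p in zip(y_true, y_pred)]
    mae = sum(abs(e) for e in errors) / n
    mse = sum(e * e for e in errors) / n
    rmse = math.sqrt(mse)
    mean = sum(y_true) / n
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    r2 = 1.0 - sum(e * e for e in errors) / ss_tot if ss_tot else float("nan")
    return {"MAE": mae, "MSE": mse, "RMSE": rmse, "R2": r2}

# Toy daily maximum temperatures (deg C): observed vs predicted.
m = regression_metrics([30.0, 32.0, 34.0, 36.0], [31.0, 32.0, 33.0, 36.0])
```

Because MAE and RMSE weight errors differently (RMSE penalizes large misses more), a model can lead on one metric and trail on another, which matches the study's finding of no single dominant model.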
[...] Read more.Cyberbullying is an intentional action of harassment along the complex domain of social media utilizing information technology online. This research experimented unsupervised associative approach on text mining technique to automatically find cyberbullying words, patterns and extract association rules from a collection of tweets based on the domain / frequent words. Furthermore, this research identifies the relationship between cyberbullying keywords with other cyberbullying words, thus generating knowledge discovery of different cyberbullying word patterns from unstructured tweets. The study revealed that the type of dominant frequent cyberbullying words are intelligence, personality, and insulting words that describe the behavior, appearance of the female victims and sex related words that humiliate female victims. The results of the study suggest that we can utilize unsupervised associative approached in text mining to extract important information from unstructured text. Further, applying association rules can be helpful in recognizing the relationship and meaning between keywords with other words, therefore generating knowledge discovery of different datasets from unstructured text.
[...] Read more.Artificial neural networks have been used in different fields of artificial intelligence, and more specifically in machine learning. Although, other machine learning options are feasible in most situations, but the ease with which neural networks lend themselves to different problems which include pattern recognition, image compression, classification, computer vision, regression etc. has earned it a remarkable place in the machine learning field. This research exploits neural networks as a data mining tool in predicting the number of times a student repeats a course, considering some attributes relating to the course itself, the teacher, and the particular student. Neural networks were used in this work to map the relationship between some attributes related to students’ course assessment and the number of times a student will possibly repeat a course before he passes. It is the hope that the possibility to predict students’ performance from such complex relationships can help facilitate the fine-tuning of academic systems and policies implemented in learning environments. To validate the power of neural networks in data mining, Turkish students’ performance database has been used; feedforward and radial basis function networks were trained for this task. The performances obtained from these networks were evaluated in consideration of achieved recognition rates and training time.
Stock market prediction has become an attractive research topic due to its important role in the economy and its beneficial offerings. There is an imminent need to uncover the stock market's future behaviour in order to avoid investment risks. The large amount of data generated by the stock market is considered a treasure of knowledge for investors. This study aims to construct an effective model that predicts stock market trends with a small error ratio and improved prediction accuracy. The prediction model is based on sentiment analysis of financial news and historical stock market prices. It achieves better accuracy than previous studies by considering multiple types of news related to the market and the company together with historical stock prices. A dataset containing stock prices from three companies is used. The first step analyzes news sentiment to obtain the text polarity using the naïve Bayes algorithm; this step achieved prediction accuracies ranging from 72.73% to 86.21%. The second step combines news polarities and historical stock prices to predict future stock prices, which improved the prediction accuracy to 89.80%.
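The first step, obtaining text polarity with naïve Bayes, can be illustrated with a toy multinomial classifier; the headlines and labels below are invented for the sketch and are not the study's financial-news dataset:

```python
import math
from collections import Counter, defaultdict

# Toy financial-news snippets (hypothetical, not the study's data).
train = [
    ("profits surge on record sales", "pos"),
    ("strong growth lifts shares", "pos"),
    ("losses deepen amid weak demand", "neg"),
    ("shares fall on weak earnings", "neg"),
]

def train_nb(docs):
    """Count words per class for a multinomial naive Bayes model."""
    word_counts = defaultdict(Counter)
    class_counts = Counter()
    vocab = set()
    for text, label in docs:
        words = text.split()
        class_counts[label] += 1
        word_counts[label].update(words)
        vocab.update(words)
    return word_counts, class_counts, vocab

def classify(text, word_counts, class_counts, vocab):
    """Pick the class maximizing log P(class) + sum log P(word|class),
    with Laplace (add-one) smoothing."""
    total = sum(class_counts.values())
    best, best_lp = None, float("-inf")
    for label in class_counts:
        lp = math.log(class_counts[label] / total)
        denom = sum(word_counts[label].values()) + len(vocab)
        for w in text.split():
            lp += math.log((word_counts[label][w] + 1) / denom)
        if lp > best_lp:
            best, best_lp = label, lp
    return best

model = train_nb(train)
print(classify("weak sales deepen losses", *model))
```

In the study's pipeline, polarities produced this way are then joined with historical prices as inputs to the second-stage price predictor.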
Addressing scheduling problems with the best graph coloring algorithm has always been very challenging. However, the university timetable scheduling problem can be formulated as a graph coloring problem in which courses are represented as vertices and the presence of common students or teachers between the corresponding courses is represented as edges. The problem then becomes coloring the vertices with the fewest possible colors. To accomplish this task, the paper presents a comparative study of the use of graph coloring in university timetable scheduling, employing five graph coloring algorithms: First Fit, Welsh-Powell, Largest Degree Ordering, Incidence Degree Ordering, and DSATUR. We took the Military Institute of Science and Technology, Bangladesh as a test case. The results show that the Welsh-Powell and DSATUR algorithms are the most effective in generating optimal schedules. The study also provides insights into the limitations and advantages of using graph coloring in timetable scheduling and suggests directions for future research with these algorithms.
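A minimal sketch of the Welsh-Powell algorithm, one of the five compared; the course-conflict graph below is hypothetical, standing in for the institute's timetable data:

```python
def welsh_powell(graph):
    """Greedy Welsh-Powell coloring: visit vertices in descending degree
    order, give each the smallest color unused by its colored neighbors.
    graph: dict mapping vertex -> set of adjacent vertices."""
    order = sorted(graph, key=lambda v: len(graph[v]), reverse=True)
    color = {}
    for v in order:
        used = {color[u] for u in graph[v] if u in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return color

# Hypothetical conflict graph: courses as vertices, an edge when two
# courses share students or a teacher and cannot share a time slot.
courses = {
    "Math":      {"Physics", "Chemistry", "History"},
    "Physics":   {"Math", "Chemistry"},
    "Chemistry": {"Math", "Physics"},
    "History":   {"Math"},
}

slots = welsh_powell(courses)
print(slots, "slots used:", max(slots.values()) + 1)
```

Each color corresponds to one time slot, so minimizing colors minimizes the number of slots the timetable needs.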
Non-functional requirements define the quality attributes of a software application and need to be identified in the early stages of the software development life cycle. Researchers have proposed automatic classification of non-functional requirements using several Machine Learning (ML) algorithms combined with various vectorization techniques; however, which combination works best for non-functional requirement classification remains unclear. In this paper, we examined whether different combinations of feature extraction techniques and ML algorithms vary in non-functional requirement classification performance, and we report the best approach for classifying non-functional requirements. We conducted the comparative analysis on the publicly available PROMISE_exp dataset, which contains labelled functional and non-functional requirements. We first normalized the textual requirements from the dataset, then extracted features using Bag of Words (BoW), Term Frequency-Inverse Document Frequency (TF-IDF), Hashing, and Chi-Squared vectorization methods. Finally, we ran the 15 most popular ML algorithms to classify the requirements. The novelty of this work is the empirical analysis that identifies the best combination of ML classifier and vectorization technique, which helps developers detect non-functional requirements early and take precise steps. We found that the combination of a linear support vector classifier and TF-IDF outperforms all other combinations, with an F1-score of 81.5%.
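The winning TF-IDF vectorization step can be sketched in plain Python (smoothed idf in the style of common implementations; the requirement sentences are invented for the sketch, not PROMISE_exp entries):

```python
import math
from collections import Counter

# Toy requirement statements (hypothetical, not the PROMISE_exp dataset).
docs = [
    "the system shall respond within two seconds",
    "the interface shall use the corporate color scheme",
    "response time shall not exceed two seconds",
]

def tfidf(corpus):
    """Term frequency x smoothed inverse document frequency:
    tf = count/doc_length, idf = log((1+N)/(1+df)) + 1."""
    tokenized = [d.split() for d in corpus]
    n = len(tokenized)
    df = Counter(w for doc in tokenized for w in set(doc))
    vectors = []
    for doc in tokenized:
        tf = Counter(doc)
        vectors.append({
            w: (tf[w] / len(doc)) * (math.log((1 + n) / (1 + df[w])) + 1)
            for w in tf
        })
    return vectors

vecs = tfidf(docs)
print(vecs[0])
```

Note how a term shared by every document ("shall") collapses to the minimum idf weight, while rarer, more discriminative terms such as "system" score higher, which is what makes TF-IDF useful as classifier input.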
Along with the growth of the Internet, social media usage has expanded drastically. As people share their opinions and ideas more frequently on the Internet and across various social media platforms, there has been a notable rise in the number of consumer phrases containing sentiment data. Cyberbullying reportedly leads to severe emotional and physical suffering, especially in women and young children; in certain instances, sufferers have even been reported to attempt suicide. The bully may occasionally attempt to destroy any evidence they believe incriminates them, and even when the victim retains the evidence, justice can still take a long time. This work used OCR, NLP, and machine learning to design and implement a practical method for recognizing cyberbullying from images. Eight classifier techniques are compared using two key feature representations, the BoW model and TF-IDF; these classifiers are used to understand and recognize bullying behaviours. Testing the proposed method on the cyberbullying dataset showed that linear SVC after OCR and logistic regression perform best, achieving an accuracy of 96 percent. This study provides an outline, with design and implementation details, that shapes methods for detecting online bullying from a screenshot.
The Internet of Things (IoT) has extended internet connectivity to reach not just computers and humans but most of the things in our environment. The IoT has the potential to connect billions of objects simultaneously, which can improve information sharing and, in turn, our lives. Although the benefits of the IoT are substantial, many challenges face its adoption in the real world due to its centralized server/client model, for instance the scalability and security issues that arise from the excessive number of IoT objects in the network. The server/client model requires all devices to be connected and authenticated through the server, which creates a single point of failure. Therefore, moving IoT systems onto a decentralized path may be the right decision. One popular decentralization technology is blockchain, a powerful technology that decentralizes computation and management processes and can solve many IoT issues, especially security. This paper provides an overview of the integration of blockchain with the IoT, highlighting the benefits and challenges of the integration. Future research directions for blockchain with the IoT are also discussed. We conclude that the combination of blockchain and IoT can provide a powerful approach that can significantly pave the way for new business models and distributed applications.
Alzheimer’s disease is a brain disorder that results in mental confusion, forgetfulness, and many other cognitive problems, and it affects a person's physical health too. When treating a patient with Alzheimer's disease, a proper diagnosis is crucial, especially in the earlier phases of the condition: when patients are informed of the risk of the disease, they can take preventative steps before irreparable brain damage occurs. Most machine detection techniques are constrained by congenital (present at birth) data; however, numerous recent studies have used computers for Alzheimer's disease diagnosis. The first stages of Alzheimer's disease can be diagnosed, but the illness itself cannot be predicted, since prediction is only helpful before it actually manifests. Alzheimer’s has high-risk symptoms that affect both the physical and mental health of a patient, including confusion, concentration difficulties, and much more, so with such symptoms it becomes important to detect this disease at its early stages: the patient then gets a better chance of treatment and medication. Hence, our research helps to detect the disease at its early stages. Particularly when used with brain MRI scans, deep learning has emerged as a popular tool for the early identification of AD. Here we use a 12-layer CNN comprising four convolutional layers, two pooling layers, two flatten layers, one dense layer, and three activation layers. CNNs are well known for pattern detection and image processing, and the accuracy of our model is 97.80%.
Cloud computing refers to a sophisticated technology that manipulates data on internet-based servers dynamically and efficiently. The utilization of cloud computing has increased rapidly because of its scalability, accessibility, and incredible flexibility. Dynamic usage and process-sharing facilities require task scheduling, which is a prominent issue and plays a significant role in developing an optimal cloud computing environment. Round robin is generally an efficient task scheduling algorithm that has a powerful impact on the performance of the cloud computing environment. This paper introduces a new round-robin-based task scheduling algorithm suitable for the cloud computing environment. The proposed algorithm determines the time quantum dynamically, based on the differences among the three maximum burst times of the tasks in the ready queue in each round. The distinguishing part of the proposed method is combining, in an additive manner, these differences and the burst times of the processes when determining the time quantum. The experimental results showed that the proposed approach enhances the performance of the round robin task scheduling algorithm by reducing the average turnaround time, diminishing the average waiting time, and minimizing the number of context switches. Moreover, a comparative study showed that the proposed approach outperforms some similar existing round robin approaches. Based on the experiments and the comparative study, it can be concluded that the proposed dynamic round robin scheduling algorithm is comparatively better, acceptable, and optimal for the cloud environment.
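Since the abstract does not give the exact quantum formula, the sketch below simulates round robin with one plausible dynamic rule (an illustrative assumption, not the paper's actual method) and reports the metrics the paper measures, assuming all tasks arrive at time zero:

```python
from collections import deque

def round_robin(bursts, quantum_fn):
    """Simulate round robin with a per-round quantum from quantum_fn.
    quantum_fn receives the remaining burst times of the ready queue.
    Returns (avg_waiting, avg_turnaround, context_switches)."""
    n = len(bursts)
    remaining = dict(enumerate(bursts))
    queue = deque(remaining)
    time, dispatches = 0, 0
    finish = {}
    while queue:
        q = quantum_fn([remaining[p] for p in queue])
        p = queue.popleft()
        run = min(q, remaining[p])
        time += run
        remaining[p] -= run
        if remaining[p] == 0:
            finish[p] = time
        else:
            queue.append(p)           # back of the queue, unfinished
        dispatches += 1
    turnaround = [finish[p] for p in range(n)]      # all arrive at t=0
    waiting = [finish[p] - bursts[p] for p in range(n)]
    return sum(waiting) / n, sum(turnaround) / n, dispatches - 1

def dynamic_quantum(bursts):
    """Illustrative dynamic rule (hypothetical): the smallest of the
    three largest remaining bursts plus half the spread among them."""
    top = sorted(bursts, reverse=True)[:3]
    if len(top) < 3:
        return max(top)               # fallback for short queues
    return top[2] + (top[0] - top[2]) // 2

print(round_robin([24, 3, 3], dynamic_quantum))
```

A larger, burst-aware quantum lets short tasks finish in one dispatch, which is how dynamic rules of this kind reduce waiting time and context switches relative to a small fixed quantum.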
Timing-critical path analysis is one of the most significant concerns for the VLSI designer. For the formal verification of any kind of digital chip, static timing analysis (STA) plays a vital role in checking the potential and viability of the design procedures. It indicates the timing status between the setup and hold times required with respect to the active edge of the clock. STA can also be used to identify time-sensitive paths, simulate path delays, and assess register-transfer-level (RTL) dependability. Four types of Static Random Access Memory (SRAM) controllers are used in this paper to handle the complexities of digital circuit timing analysis at the logic level. Different STA parameters such as slack, clock skew, data latency, and multiple clock frequencies are investigated in node-to-node path analysis for the diverse SRAM controllers. Using a phase-locked loop (ALTPLL), single-clock and dual-clock configurations are used to obtain the response of these controllers. For the four SRAM controllers, the timing analysis shows that no data violation exists for single and dual clocks at 50 MHz and 100 MHz. The results also show that the slack at 100 MHz is greater than that at 50 MHz. Moreover, the clock skew in our proposed design is lower than in the other three controllers because the number of paths and the number of states are reduced, and the slack value is higher than in the 1st and 2nd controllers. In timing path analysis, the slack time determines whether the design works at the desired frequency. Although 100 MHz is faster than 50 MHz, our proposed SRAM controller meets the timing requirements at 100 MHz, including the reduction of node-to-node data delay. For this reason, the proposed controller performs well compared to the others in terms of slack and clock skew.
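The setup-slack relation underlying this kind of analysis can be illustrated with the textbook formula; the path delays and setup time below are hypothetical numbers, not figures from the paper's controllers:

```python
def setup_slack(clock_period_ns, data_arrival_ns, setup_time_ns,
                clock_skew_ns=0.0):
    """Textbook setup-timing check used in STA: data must arrive before
    the capture edge, less the flip-flop setup time, adjusted for skew.
    Positive slack means the path meets timing at this frequency."""
    required = clock_period_ns + clock_skew_ns - setup_time_ns
    return required - data_arrival_ns

# Hypothetical path: a 50 MHz clock has a 20 ns period, 100 MHz a 10 ns
# period, so the same path has less setup slack at the faster clock.
print(setup_slack(20.0, 7.5, 0.5))   # 50 MHz
print(setup_slack(10.0, 7.5, 0.5))   # 100 MHz
```

A path "meets timing" as long as this slack stays non-negative at the target clock, which is the condition the paper's node-to-node analysis verifies for each controller.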