ISSN: 2310-9025 (Print)
ISSN: 2310-9033 (Online)
DOI: https://doi.org/10.5815/ijmsc
Website: https://www.mecs-press.org/ijmsc
Published By: MECS Press
Frequency: 4 issues per year
Number(s) Available: 42
IJMSC is committed to bridging the theory and practice of mathematical sciences and computing. IJMSC publishes original, peer-reviewed, and high-quality articles in the areas of mathematical sciences and computing. IJMSC is a well-indexed scholarly journal and is indispensable reading and reference for researchers and practitioners working at the cutting edge of mathematical sciences and computing applications.
IJMSC has been abstracted or indexed by several world-class databases: Google Scholar, Microsoft Academic Search, CrossRef, CNKI, Baidu Wenku, JournalTOCs, etc.
IJMSC Vol. 11, No. 1, Apr. 2025
REGULAR PAPERS
Double-black (DB) nodes have no place in red-black (RB) trees, so when DB nodes are formed they are immediately removed. The removal of DB nodes, which causes rotation and recoloring of other connected nodes, poses great challenges in the teaching and learning of RB trees. To ease this difficulty, this paper extends our previous work on the symbolic arithmetic algebraic (SA) method for removing DB nodes. The SA operations, given as Red + Black = Black, Black - Black = Red, Black + Black = DB, and DB - Black = Black, remove DB nodes and rebalance black heights in RB trees. By extension, this paper presents three SA mathematical equations, namely the general symbolic arithmetic rule Δ_[DB,r,p], partial symbolic arithmetic rule 1 ∂′_[DB,p], and partial symbolic arithmetic rule 2 ∂″_[r]. The removal of a DB node ultimately affects black heights in RB trees. To balance black heights using the SA equations, all the RB tree cases, namely LR, RL, LL, and RR, were considered in this work, and the position of the nodes connected directly or indirectly to the DB node was also tested. In this study, to balance an RB tree, the issues considered with respect to the different cases of the RB tree were i) whether a DB node has an inner, outer, or both inner and outer black nephews; or ii) whether a DB node has an inner, outer, or both inner and outer red nephews. The nephews r and x in this work are the children of the sibling s of a DB node, and further up the tree, the parent p of a DB node is their grandparent g. Thus, r and x have indirect relationships to a DB node at the point of its formation. The novelty of the SA equations lies in their effectiveness in the removal of a DB node that involves rotation of nodes as well as the recoloring of nodes along any simple path so as to balance black heights in a tree. Our SA methods assert when, where, and how to remove a DB node and which nodes to recolor. As shown in this work, the SA algorithms are faster, in terms of the number of steps taken to balance an RB tree, than the traditional RB algorithm for DB removal. The simplified and systematic approach of the SA methods has enhanced student learning and understanding of DB node removal in RB trees.
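As an illustration of the symbolic-arithmetic rules quoted in the abstract above, the short Python sketch below encodes the four color operations as lookup functions; the color encoding, function names, and example are assumptions made here for demonstration, not the authors' implementation.

```python
# Colors as symbols (encoding is an assumption for this sketch)
RED, BLACK, DB = "r", "b", "DB"

def sa_add(a, b):
    """Symbolic addition of node colors, as quoted in the abstract."""
    if {a, b} == {RED, BLACK}:
        return BLACK            # Red + Black = Black
    if (a, b) == (BLACK, BLACK):
        return DB               # Black + Black = Double Black
    raise ValueError(f"no rule for {a} + {b}")

def sa_sub(a, b):
    """Symbolic subtraction of node colors."""
    if (a, b) == (BLACK, BLACK):
        return RED              # Black - Black = Red
    if (a, b) == (DB, BLACK):
        return BLACK            # DB - Black = Black
    raise ValueError(f"no rule for {a} - {b}")

# Example: two black units form a double black; removing one black unit restores black
print(sa_sub(sa_add(BLACK, BLACK), BLACK))   # -> 'b'
```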
There are many complex issues involving incomplete data for decision-making in the field of computer science. These issues can be resolved with the aid of mathematical instruments. Rough set theory is a useful technique when dealing with incomplete data. In classical rough set theory, the information granules are equivalence classes. However, in real-life scenarios, tolerance relations play a major role. By employing rough sets with Maximal Compatibility Blocks (MCBs) rather than equivalence classes, we were able to handle the challenges in this research with ease. A novel approach to defining matrices on MCBs and operations on them is proposed. Additionally, the rough matrix approach is applied to locate a consistent block related to any set in the universal set.
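As a rough illustration of the idea above, the sketch below treats a tolerance relation as an undirected compatibility graph, takes maximal compatibility blocks as its maximal cliques, and uses the blocks as information granules for lower/upper approximations. The toy universe, relation, and use of networkx are assumptions for demonstration only; the paper's matrix construction on MCBs is not reproduced here.

```python
import networkx as nx

# Assumed toy universe and tolerance (reflexive, symmetric) relation given as compatible pairs
U = [1, 2, 3, 4, 5]
compatible = [(1, 2), (2, 3), (1, 3), (3, 4), (4, 5)]

G = nx.Graph()
G.add_nodes_from(U)
G.add_edges_from(compatible)

# Maximal compatibility blocks = maximal cliques of the compatibility graph
blocks = [frozenset(c) for c in nx.find_cliques(G)]

def approximations(X):
    """Lower/upper rough approximations of X with MCBs as information granules."""
    X = set(X)
    lower = set().union(*(b for b in blocks if b <= X))   # blocks fully inside X
    upper = set().union(*(b for b in blocks if b & X))    # blocks meeting X
    return lower, upper

print(sorted(map(sorted, blocks)))      # e.g. [[1, 2, 3], [3, 4], [4, 5]]
print(approximations({1, 2, 3}))        # lower = {1, 2, 3}, upper = {1, 2, 3, 4}
```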
Standard collocation (SCM) and perturbed collocation (PCM) are utilized as effective numerical techniques for solving fractional-order differential equations (FODEs); the approach focuses on constructing orthogonal polynomials to serve as basis functions for approximating the solutions to these equations. The approach begins by assuming an approximate solution expressed in the constructed orthogonal polynomials. These assumed solutions are then substituted into the original FODEs. Following this, the problem is converted into a system of algebraic linear equations by collocating the equations at evenly spaced interior points. Numerical examples indicate that the SCM and PCM are easy, efficient, and in good agreement with some existing methods, and the results presented in the tables and graphs demonstrate the efficacy of the proposed methods in solving fractional-order differential equations, yielding solutions of remarkable accuracy. However, since the SCM and PCM exhibit comparable accuracy, making it difficult to identify a single superior approach, we conclude that both proposed methods are effective and viable options for solving fractional-order differential equations.
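The following minimal sketch illustrates the collocation idea described above for a single Caputo-type fractional equation D^α y(t) = f(t) with a monomial trial basis: the assumed solution is substituted into the equation and collocated at evenly spaced interior points, giving a linear system. The specific equation, order α = 0.5, and basis are assumptions chosen so that the exact solution y(t) = t is recoverable; the paper's SCM/PCM constructions with orthogonal polynomials are not reproduced.

```python
import numpy as np
from math import gamma

alpha = 0.5          # assumed fractional order
n = 5                # degree of the polynomial trial solution
y0 = 0.0             # initial condition y(0)

# Right-hand side chosen so the exact solution is y(t) = t
f = lambda t: gamma(2) / gamma(2 - alpha) * t ** (1 - alpha)

def caputo_monomial(k, t):
    """Caputo derivative of t**k at t, for 0 < alpha < 1 (zero for the constant term)."""
    if k == 0:
        return 0.0
    return gamma(k + 1) / gamma(k + 1 - alpha) * t ** (k - alpha)

# Evenly spaced interior collocation points
ts = np.linspace(0, 1, n + 2)[1:-1]

A = np.zeros((n + 1, n + 1))
b = np.zeros(n + 1)
A[0, 0] = 1.0                      # initial-condition row: c_0 = y0
b[0] = y0
for i, t in enumerate(ts, start=1):
    for k in range(n + 1):
        A[i, k] = caputo_monomial(k, t)
    b[i] = f(t)

c = np.linalg.solve(A, b)          # algebraic linear system from collocation
approx = lambda t: sum(ck * t ** k for k, ck in enumerate(c))
print(approx(0.7))                 # close to the exact value 0.7
```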
We introduce the notion of a pseudo-complemented semigroup, which is a natural generalization of the notion of pseudo-complemented semilattices, and give certain properties of such semigroups. We also introduce the notion of a Baer-Stone semigroup, which is a pseudo-complemented semigroup satisfying certain additional properties.
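For readers unfamiliar with the underlying notion, the following LaTeX fragment states the standard pseudo-complement definition for a meet-semilattice with least element 0, which is the notion the paper generalizes to semigroups; the exact semigroup-level axioms are given in the paper itself.

```latex
% Pseudo-complement in a meet-semilattice with least element 0 (illustrative background):
% a^{*} is the largest element whose meet with a is 0.
a^{*} = \max\{\, x : a \wedge x = 0 \,\}, \qquad
\text{equivalently}\quad a \wedge x = 0 \iff x \le a^{*}.
```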
This paper aims at providing an in-depth refinement of switching time-variant autoregressive processes via the mode as a stable location parameter in an adopted noisy Fisher's z-distribution formulated in a Bayesian setting. Explicitly, a four-parameter Fisher's z-distribution Bayesian Mixture Autoregressive (FZBMAR) process is proposed to accommodate k mixture components of Fisher's z-switching mixture autoregressive processes, based on the shifting number of modes in the marginal density of any switching time-variant series of interest. The proposed FZBMAR process is not only used to capture what is termed the "most likely mode value" of the present conditional modal distribution given the immediate past, but also to capture the conditional modal distribution of the observations given the immediate past, which can be perceived as either asymmetric or symmetric. The proposed FZBMAR process was compared with the existing Student-t Mixture Autoregressive (StMAR) and Gaussian Mixture Autoregressive (GMAR) processes using the monthly average share prices (stock prices) of sixteen (16) swaying European economies. Based on the findings, the FZBMAR process outperformed the existing StMAR and GMAR processes in explaining the share prices of the sixteen (16) European economies, achieving a lower Pareto-Smoothed Importance Sampling Leave-One-Out cross-validation (PSIS-LOO) error in comparison with the AIC and HQIC reported by the latter. The same singly truncated Student-t prior distribution was adopted for the noisy Fisher's z hyper-parameters and the embedded autoregressive coefficients in the proposed FZBMAR process, such that their resulting posterior distributions are again singly truncated Student-t distributions (conjugate) with an embedded Gamma variate.
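As context for the model class being refined, the fragment below writes the generic k-component mixture autoregressive conditional density shared by GMAR/StMAR-type processes; in the FZBMAR case the component density f_m would be a Fisher's z density with the mode as the location parameter. The notation is an assumption for illustration, not the paper's exact parameterization.

```latex
% Generic k-component mixture autoregressive conditional density (illustrative form):
f\bigl(y_t \mid \mathcal{F}_{t-1}\bigr)
  = \sum_{m=1}^{k} \alpha_{m,t}\,
    f_m\!\bigl(y_t \mid \mu_{m,t}, \boldsymbol{\vartheta}_m\bigr),
\qquad
\mu_{m,t} = \varphi_{m,0} + \sum_{i=1}^{p} \varphi_{m,i}\, y_{t-i},
\qquad
\sum_{m=1}^{k} \alpha_{m,t} = 1 .
```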
With the reform of the Chinese economic system, the development of enterprises is facing many risks and challenges. In order to understand the operating state of an enterprise, it is necessary to apply relevant methods to evaluate enterprise performance. Taking the Industrial and Commercial Bank of China as an example, this paper selects its financial data from 2018 to 2021. First, DuPont analysis is applied to decompose the return on equity into the product of the profit margin on sales, the total assets turnover ratio, and the equity multiplier. The paper then analyzes the effect of the changes in these three factors on the return on equity using the chain substitution method. The results show that the effect of the profit margin on sales on the return on equity decreases year by year and turns from negative to positive. The effect of the total assets turnover ratio on the return on equity changes from positive to negative and then to positive, while the effect of the equity multiplier is the opposite. These results provide a direction for adjusting the return on equity of the Industrial and Commercial Bank of China. Finally, based on the results, some suggestions are put forward for the development of the Industrial and Commercial Bank of China.
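A minimal sketch of the DuPont decomposition and the chain substitution attribution described above is given below; the numerical figures are hypothetical placeholders, not ICBC's actual financial data.

```python
# Hypothetical illustrative figures (not ICBC's actual financials)
base = dict(margin=0.45, turnover=0.040, multiplier=11.0)   # prior year
curr = dict(margin=0.47, turnover=0.038, multiplier=11.5)   # current year

# DuPont identity: ROE = profit margin on sales * total assets turnover * equity multiplier
roe = lambda d: d["margin"] * d["turnover"] * d["multiplier"]

# Chain substitution: replace one factor at a time and attribute the change
step0 = roe(base)
step1 = roe({**base, "margin": curr["margin"]})
step2 = roe({**base, "margin": curr["margin"], "turnover": curr["turnover"]})
step3 = roe(curr)

effects = {
    "profit margin effect": step1 - step0,
    "asset turnover effect": step2 - step1,
    "equity multiplier effect": step3 - step2,
}
print(f"total ROE change: {step3 - step0:+.4%}")
for name, eff in effects.items():
    print(f"{name}: {eff:+.4%}")
```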
The process of making decisions on software architecture is of the greatest significance for the success of a software system. Software architecture establishes the framework of the system, specifies its characteristics, and has significant effects across the whole life cycle of the system. The complicated characteristics of the software development context and the significance of the problem have led the research community to build various methodologies focused on supporting software architects in improving their decision-making abilities. Despite these efforts, the adoption of such systematic methodologies appears to be somewhat constrained in practical application. Moreover, decision-makers must overcome unexpected difficulties due to varying software development processes that propose distinct approaches for architecture design. Understanding these design approaches helps to develop the architectural design framework. In the area of software architecture, a significant change has occurred wherein the focus has shifted from primarily identifying the result of the architecting process, which was mainly expressed through the representation of components and connectors, to documenting architectural design decisions and the underlying reasoning behind them. This shift ultimately results in the creation of an architectural design framework. Therefore, a correct decision-making approach is needed to design the software architecture. The present study analyzes design decisions and proposes a new design decision model for software architecture. This study introduces a new approach to the decision-making model, wherein software architecture design is viewed in terms of specific decisions.
In the software development industry, ensuring software quality holds immense significance due to its direct influence on user satisfaction, system reliability, and the overall end-user experience. Traditionally, the development process involved identifying and rectifying defects after the implementation phase, which could be time-consuming and costly. This study examines software development methodologies, with a specific emphasis on Test-Driven Development (TDD), to evaluate its effectiveness in improving software quality. The study employs a mixed-methods approach, combining quantitative surveys and qualitative interviews, to comprehensively investigate the impact of Test-Driven Development on various facets of software quality. The survey findings reveal that Test-Driven Development offers substantial benefits in terms of early defect detection, leading to reduced cost and effort in rectifying issues during the development process. Moreover, Test-Driven Development encourages improved code design and maintainability, fostering the creation of modular and loosely coupled code structures. These results underscore the pivotal role of Test-Driven Development in elevating code quality and maintainability. Comparative analysis with traditional development methodologies highlights Test-Driven Development's effectiveness in enhancing software quality, as rated highly by respondents. Furthermore, it clarifies Test-Driven Development's positive impact on user satisfaction, overall product quality, and code maintainability. Challenges related to Test-Driven Development adoption are identified, such as the initial time investment in writing tests and difficulties adapting to changing requirements, and strategies to mitigate these challenges are proposed, contributing to the practical application of Test-Driven Development. The study offers valuable insights into the efficacy of Test-Driven Development in enhancing software quality: it not only highlights the benefits of Test-Driven Development but also provides a framework for addressing challenges and optimizing its utilization. This knowledge is invaluable for software development teams, project managers, and quality assurance professionals, facilitating informed decisions regarding adopting and implementing Test-Driven Development as a quality assurance technique in software development.
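As a small illustration of the test-first workflow evaluated in the study, the sketch below shows the red-green-refactor cycle using Python's unittest module; the function and test names are invented for demonstration.

```python
import unittest

# Step 1 (red): write the failing test first, before the behaviour exists
class TestDiscount(unittest.TestCase):
    def test_ten_percent_discount(self):
        self.assertAlmostEqual(apply_discount(200.0, 0.10), 180.0)

# Step 2 (green): write the minimal production code that makes the test pass
def apply_discount(price: float, rate: float) -> float:
    return price * (1.0 - rate)

# Step 3 (refactor): improve the code with the test acting as a safety net
if __name__ == "__main__":
    unittest.main()
```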
Cloud computing is a widely accepted computing environment, and its services are widely available. However, energy consumption is one of the major issues of cloud computing from a green computing perspective, because many electronic resources, such as processing and storage devices at both the client and server sides and network devices such as switches and routers, are the main consumers of energy in the cloud, and additional power is required to cool the IT load during computation. Due to this high consumption, cloud resources incur high energy costs during the service activities of cloud computing and contribute more carbon emissions to the atmosphere. These two issues have inspired cloud companies to develop renewable cloud sustainability regulations to control energy costs and the rate of CO2 emission. The main purpose of this paper is to develop a green computing environment by saving the energy of cloud resources using a specific approach that identifies the computing resources required during the computation of cloud services. Only the required computing resources remain ON (working state), and the rest are turned OFF (sleep/hibernate state) to reduce energy use in cloud data centers. This approach is more efficient than other available approaches based on cloud service scheduling or migration and virtualization of services in the cloud network. It reduces the data center's energy usage by applying a power management scheme (ON/OFF) to computing resources. The proposed approach helps convert cloud computing into green computing by identifying an appropriate number of cloud computing resources, such as processing nodes, servers, disks, and switches/routers, during any service computation on the cloud to manage energy saving and environmental impact.
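A minimal sketch of the ON/OFF power management idea described above: given the current service demand, only enough resources to cover it are kept in the working state and the rest are put to sleep. The resource model, capacities, and greedy selection are assumptions made for illustration, not the paper's exact scheme.

```python
from dataclasses import dataclass

@dataclass
class Server:
    name: str
    capacity: int            # load units this resource can handle
    state: str = "OFF"       # OFF = sleep/hibernate, ON = working

def schedule(servers, demand):
    """Turn ON only as many servers as the current demand requires; the rest sleep."""
    servers = sorted(servers, key=lambda s: s.capacity, reverse=True)
    remaining = demand
    for s in servers:
        if remaining > 0:
            s.state = "ON"
            remaining -= s.capacity
        else:
            s.state = "OFF"
    return servers

pool = [Server("node1", 40), Server("node2", 40), Server("node3", 20), Server("node4", 20)]
for s in schedule(pool, demand=55):
    print(s.name, s.state)       # only node1 and node2 stay ON for a demand of 55
```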
Since the inception of Blockchain, computer databases have been evolving into innovative technologies. As new technologies emerge, the use of Blockchain is also flourishing. All technologies built on Blockchain use a mutual algorithm to operate. The consensus algorithm is the process that assures mutual agreement and stores information in the decentralized database of the network. Blockchain's biggest drawback is its limited scalability. However, using the correct consensus for the relevant work can ensure efficiency in data storage, transaction finality, and data integrity. In this paper, a comparative study is made of the following consensus algorithms: Proof of Work (PoW), Proof of Stake (PoS), Proof of Authority (PoA), and Proof of Vote (PoV). This study aims to provide readers with elementary knowledge about blockchain, more specifically its consensus protocols: their origins, how they operate, and their strengths and weaknesses. We have studied these consensus protocols and uncovered some of their advantages and disadvantages in relation to characteristics such as security, energy efficiency, scalability, and IoT (Internet of Things) compatibility. This information will assist future researchers in understanding the characteristics of the selected consensus algorithms.
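To make one of the compared protocols concrete, the sketch below shows the core loop of Proof of Work: searching for a nonce whose hash meets a difficulty target, while verification costs only a single hash. The difficulty value and block payload are assumptions for demonstration.

```python
import hashlib

def proof_of_work(block_data: str, difficulty: int = 4) -> tuple[int, str]:
    """Find a nonce whose SHA-256 hash of (data + nonce) starts with `difficulty` zeros."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest
        nonce += 1

nonce, digest = proof_of_work("block #1: Alice pays Bob 5 coins")
print(nonce, digest)   # finding the nonce is costly; checking it is a single hash
```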
Fog computing extends cloud computing by transferring computation to the edge of the network, such as mobile collaborative devices or fixed nodes with built-in data storage, computing, and communication capabilities. Fog offers the advantages of enhanced efficiency, better security, network bandwidth savings, and scalability. In order to present the essential details of fog computing, we describe the characteristics of this area and distinguish it from cloud computing research. Cloud computing is a developing technology that provides computing resources for a particular task on a pay-per-use basis; it offers services through three distinct models and provides inexpensive, centrally managed resources for reliable computing of the required tasks. This paper compares and characterizes fog and cloud computing, which differ in design, deployment, services, and devices for organizations and users. The comparison shows that fog provides a more flexible infrastructure and better data-processing service by consuming low network bandwidth instead of shifting all data to the cloud.
Predicting human emotion from speech is now an important research topic, since one's mental state can be understood through emotion. The proposed research work is emotion recognition from human speech. The proposed system plays a significant role in recognizing emotion while someone is talking and has great use in a smart home environment: one can understand the emotion of another person who is at home or elsewhere. Universities, service centers, or hospitals can obtain a valuable decision support system with this emotion prediction system. Features such as MFCC (Mel-Frequency Cepstral Coefficients) and LPC are extracted from the audio signal. Audio samples are collected by recording speeches. A test is also applied by combining the self-collected dataset, named ABEG, with the popular RAVDESS dataset. MFCC and LPC features are used in this study to train and test models for predicting emotion. The study covers the angry, happy, and neutral emotion classes. Different machine learning algorithms are applied, and their results are compared with each other. Logistic regression performs well compared to the other ML algorithms.
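A minimal sketch of the feature extraction and classification pipeline described above, assuming the librosa and scikit-learn libraries; the file names and labels are hypothetical stand-ins for the ABEG/RAVDESS recordings, and LPC features are omitted for brevity.

```python
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

def extract_features(path: str, n_mfcc: int = 13) -> np.ndarray:
    """Mean MFCC vector of one utterance (a common compact speech representation)."""
    y, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)

# Hypothetical file list and labels standing in for the real recordings
files = ["angry_01.wav", "happy_01.wav", "neutral_01.wav"]
labels = ["angry", "happy", "neutral"]

X = np.vstack([extract_features(f) for f in files])
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print(clf.predict(X[:1]))    # predicted emotion class for the first utterance
```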
Security in digital communication is becoming more important as the number of systems connected to the internet grows day by day. It is necessary to protect secret messages during transmission over insecure channels of the internet; thus, data security becomes an important research issue. Steganography is a technique that embeds secret information into a carrier such as images, audio files, text files, and video files so that it cannot be observed. In this paper, a new spatial-domain image steganography method is proposed to ensure the privacy of digital data during transmission over the internet. In this method, least significant bit substitution is used, where the information is embedded at random bit positions of a random pixel location of the cover image using a Pseudo Random Number Generator (PRNG). The proposed method uses a 3-3-2 approach to hide a byte in a pixel of a 24-bit color image. The method uses a PRNG in two different stages of the embedding process: the first is used to select random pixels, and the second is used to select random bit positions within the R, G, and B values of a pixel to embed one byte of information. Due to this randomization, the security of the system is expected to increase, and the method achieves a very high maximum hiding capacity, which signifies the importance of the proposed method.
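The sketch below illustrates only the 3-3-2 bit split of one byte across the R, G, and B channels of a single pixel; the two PRNG stages that select random pixels and random bit positions in the actual method are omitted, and the example pixel value is arbitrary.

```python
def embed_byte(pixel, byte):
    """Hide one byte in a single RGB pixel using a 3-3-2 split of its bits.

    In the full scheme a PRNG would also choose which pixel and which bit
    positions to use; here the lowest bits of a fixed pixel are used for brevity.
    """
    r, g, b = pixel
    hi3, mid3, lo2 = byte >> 5, (byte >> 2) & 0b111, byte & 0b11
    r = (r & ~0b111) | hi3      # 3 bits of the secret byte into R
    g = (g & ~0b111) | mid3     # 3 bits into G
    b = (b & ~0b11) | lo2       # 2 bits into B
    return r, g, b

def extract_byte(pixel):
    r, g, b = pixel
    return ((r & 0b111) << 5) | ((g & 0b111) << 2) | (b & 0b11)

stego = embed_byte((200, 117, 53), ord("A"))
print(stego, chr(extract_byte(stego)))   # pixel barely changes; the byte round-trips to 'A'
```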
To measure the difference between two fuzzy sets or intuitionistic fuzzy sets, we can use distance measures and dissimilarity measures between fuzzy sets. The characterization of distance/dissimilarity measures between fuzzy sets/intuitionistic fuzzy sets is important, as it has applications in different areas: pattern recognition, image segmentation, and decision making. The picture fuzzy set (PFS) is a generalization of the fuzzy set and the intuitionistic fuzzy set, so it has many applications. In this paper, we introduce the concepts of the difference between picture fuzzy sets and of distance and dissimilarity measures between picture fuzzy sets, and provide formulas for determining these values. We also present an application of dissimilarity measures in multi-attribute decision making.
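For orientation, the fragment below shows one common normalized-Hamming-style dissimilarity between two picture fuzzy sets A and B with positive, neutral, and negative memberships (μ, η, ν); the measures actually proposed in the paper may differ from this illustrative form.

```latex
% Illustrative normalized-Hamming-style dissimilarity between picture fuzzy sets A, B
% on U = {x_1, ..., x_n}, where each element carries (\mu, \eta, \nu) with \mu+\eta+\nu \le 1:
D(A,B) = \frac{1}{3n} \sum_{i=1}^{n}
  \Bigl( \lvert \mu_A(x_i)-\mu_B(x_i)\rvert
       + \lvert \eta_A(x_i)-\eta_B(x_i)\rvert
       + \lvert \nu_A(x_i)-\nu_B(x_i)\rvert \Bigr).
```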
Currently, every company is concerned about the retention of its staff, yet companies are often unable to recognize the genuine reasons for job resignations due to various circumstances. Each business has its own approach to treating employees and ensuring their satisfaction. As a result, many employees abruptly terminate their employment for no apparent reason. Machine learning (ML) approaches have grown in popularity among researchers in recent decades and are capable of proposing answers to a wide range of issues; machine learning can therefore be used to generate predictions about staff attrition. In this research, distinct methods are compared to identify which workers are most likely to leave their organization. Two approaches are used to divide the dataset into training and test data: a 70/30 train-test split and K-Fold cross-validation. CatBoost, LightGBM, and XGBoost are the three methods employed for accuracy comparison; all three are gradient boosting algorithms.
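A minimal sketch of the two evaluation protocols mentioned above, using scikit-learn's GradientBoostingClassifier on synthetic data as a stand-in; CatBoost, LightGBM, and XGBoost expose a similar fit/score interface and can be substituted directly.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split, cross_val_score, KFold
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic stand-in for an HR attrition table (features -> left/stayed label)
X, y = make_classification(n_samples=1000, n_features=12, random_state=0)

# Approach 1: 70/30 hold-out split
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.30, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
print("hold-out accuracy:", model.score(X_te, y_te))

# Approach 2: K-Fold cross-validation
scores = cross_val_score(GradientBoostingClassifier(random_state=0), X, y,
                         cv=KFold(n_splits=5, shuffle=True, random_state=0))
print("5-fold mean accuracy:", scores.mean())
```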
Many different methods are applied in attempts to solve higher-order nonlinear boundary value problems (BVPs). The Galerkin weighted residual method (GWRM) is widely used to solve BVPs. The main aim of this paper is to find approximate solutions of fifth-, seventh-, and ninth-order nonlinear boundary value problems using the GWRM. A trial function, namely Bezier polynomials, is assumed and made to satisfy the given essential boundary conditions. To investigate the effectiveness of the current method, some numerical examples were considered. The results are depicted both graphically and numerically. The numerical solutions are in good agreement with the exact results and achieve high accuracy. The present method is quite efficient and yields better results when compared with the existing methods. All problems were solved using the software MATLAB R2017a.
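The sketch below applies the Galerkin weighted residual recipe with a Bernstein (Bezier) basis to a low-order linear toy BVP so the steps stay short; the paper's fifth-, seventh-, and ninth-order nonlinear problems and its MATLAB implementation are not reproduced. The toy equation, polynomial degree, and weight functions are assumptions.

```python
import sympy as sp

x = sp.symbols("x")
n = 4                                    # degree of the Bernstein (Bezier) basis

# Bernstein/Bezier basis on [0, 1]
B = [sp.binomial(n, i) * x**i * (1 - x)**(n - i) for i in range(n + 1)]

# Toy linear BVP (stand-in for the paper's higher-order nonlinear problems):
#   u'' = -pi**2 * sin(pi*x),  u(0) = u(1) = 0,  exact solution u = sin(pi*x)
c = sp.symbols(f"c0:{n + 1}")
trial = x * (1 - x) * sum(ci * Bi for ci, Bi in zip(c, B))   # satisfies the BCs exactly

residual = sp.diff(trial, x, 2) + sp.pi**2 * sp.sin(sp.pi * x)

# Galerkin weighted-residual conditions: residual orthogonal to each weight function
weights = [x * (1 - x) * Bi for Bi in B]
eqs = [sp.integrate(residual * w, (x, 0, 1)) for w in weights]
sol = sp.solve(eqs, c)

u = sp.simplify(trial.subs(sol))
print(sp.N(u.subs(x, 0.5)), "vs exact", sp.N(sp.sin(sp.pi * 0.5)))
```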
Quantum computing is a computational framework based on quantum mechanics that has received a lot of attention in the past few decades. In comparison to traditional computers, it has achieved amazing performance on several specialized tasks. Quantum computing is the study of quantum computers that use quantum-mechanical phenomena such as entanglement, superposition, annealing, and tunneling to solve problems that humans cannot solve in their lifetime. This article offers a brief outline of what is happening in the field of quantum computing, as well as the current state of the art. It also summarizes the features of quantum computing in terms of major elements such as qubit computation, quantum parallelism, and reversible computing. The study investigates the source of a quantum computer's great computing capability in its use of quantum entangled states. It also emphasizes that quantum computer research requires a combination of the most sophisticated sciences, such as computer technology, micro-physics, and advanced mathematics.
Among the factors affecting face recognition and verification, the aging of individuals is a particularly challenging one. Unlike other factors such as pose, expression, and illumination, aging is uncontrollable, personalized, and takes place throughout human life. Thus, while the effects of factors such as head pose, illumination, and facial expression on face recognition can be minimized by using images from controlled environments, the effect of aging cannot be controlled in this way. This work exploits the personalized nature of aging to reduce its effect on face recognition so that an individual can be correctly recognized across his or her different age-separated face images. To achieve this, an individualized face pairing method was developed in which faces are paired against entire sets of faces grouped by individual; similarity score vectors are then obtained for both matching and non-matching image-individual pairs, and these vectors are used for age-invariant face recognition. This model has the advantage of capturing all possible face matchings (intra-class and inter-class) within a face dataset without having to compute all possible image-to-image pairs, which reduces the computational demand of the model without compromising the impact of the aging factor on the identity of the human face. The developed model was evaluated on the publicly available FG-NET dataset, two subsets of the CACD dataset, and a locally obtained FAGE dataset using leave-one-person-out (LOPO) cross-validation, achieving recognition accuracies of 97.01%, 99.89%, 99.92%, and 99.53%, respectively. The developed model can be used to improve face recognition models by making them robust to age variations among individuals in the dataset.
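A rough sketch of the image-to-individual pairing idea described above: a probe embedding is scored against each person's whole set of age-separated images, giving one similarity score vector per probe. The embeddings, dimensionality, and max-pooling of scores are assumptions for illustration, not the authors' exact model.

```python
import numpy as np

def similarity_vector(probe, gallery_by_person):
    """Cosine-similarity score of a probe embedding against each person's full image set."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return {pid: max(cos(probe, img) for img in imgs)
            for pid, imgs in gallery_by_person.items()}

rng = np.random.default_rng(0)
gallery = {"person_A": rng.normal(size=(4, 128)),   # hypothetical 128-d face embeddings
           "person_B": rng.normal(size=(3, 128))}
probe = gallery["person_A"][0] + 0.05 * rng.normal(size=128)

scores = similarity_vector(probe, gallery)          # one score per individual, not per image
print(max(scores, key=scores.get))                  # -> 'person_A'
```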
An earlier research project that converted ASCII codes into 2D Cartesian coordinates and then applied translation and rotation transformations to construct an encryption system is improved by this study. Here, we present a variation of the Cantor pairing function to convert ASCII values into distinctive 2D coordinates. Then, we apply some novel methods to jumble the ciphertext generated as a result of the transformations. We suggest numerous improvements to the earlier research via simple tweaks to the existing code and by introducing a novel key generation protocol that generates an infinite integral key space with no decryption failures. The only way to break this protocol with no prior information would be a brute force attack. With the help of elementary combinatorics and probability, we argue that this encryption protocol is seemingly infeasible for an adversary to overcome.
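For context, the sketch below shows the classic Cantor pairing function and its inverse, which maps an integer such as an ASCII code to a unique 2D lattice point; the paper uses a variation of this map plus key-driven transformations, which are not reproduced here.

```python
import math

def cantor_pair(x: int, y: int) -> int:
    """Classic Cantor pairing: a bijection from pairs of naturals to naturals."""
    return (x + y) * (x + y + 1) // 2 + y

def cantor_unpair(z: int) -> tuple[int, int]:
    """Inverse pairing: maps an integer (e.g. an ASCII code) to a unique 2D point."""
    w = (math.isqrt(8 * z + 1) - 1) // 2
    t = w * (w + 1) // 2
    y = z - t
    return w - y, y

# Each character gets a distinct lattice point, which can then be translated/rotated
plain = "Hi"
points = [cantor_unpair(ord(ch)) for ch in plain]
print(points)
print("".join(chr(cantor_pair(x, y)) for x, y in points))   # round-trips to 'Hi'
```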
Forecasting is estimating the magnitude of uncertain future events; it provides different results under different suppositions. In order to identify the core data pattern of jute bale requirements for yarn production, we examined 10 years' worth of data on jute yarn/twine shipped by member mills. Exponential smoothing and Holt's methods are commonly used for this kind of forecast because they provide adequate results. Selecting the right smoothing constant value is essential for reducing forecasting errors. In this work, we created a method for choosing the optimal value of the smoothing constant to reduce the errors measured by the mean squared error (MSE), mean absolute deviation (MAD), and mean absolute percentage error (MAPE). Finally, we discuss the research findings and future possibilities so that Jute Mills Limited and similar companies may carry out forecasting smoothly and develop the expertise of their procurement systems to stay competitive in the worldwide market.
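A minimal sketch of choosing the smoothing constant by minimizing the one-step-ahead MSE of simple exponential smoothing over a grid; the demand series is a toy placeholder, and the MAD/MAPE criteria and Holt's method follow the same pattern.

```python
import numpy as np

def ses_forecast_errors(series, alpha):
    """One-step-ahead errors of simple exponential smoothing with constant alpha."""
    level = series[0]
    errors = []
    for obs in series[1:]:
        errors.append(obs - level)          # forecast for this step is the current level
        level = alpha * obs + (1 - alpha) * level
    return np.array(errors)

def best_alpha(series, grid=np.linspace(0.01, 0.99, 99)):
    """Pick the smoothing constant that minimizes the mean squared error (MSE)."""
    mses = [(a, np.mean(ses_forecast_errors(series, a) ** 2)) for a in grid]
    return min(mses, key=lambda t: t[1])

demand = np.array([120, 135, 128, 150, 160, 144, 170, 165, 180, 175], float)  # toy data
alpha, mse = best_alpha(demand)
print(f"alpha = {alpha:.2f}, MSE = {mse:.2f}")
```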
There exist numerous numerical methods for solving initial value problems of ordinary differential equations; their accuracy and computational time are not the same. In this article, the Modified Euler method is discussed for finding accurate solutions of ordinary differential equations using different step sizes. Approximate results obtained with different step sizes are shown in a result analysis table. Some problems are solved by the proposed method, and the approximate results are shown graphically against the exact solution for a better understanding of the accuracy of this method. Errors are estimated for each step and are represented graphically using the MATLAB programming language and MS Excel, which reveals that a smaller step size gives better accuracy with less computational error. It is observed that this method is suitable for obtaining accurate solutions of ODEs when the step sizes taken are sufficiently small.
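A minimal sketch of the Modified Euler (Heun) predictor-corrector scheme discussed above, run on the toy problem y' = y, y(0) = 1 at several step sizes to show the error shrinking as the step size decreases; the test problem and step sizes are assumptions.

```python
import math

def modified_euler(f, t0, y0, h, steps):
    """Modified Euler (Heun) method: Euler predictor plus trapezoidal corrector."""
    t, y = t0, y0
    out = [(t, y)]
    for _ in range(steps):
        predictor = y + h * f(t, y)                       # explicit Euler predictor
        y = y + h / 2 * (f(t, y) + f(t + h, predictor))   # corrector (average slope)
        t += h
        out.append((t, y))
    return out

# Example IVP: y' = y, y(0) = 1, exact solution e^t
for h in (0.1, 0.05, 0.025):
    t_end, y_end = modified_euler(lambda t, y: y, 0.0, 1.0, h, round(1 / h))[-1]
    print(f"h={h}:  y(1) ~ {y_end:.6f}   error = {abs(y_end - math.e):.2e}")
```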
Data mining and machine learning are important areas in which studies have increased in recent years. Data is critical for these areas, which focus on inferring meaningful conclusions from the collected data. The preparation of the data is very important for the studies to be carried out and the algorithms to be applied, and one of the most critical steps in data preparation is outlier detection, because observations that have different characteristics from the rest of the data affect the results of the algorithms to be applied and may cause erroneous results. New methods have been developed for outlier detection, and machine learning and data mining algorithms have achieved successful results with these methods. Algorithms such as Fuzzy C-Means (FCM) and Self-Organizing Maps (SOM) have given successful results for outlier detection in this area; however, there is no outlier detection method in which these two powerful clustering methods are used together. This study proposes a new outlier detection algorithm (FUSOMOUT) that uses the SOM and FCM clustering methods together, with the aim of increasing the success of both clustering and classification algorithms. The proposed algorithm was applied to four datasets with different characteristics (the Wisconsin breast cancer dataset (WDBC), Wine, Diabetes, and Kddcup99) and was shown to significantly increase classification accuracy, with the Silhouette, Calinski-Harabasz, and Davies-Bouldin indexes used as clustering success indexes.
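Since the exact FUSOMOUT combination is not spelled out in the abstract, the sketch below illustrates only the fuzzy C-means half: points unusually far from their best-matching cluster center are flagged as outliers. The data, threshold rule, and parameters are assumptions for demonstration; the SOM stage is omitted.

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=100, seed=0):
    """Plain fuzzy C-means: returns cluster centers and the membership matrix U (n x c)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)) * np.sum(d ** (-2 / (m - 1)), axis=1, keepdims=True))
    return centers, U

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(8, 1, (50, 2)), [[20.0, 20.0]]])

centers, U = fuzzy_c_means(X, c=2)
# Distance of each point to the center of its best-matching (highest-membership) cluster
dist_to_best = np.take_along_axis(
    np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2),
    U.argmax(axis=1)[:, None], axis=1).ravel()
threshold = dist_to_best.mean() + 3 * dist_to_best.std()
print(np.where(dist_to_best > threshold)[0])    # the injected point (index 100) is flagged
```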