ISSN: 2074-904X (Print)
ISSN: 2074-9058 (Online)
DOI: https://doi.org/10.5815/ijisa
Website: https://www.mecs-press.org/ijisa
Published By: MECS Press
Frequency: 6 issues per year
Number(s) Available: 128
ICV (2014): 7.09
SJR (2019): 0.241
IJISA is committed to bridging the theory and practice of intelligent systems. From innovative ideas to specific algorithms and full system implementations, IJISA publishes original, peer-reviewed, high-quality articles in the areas of intelligent systems. IJISA is a well-indexed scholarly journal and is indispensable reading and reference for people working at the cutting edge of intelligent systems and applications.
IJISA has been abstracted or indexed by several world-class databases: Google Scholar, Microsoft Academic Search, CrossRef, Baidu Wenku, IndexCopernicus, IET Inspec, EBSCO, JournalSeek, ULRICH's Periodicals Directory, WorldCat, Scirus, Academic Journals Database, Stanford University Libraries, Cornell University Library, UniSA Library, CNKI Scholar, ProQuest, J-Gate, ZDB, BASE, OhioLINK, iThenticate, Open Access Articles, Open Science Directory, National Science Library of Chinese Academy of Sciences, The HKU Scholars Hub, etc.
IJISA Vol. 15, No. 6, Dec. 2023
REGULAR PAPERS
In the computational study and automatic recognition of opinions in free text, particular words in a sentence are used to decide its sentiment. Analysing each customer's opinion individually in churn management can be effective for personalised recommendations, but a single opinion is often insufficient for contextualised content mining. Per-customer personalisation is also time-consuming and does not give a complete picture of the overall sentiment in a business's community of customers. To help businesses identify widespread issues affecting large customer segments and uncover patterns and trends in customer churn behaviour, we developed clustered contextualised conversations as opinion sets for integration with the RoBERTa model. The resulting churn-behaviour opinion clusters disambiguate short messages while characterising content collectively by context, going beyond keyword-based sentiment matching for effective mining. Based on the predicted opinion threshold, a customer churn category with matching concepts was generated for group-based personalised decision support. The baseline RoBERTa model, trained on the contextually clustered opinions with a batch size of 16, a learning rate of 2e-5, 8 epochs, a maximum sequence length of 128, and otherwise standard hyperparameters, achieved 92% accuracy, 88% precision, 86% recall, and an F1 score of 84% on a 30% test split.
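The training setup stated in the abstract can be collected into a small configuration sketch. Only the hyperparameter values come from the abstract; the dictionary layout and the `summarize_config` helper are illustrative (a real run would pass these values to a RoBERTa fine-tuning routine, e.g. a Hugging Face `Trainer`):

```python
# Hyperparameters as stated in the abstract; the fine-tuning framework
# and this dict layout are assumptions for illustration.
ROBERTA_CONFIG = {
    "batch_size": 16,
    "learning_rate": 2e-5,
    "epochs": 8,
    "max_sequence_length": 128,
    "test_split": 0.30,   # 30% held-out test set
}

def summarize_config(cfg):
    """Render the fine-tuning setup as a short human-readable line."""
    return (f"batch={cfg['batch_size']}, lr={cfg['learning_rate']}, "
            f"epochs={cfg['epochs']}, max_len={cfg['max_sequence_length']}")

print(summarize_config(ROBERTA_CONFIG))
```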
Today's students are exposed to technologies that can be either educationally effective or distracting. Many have a hard time learning in a traditional classroom setup, are easily distracted, and have difficulty remembering lessons just learned and the prerequisite skills needed for new ones. A Game-Based Intelligent Tutoring System (GB-ITS) is a technology that provides an individualized learning experience based on a student's learning needs. A GB-ITS mimics a teacher doing one-on-one teaching, also known as tutoring, more cost-efficiently than human tutors. This study developed a general-purpose Memory Enhancer Games (MEG) system in the form of a GB-ITS. Conducted at Calasiao Comprehensive National High School, the study identified the game type that best enhances memory, and the game features for the proposed system, through a questionnaire answered by nine ICT teacher respondents. The developed system underwent validity testing by eight ICT teachers and professors from Schools Division I of Pangasinan and a university in Dagupan City, and acceptability testing by 100 senior high school students of Calasiao Comprehensive National High School during the 1st semester of school year 2022-2023, using a Likert scale to determine its appropriateness as an intelligent learning tool. The results of the game design questionnaire confirmed earlier studies on which elements are ideal for a GB-ITS, and the validity and acceptability survey questionnaires, with overall weighted means of 4.57 and 4.08 respectively, show that the system is a valid and acceptable intelligent learning tool. The developed MEG can also be used to test game features for educational effectiveness, and can inform future studies on whether a general-purpose GBL or GB-ITS model matches the effectiveness of GBLs designed to deliver specific contents or subjects.
Pong is a simple but entertaining game of logic control. This paper presents the design and implementation of an FPGA-based Pong game that runs on an Altera DE2 board using Verilog HDL. It explains the VGA controller, object creation and animation, and the text subsystem, and of course how to link them all together into a functioning circuit. The design offers both a single-player mode and an interesting multi-player mode; the multiplayer mode features both real-time and automatic players to create a competitive atmosphere. The design method is less complicated, processes faster, and economizes on memory and logic elements: the single-player mode uses 1.3% of total logic elements, the two-player mode 1.32%, and automatic player versus real player 1.456%, which is very small compared with other gaming schemes and reduces processing time, making it cost-effective for universal use. All modules are designed in Verilog HDL, and synthesis is done on the Altera DE2 FPGA. Functional simulation and synthesis show that the design is universally usable, combines the different modules into one that provides sound entertainment, and extends electronics application-based work in the future.
Agricultural development is a critical strategy for promoting prosperity and addressing the challenge of feeding nearly 10 billion people by 2050. Plant diseases can significantly impact food production, reducing both quantity and diversity. Early detection of plant diseases through automatic, deep learning-based methods can therefore improve food production quality and reduce economic losses. While previous models have been implemented for a single type of plant to ensure high accuracy, they require high-quality images for proper classification and are not effective with low-resolution images. To address these limitations, this paper proposes the use of a pre-trained model based on convolutional neural networks (CNNs) for plant disease detection. The focus is on fine-tuning the hyperparameters of a popular pre-trained model, EfficientNetV2S, to achieve higher accuracy in detecting plant diseases in lower-resolution images, crowded and misleading backgrounds, shadows on leaves, different textures, and changes in brightness. The study utilized the Plant Diseases Dataset, which includes infected and uninfected crop leaves across 38 classes. To improve the adaptability and robustness of the networks, we intentionally exposed them to a deliberately noisy training dataset, produced by modifying the Plant Diseases Dataset to better suit the demands of the training process. The objective was to enhance the network's ability to generalize effectively and perform robustly in real-world scenarios. This approach is a critical step towards the study's overarching goal of advancing plant disease detection under challenging conditions, and underscores the importance of dataset optimization in deep learning applications.
The task of path planning is extensively investigated in mobile robotics to determine a suitable path for the robot from a source point to a target point. The intended path should satisfy purposes such as collision avoidance, shortest length, or power saving. For a mobile robot, many constraints should be considered when selecting a path-planning algorithm, such as a static or dynamic environment and a holonomic or non-holonomic robot. There is a pool of path-planning algorithms in the literature; however, Dijkstra remains one of the most effective due to its simplicity and its ability to compute the single-source shortest path to every position in the workspace. Researchers have proposed several versions of the Dijkstra algorithm, especially in mobile robotics. In this paper, we propose an improved approach based on the Dijkstra algorithm with a simple sampling method that samples the workspace to avoid the exhaustive search of the Dijkstra algorithm, which consumes time and resources. The goal is to identify the same optimal shortest path as the Dijkstra algorithm with minimum time and a minimum number of turns, i.e., a smoothed path. Simulation results show that the proposed method improves on the Dijkstra algorithm with respect to running time and the number of turns of the mobile robot, and outperforms the RRT algorithm with respect to path length.
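As a reference point for the exhaustive baseline the paper improves on, a minimal grid-based Dijkstra search can be sketched as follows; the occupancy-grid encoding and function name are illustrative, not the paper's implementation:

```python
import heapq

def dijkstra_grid(grid, start, goal):
    """Single-source shortest path on a 4-connected occupancy grid.
    grid[r][c] == 1 marks an obstacle. Returns path length in moves, or None."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0}
    pq = [(0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), float("inf")):
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nd = d + 1
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    return None  # goal unreachable

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(dijkstra_grid(grid, (0, 0), (2, 0)))  # detours around the wall row
```

Sampling the workspace, as the paper proposes, reduces the set of nodes this search must expand while preserving the optimal path.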
The Internet of Things (IoT) has extended internet connectivity to reach not just computers and humans but most of the things in our environment. The IoT has the potential to connect billions of objects simultaneously, improving information sharing in ways that improve our lives. Although the benefits of the IoT are unlimited, many challenges face its adoption in the real world due to its centralized server/client model, for instance the scalability and security issues that arise from the excessive number of IoT objects in the network. The server/client model requires all devices to be connected and authenticated through the server, which creates a single point of failure. Therefore, moving the IoT system onto a decentralized path may be the right decision. One popular decentralization system is blockchain, a powerful technology that decentralizes computation and management processes and can solve many IoT issues, especially security. This paper provides an overview of the integration of blockchain with the IoT, highlighting the benefits and challenges of that integration. Future research directions for blockchain with IoT are also discussed. We conclude that the combination of blockchain and IoT can provide a powerful approach that can significantly pave the way for new business models and distributed applications.
Artificial neural networks have been used in different fields of artificial intelligence, and more specifically in machine learning. Although other machine learning options are feasible in most situations, the ease with which neural networks lend themselves to different problems, including pattern recognition, image compression, classification, computer vision, and regression, has earned them a remarkable place in the machine learning field. This research exploits neural networks as a data mining tool for predicting the number of times a student repeats a course, considering attributes relating to the course itself, the teacher, and the particular student. Neural networks were used to map the relationship between attributes related to students' course assessment and the number of times a student will likely repeat a course before passing. The hope is that the ability to predict students' performance from such complex relationships can help fine-tune academic systems and policies implemented in learning environments. To validate the power of neural networks in data mining, a database of Turkish students' performance was used; feedforward and radial basis function networks were trained for the task. The performance of these networks was evaluated in terms of achieved recognition rates and training time.
Non-functional requirements define the quality attributes of a software application, and it is necessary to identify them in the early stages of the software development life cycle. Researchers have proposed automatic classification of software non-functional requirements using several Machine Learning (ML) algorithms combined with various vectorization techniques; however, which combination works best for non-functional requirement classification remains unclear. In this paper, we examined whether different combinations of feature extraction techniques and ML algorithms vary in non-functional requirements classification performance, and we report the best approach for classifying non-functional requirements. We conducted a comparative analysis on the publicly available PROMISE_exp dataset, which contains labelled functional and non-functional requirements. We first normalized the textual requirements from the dataset, then extracted features through Bag of Words (BoW), Term Frequency-Inverse Document Frequency (TF-IDF), Hashing, and Chi-Squared vectorization methods. Finally, we ran the 15 most popular ML algorithms to classify the requirements. The novelty of this work is the empirical analysis that identifies the best combination of ML classifier and vectorization technique, which helps developers detect non-functional requirements early and take precise steps. We found that the combination of a linear support vector classifier and TF-IDF outperforms all other combinations, with an F1-score of 81.5%.
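To make the winning vectorization step concrete, here is a minimal hand-rolled TF-IDF over a toy requirements corpus; in practice the pipeline would use a library implementation (e.g. scikit-learn's `TfidfVectorizer` feeding a linear support vector classifier), and the unsmoothed logarithmic IDF variant below is an assumption:

```python
import math

def tfidf(corpus):
    """corpus: list of token lists. Returns one {term: weight} dict per doc."""
    n_docs = len(corpus)
    df = {}                       # document frequency per term
    for doc in corpus:
        for term in set(doc):
            df[term] = df.get(term, 0) + 1
    vectors = []
    for doc in corpus:
        vec = {}
        for term in set(doc):
            tf = doc.count(term) / len(doc)      # term frequency
            idf = math.log(n_docs / df[term])    # inverse document frequency
            vec[term] = tf * idf
        vectors.append(vec)
    return vectors

docs = [["system", "shall", "encrypt", "data"],
        ["system", "shall", "respond", "fast"]]
vecs = tfidf(docs)
# "system" appears in every document, so its IDF (and weight) is zero,
# while rarer terms like "encrypt" receive positive weight.
print(vecs[0]["system"], round(vecs[0]["encrypt"], 3))
```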
Along with the growth of the Internet, social media usage has expanded drastically. As people share their opinions and ideas more frequently on the Internet and through various social media platforms, there has been a notable rise in the number of consumer phrases that contain sentiment data. Cyberbullying is reported to frequently lead to severe emotional and physical suffering, especially in women and young children; in certain instances, sufferers have even been reported to attempt suicide. The bully may occasionally attempt to destroy whatever evidence they believe exists, and even when the victim retains the evidence, justice can still be a long time coming. This work used OCR, NLP, and machine learning to design and implement a practical method for recognizing cyberbullying from images. Eight classifier techniques are compared for accuracy against two key feature representations, the BoW model and TF-IDF. These classifiers are used to understand and recognize bullying behaviours. Testing the suggested method on the cyberbullying dataset showed that linear SVC after OCR and logistic regression perform best, achieving an accuracy of 96 percent. This study helps provide a good outline that shapes methods for detecting online bullying from a screenshot, with design and implementation details.
Stock market prediction has become an attractive research topic due to its important role in the economy and its beneficial offers. There is an imminent need to uncover the stock market's future behavior in order to avoid investment risks. The large amount of data generated by the stock market is considered a treasure of knowledge for investors. This study aims to construct an effective model that predicts stock market future trends with a small error ratio and improved prediction accuracy. The prediction model is based on sentiment analysis of financial news and historical stock market prices, and provides better accuracy than previous studies by considering multiple types of news related to the market and the company along with historical stock prices. A dataset containing stock prices from three companies is used. The first step analyzes news sentiment to get the text polarity using the naïve Bayes algorithm; this step achieved prediction accuracies ranging from 72.73% to 86.21%. The second step combines news polarities and historical stock prices to predict future stock prices, improving the prediction accuracy up to 89.80%.
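The first step, scoring a headline's polarity with naïve Bayes, can be sketched with a tiny multinomial naïve Bayes classifier; the toy financial vocabulary and the Laplace smoothing choice are illustrative, not the paper's corpus or exact model:

```python
import math
from collections import Counter

def train_nb(docs):
    """docs: list of (tokens, label). Returns class counts, per-class term counts, vocab."""
    labels = Counter(label for _, label in docs)
    counts = {lbl: Counter() for lbl in labels}
    vocab = set()
    for tokens, label in docs:
        counts[label].update(tokens)
        vocab.update(tokens)
    return labels, counts, vocab

def classify(tokens, labels, counts, vocab):
    """Pick the label maximizing log prior + smoothed log likelihoods."""
    total = sum(labels.values())
    best, best_lp = None, float("-inf")
    for lbl, n in labels.items():
        lp = math.log(n / total)                          # log prior
        denom = sum(counts[lbl].values()) + len(vocab)
        for t in tokens:
            lp += math.log((counts[lbl][t] + 1) / denom)  # Laplace smoothing
        if lp > best_lp:
            best, best_lp = lbl, lp
    return best

train = [(["profits", "rise", "strong"], "pos"),
         (["record", "growth", "strong"], "pos"),
         (["shares", "fall", "losses"], "neg"),
         (["weak", "losses", "fall"], "neg")]
model = train_nb(train)
print(classify(["strong", "growth"], *model))
```

In the paper's pipeline, polarities produced this way are then joined with historical prices as features for the second-stage price predictor.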
Alzheimer's disease is an ailment of the mind that results in mental confusion, forgetfulness, and many other mental problems, and it affects a person's physical health too. When treating a patient with Alzheimer's disease, a proper diagnosis is crucial, especially in the earlier phases of the condition: when patients are informed of the risk of the disease, they can take preventative steps before irreparable brain damage occurs. The majority of machine detection techniques are constrained by congenital (present at birth) data; however, numerous recent studies have used computers for Alzheimer's disease diagnosis. The first stages of Alzheimer's disease can be diagnosed, but the illness itself cannot be predicted, since prediction is only helpful before it actually manifests. Alzheimer's has high-risk symptoms that affect both the physical and mental health of a patient, including confusion and concentration difficulties, so it becomes important to detect the disease in its early stages; early detection gives the patient a better chance of treatment and medication. Hence our research helps detect the disease in its early stages. Particularly when used with brain MRI scans, deep learning has emerged as a popular tool for the early identification of AD. Here we use a 12-layer CNN comprising four convolutional layers, two pooling layers, two flatten layers, one dense layer, and three activation functions. As CNNs are well known for pattern detection and image processing, our model achieves an accuracy of 97.80%.
The death rate from heart diseases increases yearly. One major factor behind this increase is misdiagnosis on the part of medical doctors or ignorance on the part of the patient. Heart disease can be described as any kind of disorder that affects the heart. This research work considers the causes of heart diseases, their complications, and their remedies, and implements an intelligent system that can diagnose heart diseases. Such a system helps prevent misdiagnosis, the major error medical doctors may make. The Statlog heart disease dataset, obtained from the UCI Machine Learning Repository, was used for this experiment. The dataset comprises attributes of patients diagnosed for heart disease, with the diagnosis confirming whether heart disease is present or absent in the patient. The dataset was divided into training, validation, and testing sets to be fed into the network. The intelligent system was modelled on a feedforward multilayer perceptron and a support vector machine, and the recognition rates obtained from these models were compared to ascertain the best model for the intelligent system, given its significance in the medical field. The results obtained are 85% for the feedforward multilayer perceptron and 87.5% for the support vector machine. From this experiment we conclude that the support vector machine is the better model for the diagnosis of heart disease.
The proliferation of Web-enabled devices, including desktops, laptops, tablets, and mobile phones, enables people to communicate, participate, and collaborate with each other in various Web communities such as forums, social networks, and blogs. Simultaneously, the enormous amount of heterogeneous data generated by the users of these communities offers an unprecedented opportunity to create and employ theories and technologies that search and retrieve relevant data from the huge quantity of information available and then mine it for opinions. Consequently, sentiment analysis, which automatically extracts and analyses the subjectivities and sentiments (or polarities) in written text, has emerged as an active area of research. This paper reviews the substantial research on the subject of sentiment analysis, expounding its basic terminology, tasks, and granularity levels. It further gives an overview of the state of the art, depicting some previous attempts to study sentiment analysis. Its practical and potential applications are also discussed, followed by the issues and challenges that will keep the field dynamic and lively for years to come.
The operational efficiency of water supply infrastructure has a direct impact on the quantity of potable water available to end users. It is commonplace to find water supply infrastructure in a declining operational state in rural and some urban centers in developing countries, where maintenance issues result in unabated wastage and shortage of supply to users. This work proposes a cost-effective solution to the problem of water distribution losses, using a microcontroller-based digital control method and Machine Learning (ML) to forecast and manage potable water production and system maintenance. The fundamental concept of hydrostatic pressure equilibrium was used for the detection and control of leakages from pipeline segments. Analysis of the collated data shows a direct linear relationship between water distribution loss and production quantity, and an inverse relationship between Mean Time Between Failure (MTBF) and yearly failure rates, the key problem factors affecting water supply efficiency and availability. Tests of the prototype system show a water supply efficiency of 99%, as distribution loss was reduced to 1% by the Line Control Unit (LCU) installed on the prototype pipeline. Hydrostatic pressure equilibrium, used as the logic criterion for leak detection and control, indeed proved potent for significant efficiency improvement in water supply infrastructure.
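The hydrostatic-equilibrium criterion can be illustrated with a minimal check: the expected static pressure at the base of a water column of height h is P = ρgh, and a sensor reading that deviates from that by more than a tolerance suggests a leak in the segment. The 5% threshold and the function interface below are assumptions for illustration, not the paper's LCU logic:

```python
RHO_WATER = 1000.0   # density of water, kg/m^3
G = 9.81             # gravitational acceleration, m/s^2

def expected_pressure(height_m):
    """Hydrostatic pressure (Pa) at the base of a water column of given height."""
    return RHO_WATER * G * height_m

def leak_suspected(height_m, measured_pa, tolerance=0.05):
    """Flag a pipeline segment if the measured pressure deviates from
    hydrostatic equilibrium by more than `tolerance` (fractional)."""
    expected = expected_pressure(height_m)
    return abs(measured_pa - expected) / expected > tolerance

# A 10 m head should read about 98.1 kPa; a 90 kPa reading suggests a leak.
print(leak_suspected(10.0, 98_100.0), leak_suspected(10.0, 90_000.0))
```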
Cyberbullying is an intentional act of harassment, along the complex domain of social media, utilizing information technology online. This research applied an unsupervised associative approach to text mining to automatically find cyberbullying words and patterns and to extract association rules from a collection of tweets based on the domain's frequent words. Furthermore, this research identifies the relationships between cyberbullying keywords and other cyberbullying words, thus generating knowledge discovery of different cyberbullying word patterns from unstructured tweets. The study revealed that the dominant frequent cyberbullying words concern intelligence, personality, and insults describing the behaviour and appearance of female victims, together with sex-related words that humiliate female victims. The results suggest that an unsupervised associative approach to text mining can be utilized to extract important information from unstructured text, and that applying association rules can help in recognizing the relationships and meaning between keywords and other words, thereby generating knowledge discovery from different unstructured text datasets.
This article presents a new approach to image recognition that combines the Conical Radon Transform (CRT) and Convolutional Neural Networks (CNNs).
To evaluate the performance of this approach on pattern recognition tasks, we built a Radon descriptor enhancing features extracted by linear, circular, and parabolic Radon transforms. The main idea is to explore the use of the Conical Radon Transform to define a robust image descriptor. Specifically, the Radon transform is first applied to the image; the extracted features are then combined with the image and fed as input into the convolutional layers. Experimental evaluation demonstrates that our descriptor, which joins feature extraction over different shapes with convolutional neural networks, achieves satisfactory results for describing images on publicly available datasets such as ETH80 and FLAVIA. Our proposed approach recognizes objects with an accuracy of 96% when tested on the ETH80 dataset, and yields accuracy competitive with state-of-the-art methods on the FLAVIA dataset, at 98%. We also carried out experiments on the GTSBR traffic signs dataset. In this work we investigate the use of simple CNN models to focus on the utility of our descriptor, and propose a new lightweight network for traffic signs that does not require a large number of parameters. The objective is to achieve optimal accuracy while reducing network parameters, so that the approach can be adopted in real-time applications. It classified traffic signs with a high accuracy of 99%.
This paper presents football match prediction using tree-based model algorithms (C5.0, Random Forest, and Extreme Gradient Boosting). A backward wrapper model was applied as the feature selection methodology to help select the features that best improve the accuracy of the model. This study used 10 seasons of football match history (2007/2008 - 2016/2017) in the English Premier League, with 15 initial features, to predict match results. With the tuning process, each model showed improvement in accuracy. The Random Forest algorithm generated the best accuracy at 68.55%, while the C5.0 algorithm had the lowest accuracy at 64.87%, and the Extreme Gradient Boosting algorithm produced an accuracy of 67.89%. From the output produced in this study, it is concluded that decision tree-based algorithms are not good enough at predicting football match results.
Determining resource requirements at airports, especially for ground services companies, is essential for successful future planning. Requirements are represented as a resource demand curve derived from the future flight schedule, from which staff schedules are created at the airport to cover the workload while ensuring the highest possible quality of service. Given the variety of service level agreements applied to flight services, which vary according to many flight features, assuming resources by hand makes planning difficult; for instance, flight position is not included in the future flight schedule, yet it is influential in identifying flight resources. In this regard, we propose a machine learning-based model for building a resource demand curve for future flight schedules. It is divided into two phases: the first uses machine learning to predict the service level agreement resources required by future flight schedules, and the second implements a resource allocation algorithm to build a demand curve based on the predicted resources. This proposal is applicable to airports and provides an efficient and realistic resource demand curve, ensuring that resource planning does not deviate from real-time resource requirements. The model demonstrated good accuracy when one day of flights was used to measure the deviation between the demand curve predicted by the proposed model, with flights lacking the location feature, and the actual demand curve with the location feature included.
Recently, health management systems have faced troubles such as insufficient sharing of medical data, security problems with shared information, and the tampering and leaking of private data, alongside data modelling probes and developing technology. Local learning is performed together with federated learning and a differential entropy method to prevent the leakage of confidential medical information, and blockchain-based learning is preferred in global learning to completely eliminate the possibility of leakage. Qualitative and quantitative analysis of information can be performed with information entropy technology for the effective and maximal use of medical data in the local learning process. The blockchain contributes its distributed network structure and inherent security features, and at the same time information is treated as a whole rather than as islands of data. Through this work, data sharing between medical systems can be encouraged, tampering with access records can be prevented, and medical research and definitive medical treatment can be better supported. An M/M/1 queue is used for the memory pool and an M/M/C queue to combine integrated blockchains with a unified learning structure. With the proposed model, the number of transactions per block, the mining of each block, learning time, index operations per second, the number of memory pools, waiting time in the memory pool, the number of unconfirmed transactions in the whole system, and the total number of transactions were examined.
Thanks to this study, protecting users' private medical information during the service process, and allowing patients to autonomously manage their own medical data, will benefit privacy protection within the scope of medical data sharing. Motivated by this, the proposed blockchain- and federated learning-based data management system can be developed further in future studies.
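For readers unfamiliar with the queueing notation, the standard M/M/1 steady-state formulas used to analyse the memory pool are sketched below (λ is the transaction arrival rate, μ the service rate, with stability requiring λ < μ); the numeric rates are illustrative only:

```python
def mm1_metrics(arrival_rate, service_rate):
    """Steady-state metrics for an M/M/1 queue (requires arrival_rate < service_rate)."""
    if arrival_rate >= service_rate:
        raise ValueError("queue is unstable: arrival rate must be < service rate")
    rho = arrival_rate / service_rate         # server utilization
    l = rho / (1 - rho)                       # mean number in system
    w = 1 / (service_rate - arrival_rate)     # mean time in system
    wq = rho / (service_rate - arrival_rate)  # mean waiting time in queue
    return {"utilization": rho, "in_system": l, "time_in_system": w, "wait": wq}

# e.g. 4 transactions/s arriving at a pool that serves 5 transactions/s
m = mm1_metrics(4.0, 5.0)
print(m["utilization"], m["time_in_system"])
```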
Alzheimer’s disease is an ailment of the mind that results in confusion, forgetfulness and many other mental problems, and it affects a person's physical health as well. When treating a patient with Alzheimer's disease, a proper diagnosis is crucial, especially in the earlier phases of the condition: when patients are informed of the risk of the disease, they can take preventative steps before irreparable brain damage occurs. The majority of machine detection techniques are constrained by congenital (present at birth) data; however, numerous recent studies have used computers for Alzheimer's disease diagnosis. The first stages of Alzheimer's disease can be diagnosed, but the illness itself cannot be predicted, since prediction is only helpful before it actually manifests. Alzheimer’s has high-risk symptoms that affect both the physical and mental health of a patient, including confusion and concentration difficulties, so it becomes important to detect the disease at its early stages, when the patient gets a better chance of treatment and medication. Hence, our research helps to detect the disease at its early stages. Particularly when used with brain MRI scans, deep learning has emerged as a popular tool for the early identification of AD. Here we use a 12-layer CNN comprising four convolutional layers, two pooling layers, two flatten layers, one dense layer and three activation functions. As CNNs are well known for pattern detection and image processing, the accuracy of our model reaches 97.80%.
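The operation performed by each of the four convolutional layers can be shown in a dependency-free sketch. The patch and kernel below are invented toy values; the paper's actual 12-layer network would stack such layers with pooling, flattening and a dense classifier.

```python
# A minimal 2D convolution (valid padding, stride 1) illustrating what a
# convolutional layer computes on an MRI patch; data here is invented.

def conv2d(image, kernel):
    """Slide `kernel` over `image` and return the resulting feature map."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = [[0] * out_w for _ in range(out_h)]
    for i in range(out_h):
        for j in range(out_w):
            out[i][j] = sum(
                image[i + a][j + b] * kernel[a][b]
                for a in range(kh) for b in range(kw)
            )
    return out

# A vertical-edge detector applied to a tiny toy "scan" patch:
patch = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
edge = [[1, -1],
        [1, -1]]
print(conv2d(patch, edge))  # -> [[0, -2, 0], [0, -2, 0]]
```

In a trained network the kernel weights are learned rather than hand-set, so each filter comes to respond to tissue patterns relevant to the diagnosis.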
Early diabetes diagnosis allows patients to begin treatment on time, reducing or eliminating the risk of serious consequences. In this paper, we propose the Neutrosophic-Adaptive Neuro-Fuzzy Inference System (N-ANFIS) for the classification of diabetes. It is an extension of the generic ANFIS model. Neutrosophic logic is capable of handling uncertain and imprecise information beyond the traditional fuzzy set. The suggested method begins by converting crisp values to neutrosophic sets using trapezoidal and triangular neutrosophic membership functions. These values are fed into an inference system, which maps the most affected value to a diagnosis. The result demonstrates that the suggested model has successfully dealt with vague information. For practical implementation, a single-valued neutrosophic number has been used; it is a special case of the neutrosophic set. To highlight the promising potential of the suggested technique, an experimental investigation of the well-known Pima Indian diabetes dataset is presented. The results of our trials show that the proposed technique attained a high degree of accuracy and produced a generic model capable of effectively classifying previously unknown data. It can also surpass some of the most advanced classification algorithms based on machine learning and fuzzy systems.
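The crisp-to-neutrosophic conversion step can be sketched as follows. The membership parameters, the fixed indeterminacy, and taking falsity as the complement of truth are all illustrative assumptions, not the paper's fitted values:

```python
# Sketch of converting a crisp reading into a single-valued neutrosophic
# number (T, I, F). All parameter values below are assumed for illustration.

def triangular(x, a, b, c):
    """Triangular membership: 0 at or outside [a, c], 1 at the peak b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def to_svnn(x, a, b, c, indeterminacy=0.1):
    """Wrap a crisp value as a single-valued neutrosophic number.
    Falsity is taken as the complement of truth here (an assumption)."""
    t = triangular(x, a, b, c)
    return (t, indeterminacy, 1.0 - t)

# A plasma-glucose reading of 140 against an assumed "high" set (110, 160, 200):
print(to_svnn(140, 110, 160, 200))  # truth 0.6, indeterminacy 0.1, falsity 0.4
```

Unlike an ordinary fuzzy membership degree, the triple carries an explicit indeterminacy component, which is what lets the inference system handle vague or conflicting readings.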
With the constant increase of data generated by stakeholders throughout a product's life cycle, companies tend to rely on project management tools for guidance. Project-oriented business intelligence approaches help the team communicate better, plan their next steps, maintain an overview of the current project state, and take concrete actions based on the provided forecasts. The spread of agile working mindsets is making these tools even more useful: they set a basic understanding of how the project should run, so that the implementation is easy to follow and easy to use.
In this paper, we offer a model that makes project management accessible from different software development tools and different data sources. Our model provides project data analysis to improve four aspects: (i) collaboration, which includes team communication and a team dashboard and also optimizes document sharing, deadlines and status updates; (ii) planning, which allows the tasks described in the software to be used and made visible, and involves tracking task time to reveal barriers that some members might be facing without reporting them; (iii) forecasting, to predict future results from behavioral data so that concrete measures can be taken; and (iv) documentation, involving reports that summarize all relevant project information, such as time spent on tasks and charts that track the status of the project. The experimental study carried out on various data collections, using both our model and the main models we studied in the literature, together with the analysis of the results we obtained, clearly shows the limits of the studied models and confirms the performance of our model and its efficiency in terms of precision, recall and robustness.
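The forecasting aspect (iii) can be illustrated with a minimal sketch: fitting a linear trend to per-day completed-task counts and projecting it forward. The data and function names are invented; the paper's model works on richer behavioral data.

```python
# Illustrative-only forecast of future task throughput from past behavior,
# via ordinary least squares on (day, tasks-completed) points.

def linear_forecast(history, steps_ahead):
    """Fit y = intercept + slope * day to the history and extrapolate
    `steps_ahead` days past the last observation."""
    n = len(history)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(history) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, history))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return intercept + slope * (n - 1 + steps_ahead)

done_per_day = [3, 5, 7, 9]              # tasks closed on days 0..3 (toy data)
print(linear_forecast(done_per_day, 2))  # -> 13.0
```

A dashboard built on such a projection lets the team see whether the current pace will meet a deadline and take concrete measures early.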
Stock market prediction has become an attractive research topic due to its important role in the economy and its potential benefits. There is an imminent need to uncover the stock market's future behavior in order to avoid investment risks. The large amount of data generated by the stock market is considered a treasure of knowledge for investors. This study aims at constructing an effective model to predict future stock market trends with a small error ratio and improved prediction accuracy. The prediction model is based on sentiment analysis of financial news and historical stock market prices, and it achieves better accuracy than previous studies by considering multiple types of news, related both to the market and to the company, together with historical stock prices. A dataset containing stock prices from three companies is used. The first step is to analyze news sentiment to obtain the text polarity using the naïve Bayes algorithm; this step achieved prediction accuracies ranging from 72.73% to 86.21%. The second step combines news polarities and historical stock prices to predict future stock prices, improving the prediction accuracy up to 89.80%.
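The first step above can be sketched as a tiny multinomial naïve Bayes polarity classifier. The headlines and labels below are invented for illustration; the study trains on real financial news.

```python
# Toy naive Bayes sentiment-polarity classifier (add-one smoothing, log
# probabilities). Training headlines are invented, not the study's data.
import math
from collections import Counter, defaultdict

def train(docs):
    """docs: list of (word_list, label). Returns priors, counts, vocabulary."""
    priors, counts, vocab = Counter(), defaultdict(Counter), set()
    for words, label in docs:
        priors[label] += 1
        counts[label].update(words)
        vocab.update(words)
    return priors, counts, vocab

def classify(words, priors, counts, vocab):
    """Pick the label maximising log P(label) + sum log P(word | label)."""
    total = sum(priors.values())
    best, best_lp = None, float("-inf")
    for label in priors:
        lp = math.log(priors[label] / total)
        denom = sum(counts[label].values()) + len(vocab)  # Laplace smoothing
        for w in words:
            lp += math.log((counts[label][w] + 1) / denom)
        if lp > best_lp:
            best, best_lp = label, lp
    return best

docs = [("shares surge on record profit".split(), "pos"),
        ("profit beats forecast shares rally".split(), "pos"),
        ("shares slump on weak earnings".split(), "neg"),
        ("losses widen shares tumble".split(), "neg")]
model = train(docs)
print(classify("profit rally".split(), *model))  # -> pos
```

In the second step, the polarity labels produced here would be joined with the historical price series as an extra feature for the final price predictor.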