IJIEEB Vol. 7, No. 2, Mar. 2015
REGULAR PAPERS
Many software development companies face the challenge of coupling agile models with global software development in distributed projects. The challenges relate to communication, management and control of the development process, and they arise because global development involves developers from different geographical locations and time zones and with different cultures. Coupling agile models with global software development seems impossible at first glance, as agile models require frequent communication, rapid development, and careful management of distributed teams' resources. Researchers have proposed several solutions and recommendations, including tailoring agile practices and adding non-agile practices; nevertheless, further effort is needed to activate and unify these solutions. This paper introduces a new web-based project management system for Scrum projects called Global Scrum Management. It is a web application that manages the Scrum process from the planning phase to the delivery of the last product increment. Moreover, the application offers social-network functionality to provide seamless communication, collaboration and knowledge transfer among distributed team members. Developers' actions on sprint tasks, including updating task status, are reflected instantly in the burndown charts of the sprint and the product backlog. Several diagrams, including UML diagrams, are provided in this paper to explain the solution.
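As a rough illustration of how task updates can drive a burndown chart of the kind the abstract mentions, the sketch below recomputes remaining sprint work whenever a task changes. The data structures and field names (Task, Sprint, estimate_hours, remaining_hours) are illustrative assumptions, not the paper's actual data model.

```python
# Minimal sketch: a sprint burndown series recomputed after task updates.
# Field names and the ideal-line calculation are illustrative assumptions.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Task:
    name: str
    estimate_hours: float
    remaining_hours: float  # updated by developers as work progresses


@dataclass
class Sprint:
    length_days: int
    tasks: List[Task] = field(default_factory=list)
    history: List[float] = field(default_factory=list)  # remaining work per recorded day

    def total_remaining(self) -> float:
        return sum(t.remaining_hours for t in self.tasks)

    def ideal_line(self) -> List[float]:
        """Straight line from the initial estimate down to zero."""
        total = sum(t.estimate_hours for t in self.tasks)
        step = total / self.length_days
        return [total - step * day for day in range(self.length_days + 1)]

    def record_day(self) -> None:
        """Refresh the chart data after task updates or at the end of a day."""
        self.history.append(self.total_remaining())


sprint = Sprint(length_days=10,
                tasks=[Task("login page", 16, 16), Task("REST API", 24, 24)])
sprint.tasks[0].remaining_hours = 8   # a developer updates a task's status
sprint.record_day()
print(sprint.ideal_line()[:3], sprint.history)
```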
Software quality measurement is a key factor in the development of any software system. Various software quality models have been devised to measure the performance of a software system; each consists of numerous quality parameters on the basis of which software is quantified. Several such models already exist, including the ISO/IEC 9126 quality model, Boehm's model and McCall's model. In this paper, an attempt has been made to increase the quality of a software system by introducing new quality parameters into the ISO/IEC 9126 model. Since the quality parameters are highly unpredictable in nature, a fuzzy multi-criteria approach is used to evaluate their performance.
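To make the idea of fuzzy multi-criteria evaluation concrete, the sketch below aggregates linguistic ratings of quality parameters as weighted triangular fuzzy numbers and then defuzzifies the result. The linguistic scale, weights and parameter names are invented for illustration; they are not the values used in the paper or defined by ISO/IEC 9126.

```python
# Sketch of a fuzzy multi-criteria aggregation over quality parameters.
# Scale, weights and parameter names are illustrative assumptions.

# Triangular fuzzy numbers (low, mid, high) for linguistic ratings.
SCALE = {
    "poor":      (0.00, 0.00, 0.25),
    "fair":      (0.00, 0.25, 0.50),
    "good":      (0.25, 0.50, 0.75),
    "very good": (0.50, 0.75, 1.00),
    "excellent": (0.75, 1.00, 1.00),
}

# Hypothetical expert ratings and weights per quality parameter.
ratings = {"reliability": "good", "usability": "very good", "portability": "fair"}
weights = {"reliability": 0.5, "usability": 0.3, "portability": 0.2}


def weighted_fuzzy_score(ratings, weights):
    """Weight each triangular number component-wise and sum the results."""
    total = [0.0, 0.0, 0.0]
    for param, label in ratings.items():
        for i, v in enumerate(SCALE[label]):
            total[i] += weights[param] * v
    return tuple(total)


def defuzzify(tfn):
    """Centroid of a triangular fuzzy number."""
    return sum(tfn) / 3.0


score = weighted_fuzzy_score(ratings, weights)
print(score, round(defuzzify(score), 3))
```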
Machine learning techniques are being applied in all areas of our lives to help us make decisions. A large number of algorithms is available for different purposes and for specific data types, so special attention is required when deciding which technique to recommend in each case. K Star is an instance-based learner designed to cope with missing values, smoothness problems, and both real-valued and symbolic attributes; however, little is known about how it handles attribute and class noise, or mixed attribute values in a dataset. In this paper we carried out six experiments with Weka to compare K Star with other important algorithms: Naïve Bayes, C4.5, Support Vector Machines and k-Nearest Neighbors, taking into account their performance when classifying datasets with those characteristics. As a result, K Star proved to be the best of them at dealing with noisy and imbalanced attributes.
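The sketch below illustrates the general shape of such an experiment: train several baseline classifiers on a dataset with injected attribute noise and compare cross-validated accuracy. K Star itself ships with Weka (weka.classifiers.lazy.KStar) and has no scikit-learn equivalent, so only stand-ins for the baseline algorithms are shown; the dataset, noise model and noise level are assumptions, not the paper's protocol.

```python
# Sketch of comparing classifiers on a dataset with injected attribute noise.
# Dataset, noise model and noise level are illustrative assumptions.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)

# Inject Gaussian attribute noise into roughly 20% of the cells.
rng = np.random.default_rng(0)
X_noisy = X.copy()
mask = rng.random(X.shape) < 0.20
X_noisy[mask] += rng.normal(0.0, X.std(axis=0).mean(), mask.sum())

models = {
    "Naive Bayes": GaussianNB(),
    "C4.5-like tree": DecisionTreeClassifier(criterion="entropy"),
    "SVM": SVC(),
    "k-NN": KNeighborsClassifier(n_neighbors=3),
}
for name, model in models.items():
    score = cross_val_score(model, X_noisy, y, cv=5).mean()
    print(f"{name}: {score:.3f}")
```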
The Web provides access to a substantial amount of information. Metadata, that is, data about data, enables the discovery of such information. When metadata is used effectively, it increases the usefulness of the original data or resource and facilitates resource discovery. The Resource Description Framework (RDF) is a basis for handling such metadata: a graph-based, self-describing data format that represents information about web-based resources. Many Semantic Web applications built on RDF need to store the data persistently in order to perform effective queries. Because storing and querying RDF data is difficult, several storage techniques have been proposed for these tasks. In this paper, we present the motivations for using the RDF data model. Several storage techniques are discussed, along with methods for optimizing queries over RDF datasets. We present the differences between relational database and XML technology, and we describe some use cases for RDF. Our findings shed light on current achievements in RDF research by comparing the storage and optimization methodologies proposed so far, thus identifying further research areas.
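As a minimal sketch of storing and querying RDF triples, the example below uses the rdflib library, which the paper does not name; the namespace and resource URIs are invented for illustration, and writing the graph to a Turtle file stands in for the persistent storage techniques the paper surveys.

```python
# Minimal sketch: build an RDF graph, persist it, and query it with SPARQL.
# rdflib and the example URIs are illustrative assumptions, not the
# storage systems surveyed in the paper.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, FOAF

EX = Namespace("http://example.org/")

g = Graph()
g.add((EX.alice, RDF.type, FOAF.Person))
g.add((EX.alice, FOAF.name, Literal("Alice")))
g.add((EX.alice, FOAF.knows, EX.bob))

# Persist the graph to disk (one simple storage option), then query it.
g.serialize(destination="people.ttl", format="turtle")

results = g.query(
    """
    SELECT ?name WHERE {
        ?person a foaf:Person ;
                foaf:name ?name .
    }
    """,
    initNs={"foaf": FOAF},
)
for row in results:
    print(row.name)
```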
The present system analyses snapshots of cursive and non-cursive font text images and yields customizable text files using optical character recognition technology. In previous versions, the authors discussed the user-training mechanism that introduces new non-cursive font styles and writing formats into the system and incorporates optimization, noise reduction and background detection modules. This system specifically focuses on enhancing the character recognition process by introducing a mechanism for handling simple cursive fonts.
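The paper describes its own recognition engine; purely to illustrate the snapshot-to-text pipeline (load, reduce noise by binarizing, recognize, write a text file), the sketch below uses the off-the-shelf Pillow and pytesseract stack as a stand-in, and the file names are placeholders.

```python
# Illustrative snapshot-to-text pipeline, not the paper's own engine.
# pytesseract is used only as a stand-in; file names are placeholders.
from PIL import Image
import pytesseract

image = Image.open("snapshot.png").convert("L")         # grayscale
binary = image.point(lambda p: 255 if p > 128 else 0)   # crude background/foreground split

text = pytesseract.image_to_string(binary)
with open("snapshot.txt", "w", encoding="utf-8") as out:
    out.write(text)
```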
The prolific growth of the Internet has led to native applications being replaced by web-based applications. The current trend in web applications is towards a fat-client architecture, which results in a large client-side codebase. Manual management of this huge code is tedious and time-consuming for developers. We present a technique to construct a dependency graph that provides an overview of the code, showing the inter-dependency of code elements. We perform a dynamic analysis to build the JavaScript call graph, thereby addressing the dynamic nature of JavaScript, and we then integrate HTML and CSS with the call graph to form the dependency graph. Because the HTML and CSS relations can be identified accurately, the quality of the dependency graph depends on the JavaScript call graph. Our evaluation of the JavaScript call graph on six web applications demonstrates that precision is high for large applications and relatively low for small applications, while recall is low for large applications and relatively higher for small ones.
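A toy sketch of the idea follows: dynamically observed JavaScript call edges are merged with statically identifiable HTML/CSS references into one dependency graph that can be traversed per element. The node naming scheme, the trace format and the merge rule are illustrative assumptions, not the authors' tool.

```python
# Toy dependency graph merging a dynamic JavaScript call trace with
# HTML/CSS references. Node names and trace format are assumptions.
from collections import defaultdict

# Call edges as they might be observed at runtime (caller -> callee).
js_call_trace = [
    ("app.js:init", "app.js:render"),
    ("app.js:render", "util.js:formatDate"),
]

# Statically identifiable HTML/CSS relations (element -> handler / style rule).
html_css_edges = [
    ("index.html:#submit", "app.js:init"),        # onclick handler
    ("index.html:#submit", "style.css:.button"),  # class reference
]

graph = defaultdict(set)
for src, dst in js_call_trace + html_css_edges:
    graph[src].add(dst)


def dependencies_of(node, graph, seen=None):
    """Transitively collect everything a node depends on."""
    seen = seen if seen is not None else set()
    for nxt in graph.get(node, ()):
        if nxt not in seen:
            seen.add(nxt)
            dependencies_of(nxt, graph, seen)
    return seen


print(sorted(dependencies_of("index.html:#submit", graph)))
```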
Accurate detection of orthologous proteins is a key aspect of comparative genomics. Orthologs in different species can be used to predict the function of uncharacterized genes from model organisms, as they retain the same biological function along the path of evolution. Orthologs can be inferred using phylogenetic, pairwise-similarity or synteny-based methods. The study described here presents a computational method for detecting the orthologs of a protein, using a phylogenetic-tree-based approach. A combination of the species overlap algorithm and patristic distances is used to detect the orthologs of a protein from a set of FASTA sequences, with patristic distances used to narrow the orthology predictions down to the closest orthologs. The approach gives considerably good accuracy and has high specificity and precision. A distance threshold allows the stringency of the predictions to be controlled, so that the required closeness between the protein of interest and its orthologs can be adjusted.
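The sketch below shows one way to rank candidate orthologs by patristic distance on a phylogenetic tree using Biopython, whose Tree.distance method sums branch lengths between two leaves. The tree file, the query name, the "species|gene" naming convention and the threshold are all assumptions, and the paper's species overlap step is reduced here to a crude same-species filter.

```python
# Sketch: rank candidate orthologs by patristic distance with Biopython.
# Tree file, query name, naming convention and threshold are assumptions;
# the species-overlap step is simplified to a same-species filter.
from Bio import Phylo

tree = Phylo.read("protein_family.nwk", "newick")   # placeholder tree file
query = "HUMAN|P53"                                  # placeholder leaf name
distance_threshold = 1.5                             # stringency knob

candidates = []
for leaf in tree.get_terminals():
    if leaf.name == query:
        continue
    # Skip leaves from the same species as the query.
    if leaf.name.split("|")[0] == query.split("|")[0]:
        continue
    d = tree.distance(query, leaf.name)              # patristic distance
    if d <= distance_threshold:
        candidates.append((d, leaf.name))

for d, name in sorted(candidates):
    print(f"{name}\t{d:.3f}")
```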
Dimensionality reduction is generally performed when high-dimensional data such as text are classified, either with feature extraction techniques or with feature selection techniques. This paper analyses which dimension reduction technique is better for classifying text data such as emails. Email classification is difficult because of its high-dimensional sparse features, which affect the generalization performance of classifiers. In phishing email detection, dimensionality reduction techniques are used to keep the most informative and discriminative features from a collection of emails, consisting of both phishing and legitimate messages, for better detection. Two feature selection techniques, Chi-Square and Information Gain Ratio, and two feature extraction techniques, Principal Component Analysis and Latent Semantic Analysis, are used in the analysis. It is found that the feature extraction techniques offer better classification performance, give stable classification results as the number of chosen features varies, and maintain that performance robustly over time.
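To contrast the two families of techniques, the sketch below puts a feature selection step (chi-square) and a feature extraction step (LSA via truncated SVD) into otherwise identical text classification pipelines. The toy corpus, labels and feature counts are invented for illustration, and Information Gain Ratio and PCA are omitted for brevity.

```python
# Sketch: feature selection (chi-square) vs. feature extraction (LSA) in
# otherwise identical pipelines. Corpus, labels and counts are assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "verify your account password immediately",
    "your invoice for last month is attached",
    "click this link to claim your prize now",
    "meeting rescheduled to friday afternoon",
]
labels = [1, 0, 1, 0]  # 1 = phishing, 0 = legitimate (toy labels)

selection = make_pipeline(TfidfVectorizer(),
                          SelectKBest(chi2, k=5),
                          LogisticRegression())
extraction = make_pipeline(TfidfVectorizer(),
                           TruncatedSVD(n_components=2),
                           LogisticRegression())

for name, pipe in [("chi-square selection", selection),
                   ("LSA extraction", extraction)]:
    pipe.fit(emails, labels)
    print(name, pipe.score(emails, labels))
```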