IJIEEB Vol. 10, No. 4, Jul. 2018
REGULAR PAPERS
This study examines metadata identification and analysis for document, audio, image, and video files. The process uses MapReduce and a Match Aggregate Pipeline to identify, classify, and categorize files. The inputs are FITS array results, processed in XML form. The work consists of the extraction process, identification and analysis, classification, and metadata information. The objective is to establish file information based on the volume, variety, veracity, and velocity criteria as part of the task-identification component of Self-Assignment Data Management. Testing covers all file types, with the number of files and the file sizes set according to the grouping. The results show a consistent pattern: the match-aggregate pipeline has a longer processing time than MapReduce at small block sizes (64 MB, 128 MB, and 256 MB), but once the block size is enlarged it becomes faster, at 1024 MB and 2048 MB. The contribution is that metadata processing for large files can be improved by tuning the block sizes used in the Match Aggregate Pipeline.
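For illustration, the classification step can be expressed as a MongoDB aggregation pipeline of the match-aggregate kind the paper benchmarks. The following is a minimal sketch, assuming FITS-derived metadata records have already been loaded into a collection; the database, collection, and field names (metadata_db, files, category, format, size_bytes) are hypothetical, not the paper's actual schema.

```python
# Minimal match-aggregate sketch over file-metadata documents.
# Collection and field names are illustrative assumptions.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
files = client["metadata_db"]["files"]  # hypothetical collection

# $match filters to the four media categories; $group aggregates a file
# count and total size per detected format -- the classification step.
pipeline = [
    {"$match": {"category": {"$in": ["document", "audio", "image", "video"]}}},
    {"$group": {
        "_id": {"category": "$category", "format": "$format"},
        "file_count": {"$sum": 1},
        "total_bytes": {"$sum": "$size_bytes"},
    }},
    {"$sort": {"file_count": -1}},
]

for row in files.aggregate(pipeline):
    print(row["_id"], row["file_count"], row["total_bytes"])
```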
As technology evolves, the design patterns of ICs are changing as well. IC design is now divided into two distinct fields: front-end design and back-end design. Front-end design covers HDL-based design (hardware description languages such as VHDL or Verilog) and the verification of those ICs, whereas back-end design concerns physical design techniques. Both stages require additional effort to optimize speed, shape, and size. This paper addresses area and power optimization in terms of logic utilization using the XST and Vivado tools. After applying area-optimization techniques (logic optimization, LUT mapping, resource sharing, etc.) to a previously designed asynchronous microprocessor used as the model for the proposed optimization, reasonable results in terms of power and area utilization are achieved.
This paper discusses the roles of communication and coordination (C&C) in agile teams. C&C are important activities that a project manager must handle tactically during software development to avoid adverse consequences, and their importance increases further in distributed software development (DSD). C&C are considered project drivers for completing a project successfully within budget and schedule. Several issues associated with poor C&C can cause software projects to fail, such as budget overruns, delivery delays, conflicts among team members, unclear project goals, and architectural, technical, and integration dependencies. C&C issues are critical even for collocated teams, but in distributed teams their presence can be disastrous. Scrum is one of the most widely practiced agile models and continues to gain popularity in the agile community. Therefore, a novel framework based on the Scrum methodology is proposed to address C&C issues. The proposed framework is validated through a questionnaire; the results support the conclusion that it helps resolve C&C issues effectively and efficiently.
In this paper, the authors propose a technique that uses an existing database of chess games and machine learning algorithms to predict game results. They also examine relationships among combinations of attributes such as half-moves, move sequence, chess-engine evaluation score, opening sequence, and game result. A database of 10,000 actual chess games, imported and processed using Shane's Chess Information Database (SCID), is annotated with an evaluation score for each half-move using the Stockfish chess engine running at a constant depth of 17, yielding a total of 840,289 board evaluations. The idea is to train a multivariate linear regression model on these evaluation scores for identical opening-move sequences and game outcomes, then use it to compute a winning score for each possible move for a side and suggest the move with the highest score. The output is also tested with move details included. Game attributes are likewise grouped into classes: using Naïve Bayes classification, positions are classified into three classes (move preferable to White, to Black, or a tie), and the model is validated on 20% of the dataset to determine accuracies for different combinations of the considered attributes.
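As an illustration of the two learning steps the abstract names, the sketch below trains a multivariate linear regression on evaluation-score features and a three-class Naïve Bayes classifier validated on a 20% hold-out. The feature matrix here is synthetic placeholder data; the real study derives its attributes (half-moves, move sequence, opening sequence, Stockfish depth-17 scores) from SCID game records.

```python
# Sketch: regression on engine evaluations plus a three-class Naive
# Bayes classifier with a 20% hold-out, mirroring the abstract's setup.
# Features and labels below are synthetic stand-ins, not real game data.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_games = 1000

# Placeholder features (e.g. summary statistics of per-half-move evals
# plus an encoded opening sequence).
X = rng.normal(size=(n_games, 4))
score = X @ np.array([0.8, 0.3, -0.2, 0.1]) + rng.normal(scale=0.1, size=n_games)
outcome = np.digitize(score, [-0.5, 0.5])  # 0 = Black, 1 = tie, 2 = White

X_tr, X_te, s_tr, s_te, y_tr, y_te = train_test_split(
    X, score, outcome, test_size=0.2, random_state=42)

# Multivariate linear regression: predict a side's winning score, so a
# move search can prefer the candidate move with the highest score.
reg = LinearRegression().fit(X_tr, s_tr)

# Naive Bayes: classify into White-preferable / Black-preferable / tie.
clf = GaussianNB().fit(X_tr, y_tr)

print("regression R^2:", reg.score(X_te, s_te))
print("NB accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```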
Recommender systems play an essential role in finding information on the World Wide Web. A recommender system handles the information-filtering problem and improves customer relations by providing better services. It suggests items or services to users according to their interests, navigation behavior, or demographic information. This paper surveys the different approaches available for recommender systems and presents a comparative analysis of different algorithms. Various application areas are then discussed, and the paper closes with the issues and challenges in recommender systems.
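To make one of the surveyed approach families concrete, here is a minimal user-based collaborative-filtering sketch: a user's rating for an unseen item is predicted as a similarity-weighted average of other users' ratings. The rating matrix is toy data; a production system would add mean-centering, sparsity handling, and scaling.

```python
# Minimal user-based collaborative filtering on a toy rating matrix.
import numpy as np

# Rows = users, columns = items; 0 means "not rated".
R = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [0, 1, 5, 4],
    [1, 0, 4, 5],
], dtype=float)

def cosine_sim(a, b):
    """Cosine similarity between two users' rating vectors."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return a @ b / denom if denom else 0.0

def predict(user, item):
    """Similarity-weighted average of other users' ratings for `item`."""
    others = [u for u in range(len(R)) if u != user]
    sims = np.array([cosine_sim(R[user], R[u]) for u in others])
    ratings = np.array([R[u, item] for u in others])
    mask = ratings > 0  # only users who actually rated the item
    return (sims[mask] @ ratings[mask]) / sims[mask].sum() if mask.any() else 0.0

print(predict(0, 2))  # predicted rating of item 2 for user 0
```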
As the world is digitized, data is overflowing from different sources in different formats at a speed that traditional systems cannot compute and analyze; this kind of big data is instead handled by big-data tools such as Hadoop, an open-source framework that stores and computes data in a distributed environment. In the last few years, developing big-data applications has become increasingly important; many organizations depend on knowledge extracted from huge amounts of data. However, traditional data techniques show reduced performance and accuracy, slow responsiveness, and a lack of scalability. Considerable work has been carried out to solve complicated big-data problems, and as a result various technologies have been developed. This work is a survey of recent optimization technologies and their applications developed for big data. It aims to help readers choose the right combination of big-data technologies according to their requirements.
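As a concrete illustration of the distributed compute model Hadoop provides, the following is a minimal Hadoop Streaming word-count sketch in Python; the jar name and HDFS paths in the comment are illustrative, not taken from the survey.

```python
# Minimal Hadoop Streaming word count: Hadoop distributes the mapper
# over input splits, shuffles/sorts by key, then runs the reducer.
# Illustrative invocation:
#   hadoop jar hadoop-streaming.jar \
#     -mapper "python3 wc.py map" -reducer "python3 wc.py reduce" \
#     -input /data -output /out
import sys
from itertools import groupby

def mapper():
    # Emit (word, 1) for every word on stdin.
    for line in sys.stdin:
        for word in line.split():
            print(f"{word}\t1")

def reducer():
    # Reducer input arrives sorted by key, so equal words are adjacent.
    pairs = (line.rstrip("\n").split("\t") for line in sys.stdin)
    for word, group in groupby(pairs, key=lambda kv: kv[0]):
        print(f"{word}\t{sum(int(count) for _, count in group)}")

if __name__ == "__main__":
    mapper() if sys.argv[1] == "map" else reducer()
```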