Workplace: SMVITM/CSE, Udupi, 576115, India
E-mail: prabhuswathi2@gmail.com
Research Interests: Autonomic Computing, Parallel Computing, Analysis of Algorithms
Biography
Ms. Swathi Prabhu is an Assistant Professor in the Dept. of Computer Science & Engineering, Shri Madhwa Vadiraja Institute of Technology & Management, Bantakal, Udupi. She received her M.Tech (Computer Engineering) from NMAMIT, Nitte. She has published several research papers in national and international conferences and journals. Her area of interest is Big Data analysis using Hadoop.
By Guru Prasad M S, Nagesh H R, Swathi Prabhu
DOI: https://doi.org/10.5815/ijisa.2017.01.08, Pub. Date: 8 Jan. 2017
Big Data arrives constantly and in huge volumes as business organizations grow rapidly, and these organizations are interested in extracting useful knowledge from the collected data. Frequent itemset mining of Big Data supports business decisions and helps provide high-quality service. Traditional frequent itemset mining algorithms are not effective on Big Data, as they lead to high computation times. Apache Hadoop MapReduce is the most popular data-intensive distributed computing framework for large-scale data applications such as data mining. In this paper, the authors identify the factors affecting the performance of frequent itemset mining algorithms based on Hadoop MapReduce and propose an approach for optimizing the performance of large-scale frequent itemset mining. Experimental results show the potential of the proposed approach: performance is significantly improved for large-scale data mining with MapReduce. The authors believe the work is a valuable contribution to high-performance computing on Big Data.
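As background for the MapReduce-based mining described in the abstract, the sketch below shows the first pass of frequent-item counting as a Hadoop job: mappers emit (item, 1) pairs for each transaction, and the reducer sums the counts and keeps items meeting a minimum support threshold. This is a minimal illustration, not the paper's optimized approach; the class names, the comma-separated transaction format, and the "min.support" property name are assumptions made for this sketch.

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class FrequentItemCount {

    // Mapper: each input line is one comma-separated transaction (assumed format);
    // emit (item, 1) for every item in the transaction.
    public static class ItemMapper extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text item = new Text();

        @Override
        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString(), ",");
            while (itr.hasMoreTokens()) {
                item.set(itr.nextToken().trim());
                context.write(item, ONE);
            }
        }
    }

    // Reducer: sum the partial counts and keep only items whose total meets the
    // minimum support. (A combiner could pre-sum counts, but it must not apply
    // the support filter, or items with scattered partial counts would be lost.)
    public static class SupportReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int minSupport = context.getConfiguration().getInt("min.support", 2);
            int sum = 0;
            for (IntWritable v : values) {
                sum += v.get();
            }
            if (sum >= minSupport) {
                context.write(key, new IntWritable(sum));
            }
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.setInt("min.support", 2); // "min.support" is a made-up property name
        Job job = Job.getInstance(conf, "frequent item count");
        job.setJarByClass(FrequentItemCount.class);
        job.setMapperClass(ItemMapper.class);
        job.setReducerClass(SupportReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

Counting frequent single items in one pass like this is the natural fit for MapReduce's shuffle-and-aggregate model; mining larger itemsets requires further passes or candidate generation, which is where the performance factors studied in the paper come into play.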
By Guru Prasad M S, Nagesh H R, Swathi Prabhu
DOI: https://doi.org/10.5815/ijmecs.2015.12.07, Pub. Date: 8 Dec. 2015
MapReduce is a programming model for processing large data sets, and Apache Hadoop, an implementation of MapReduce, was developed to process Big Data. Sharing a Hadoop cluster introduces several challenges, such as job scheduling, data locality, efficient and fair use of resources, and fault tolerance. Accordingly, we focus on Hadoop's job scheduling system in order to achieve efficiency. Schedulers are responsible for task assignment: when a user submits a job, it moves to a job queue, where it is divided into tasks and distributed to different nodes. Proper task assignment reduces job completion time and ensures better job performance. By default, Hadoop uses the FIFO scheduler. In our experiment, we compare the job execution times of the FIFO, Fair, and Capacity schedulers.
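For context on how the scheduler choice is configured in practice: the active scheduler is selected cluster-wide through the yarn.resourcemanager.scheduler.class property in yarn-site.xml (pointing at the FairScheduler or CapacityScheduler class), while an individual job can only choose which queue it is submitted to. The minimal driver below is a sketch of that per-job side; the "research" queue name is a hypothetical assumption, and the job relies on Hadoop's identity map/reduce defaults simply to have something to schedule.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class SchedulerQueueDemo {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // The scheduler itself is a cluster-level setting in yarn-site.xml:
        //   yarn.resourcemanager.scheduler.class can point at
        //   org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler
        //   or ...scheduler.capacity.CapacityScheduler.
        // Per job, we can only pick the target queue; "research" is a
        // hypothetical queue name assumed to exist on the cluster.
        conf.set("mapreduce.job.queuename", "research");

        // No mapper/reducer is set, so Hadoop's identity map/reduce defaults
        // apply: the job copies its input to its output, which is enough to
        // observe queue placement and completion time under each scheduler.
        Job job = Job.getInstance(conf, "scheduler queue demo");
        job.setJarByClass(SchedulerQueueDemo.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

Running the same job against a cluster configured with the FIFO, Fair, and Capacity schedulers in turn is one simple way to reproduce the kind of execution-time comparison the abstract describes.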