Simplified MapReduce Mechanism for Large-Scale Data Processing
MapReduce has become a popular programming model for processing large-scale data sets in a parallel, distributed manner on a cluster. Hadoop MapReduce is particularly well suited to large-scale workloads such as big data processing. In this paper, we modify the Hadoop MapReduce algorithm and implement the modification to reduce processing time.
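To make the programming model concrete, the following is a minimal, single-machine sketch of the MapReduce paradigm (word count, the canonical example); it illustrates the map, shuffle, and reduce phases only and is not the Hadoop implementation or the modified algorithm proposed in this paper:

```python
from collections import defaultdict

def map_phase(documents):
    # Map: emit (word, 1) pairs from each input document
    for doc in documents:
        for word in doc.split():
            yield (word, 1)

def shuffle(pairs):
    # Shuffle: group intermediate values by key,
    # mirroring Hadoop's shuffle-and-sort stage
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    # Reduce: aggregate the values for each key
    return {word: sum(counts) for word, counts in grouped.items()}

docs = ["map reduce map", "reduce cluster"]
counts = reduce_phase(shuffle(map_phase(docs)))
# counts == {"map": 2, "reduce": 2, "cluster": 1}
```

In a real Hadoop deployment, the map and reduce functions run as distributed tasks across the cluster, and the framework handles shuffling, fault tolerance, and data locality.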
MapReduce; Large Scale Data; Hadoop; Simplified Algorithm; Performance Analysis
Md Tahsir Ahmed Munna, Shaikh Muhammad Allayear, Mirza Mohtashim Alam, Sheikh Shah Mohammad Motiur Rahman, Md Samadur Rahman, M Mesbahuddin Sarker