Vinod Kumar Vavilapalli

The initial design of Apache Hadoop [1] was tightly focused on running massive MapReduce jobs to process a web crawl. For an increasingly diverse set of companies, Hadoop has become the data and computational agorá, the de facto place where data and computational resources are shared and accessed. This broad adoption and ubiquitous usage has stretched…
Data volumes grow larger day by day, and many organizations need to analyze and process this gigantic data, the big data problem these organizations frequently face. A single machine cannot handle data at that scale, so we use the Apache Hadoop Distributed File System (HDFS) for storage and analysis. This paper shows…
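As a rough illustration of the kind of HDFS usage this abstract describes, the sketch below writes a file into the distributed file system and reads it back through Hadoop's standard FileSystem client API. The file path and contents are hypothetical, and the cluster address is assumed to come from the client's core-site.xml; this is a minimal sketch, not the paper's actual pipeline.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsExample {
    public static void main(String[] args) throws Exception {
        // Picks up fs.defaultFS (the NameNode address) from core-site.xml
        // on the classpath; assumed to point at a running HDFS cluster.
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        Path file = new Path("/user/demo/sample.txt"); // hypothetical path

        // Write a small file; HDFS splits it into blocks and replicates
        // them across DataNodes for fault tolerance.
        try (FSDataOutputStream out = fs.create(file, true)) {
            out.write("hello from hdfs\n".getBytes(StandardCharsets.UTF_8));
        }

        // Read the file back through the same FileSystem abstraction.
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(fs.open(file), StandardCharsets.UTF_8))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);
            }
        }

        fs.close();
    }
}
```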