Yunsu Lee

The Apache Spark framework, an implementation of Resilient Distributed Datasets (RDDs), is used in place of MapReduce in recent data processing models of the Hadoop ecosystem. In this paper, we evaluated the performance and resource usage of real-world workloads on scale-up and scale-out clusters using the in-memory caching feature of the Spark framework. In our …
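To illustrate the in-memory caching feature the abstract refers to, below is a minimal sketch using Spark's Scala RDD API. The input path and the word-count workload are illustrative assumptions, not the paper's actual benchmarks; the point is only that cache() lets a second action reuse materialized partitions instead of recomputing them from storage.

    import org.apache.spark.{SparkConf, SparkContext}

    object CacheDemo {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf().setAppName("CacheDemo").setMaster("local[*]")
        val sc = new SparkContext(conf)

        // Hypothetical input path; any HDFS or local text file works.
        val lines = sc.textFile("hdfs:///data/input.txt")

        // cache() marks the RDD for in-memory storage: the first action
        // materializes the partitions, later actions read them from memory.
        val words = lines.flatMap(_.split("\\s+")).cache()

        val total    = words.count()            // computes and caches
        val distinct = words.distinct().count() // reuses cached partitions

        println(s"total=$total distinct=$distinct")
        sc.stop()
      }
    }

Without the cache() call, the second action would re-read and re-split the input, which is the recomputation cost that in-memory caching is meant to avoid in iterative and multi-pass workloads.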