Storage and Memory Characterization of Data Intensive Workloads for Bare Metal Cloud
@article{Makrani2018StorageAM,
  title={Storage and Memory Characterization of Data Intensive Workloads for Bare Metal Cloud},
  author={Hosein Mohammadi Makrani},
  journal={ArXiv},
  year={2018},
  volume={abs/1805.08332}
}
As the cost-per-byte of storage systems dramatically decreases, SSDs are finding their way into emerging cloud infrastructure. A similar trend is happening for the main memory subsystem, as advanced DRAM technologies with higher capacity, frequency, and channel counts are being deployed for cloud-scale solutions, especially for non-virtualized environments where cloud subscribers can exactly specify the configuration of the underlying hardware. Given the performance sensitivity of standard workloads to the…
5 Citations
Design and Implementation of a Multistage Image Compression and Reconstruction System Based on the Orthogonal Matching Pursuit Using FPGA
- Computer Science · 2019 14th International Conference on Computer Engineering and Systems (ICCES)
- 2019
A new methodology for image compression and reconstruction that enhances performance while reducing the bit data size is presented.
FPGA Implementation of an Image Compression and Reconstruction System for the Onboard Radar Using the Compressive Sensing
- Computer Science · 2019 14th International Conference on Computer Engineering and Systems (ICCES)
- 2019
A new methodology for image compression based on compressive sensing techniques is presented, along with its FPGA implementation and the required simulation.
Real-time FPGA implementation of SAR radar reconstruction system based on adaptive OMP compressive sensing
- Computer Science
- 2020
A new adaptive OMP algorithm to overcome the computational complexity of iterative algorithms is presented, which improves the probability of detection at lower SNRs and reduces both the computational operations and the number of required iterations.
Compressive Sensing on Storage Data: An Effective Solution to Alleviate I/O Bottleneck in Data-Intensive Workloads
- Computer Science · 2018 IEEE 29th International Conference on Application-specific Systems, Architectures and Processors (ASAP)
- 2018
By using Compressive Sensing (CS), a lossy data compression method, the bottleneck is lifted from the storage, increasing the bandwidth utilization of the memory to gain further performance improvement from a high-end memory.
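Several of the citing works above build on Orthogonal Matching Pursuit (OMP) for compressive-sensing reconstruction. As a rough illustration of that recovery step, here is a minimal NumPy sketch of OMP; the function name, dimensions, and test data are illustrative assumptions, not taken from any of the papers.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: recover a k-sparse x from measurements y ≈ A @ x.

    A : (m, n) sensing matrix with (ideally unit-norm) columns
    y : (m,) measurement vector
    k : assumed sparsity level of the signal
    """
    m, n = A.shape
    residual = y.copy()
    support = []          # indices of columns selected so far
    x = np.zeros(n)
    for _ in range(k):
        # Greedy step: pick the column most correlated with the residual.
        idx = int(np.argmax(np.abs(A.T @ residual)))
        if idx not in support:
            support.append(idx)
        # Least-squares fit of y on the currently selected columns.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

# Hypothetical usage: a 3-sparse signal in 100 dimensions, 50 random measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100))
A /= np.linalg.norm(A, axis=0)    # normalize columns
x_true = np.zeros(100)
x_true[[3, 40, 77]] = [1.5, -2.0, 0.7]
x_hat = omp(A, A @ x_true, k=3)
```

In the storage-data setting described above, the compression side only stores the low-dimensional measurements `y`; a reconstruction routine of this shape runs on the consumer side, trading reconstruction compute for reduced I/O.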
References
Showing 1–10 of 42 references
MeNa: A memory navigator for modern hardware in a scale-out environment
- Computer Science · 2017 IEEE International Symposium on Workload Characterization (IISWC)
- 2017
MeNa, a predictive model and methodology for maximizing the performance/cost ratio of scale-out applications running in cloud environments, is proposed; it is shown how MeNa can be effectively leveraged by server designers to find architectural insights, or by subscribers to allocate just enough budget to maximize the performance of their applications in the cloud.
Memory system characterization of big data workloads
- Computer Science · 2013 IEEE International Conference on Big Data
- 2013
This paper develops an analysis methodology to understand how conventional optimizations such as caching, prediction, and prefetching may apply to Hadoop and noSQL big data workloads, and discusses the implications on software and system design.
Understanding the role of memory subsystem on performance and energy-efficiency of Hadoop applications
- Computer Science · 2017 Eighth International Green and Sustainable Computing Conference (IGSC)
- 2017
The experimental results showed that DRAM frequency and the number of channels do not play a significant role in the performance of Hadoop workloads, and indicated that increasing the number of DRAM channels reduces DRAM power and improves the energy efficiency of Hadoop MapReduce applications.
Main-Memory Requirements of Big Data Applications on Commodity Server Platform
- Computer Science · 2018 18th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGRID)
- 2018
The results reveal that iterative tasks in Spark and MPI benefit from high-bandwidth, large-capacity memory, while neither DRAM capacity, frequency, nor the number of channels plays a critical role in the performance of all studied Hadoop applications and most studied Spark applications.
Memory requirements of hadoop, spark, and MPI based big data applications on commodity server class architectures
- Computer Science · 2017 IEEE International Symposium on Workload Characterization (IISWC)
- 2017
Empirical analysis of different memory configurations available on commodity hardware indicates that increasing the number of DRAM channels reduces DRAM power and improves the energy-efficiency across all three frameworks.
Quantifying the Performance Impact of Memory Latency and Bandwidth for Big Data Workloads
- Computer Science · 2015 IEEE International Symposium on Workload Characterization
- 2015
This work presents straightforward analytic equations to quantify the impact of memory bandwidth and latency on workload performance, and demonstrates how the values of the components of these equations can be used to classify different workloads according to their inherent bandwidth requirement and latency sensitivity.
VENU: Orchestrating SSDs in hadoop storage
- Computer Science · 2014 IEEE International Conference on Big Data (Big Data)
- 2014
VENU is presented, a dynamic data management system for Hadoop that aims to improve overall I/O throughput via effective use of SSDs as a cache for the slower HDDs, not for all data, but for only the workloads that are expected to benefit from SSDs.
Efficient virtual memory for big memory servers
- Computer Science · ISCA
- 2013
This work proposes mapping part of a process's linear virtual address space with a direct segment, while page mapping the rest of the virtual address space, to remove the TLB miss overhead for big-memory workloads.
Clearing the clouds: a study of emerging scale-out workloads on modern hardware
- Computer Science · ASPLOS XVII
- 2012
This work identifies the key micro-architectural needs of scale-out workloads, calling for a change in the trajectory of server processors that would lead to improved computational density and power efficiency in data centers.
Characterizing Hadoop applications on microservers for performance and energy efficiency optimizations
- Computer Science · 2016 IEEE International Symposium on Performance Analysis of Systems and Software (ISPASS)
- 2016
Through methodical investigation of performance and power measurements, this work demonstrates how the interplay among various Hadoop configurations and system- and architecture-level parameters affects the performance and energy efficiency of various Hadoop applications.