Corpus ID: 199552217

uPredict: A User-Level Profiler-Based Predictive Framework for Single VM Applications in Multi-Tenant Clouds

@article{Moradi2019uPredictAU,
  title={uPredict: A User-Level Profiler-Based Predictive Framework for Single VM Applications in Multi-Tenant Clouds},
  author={Hamidreza Moradi and Wei Wang and Amanda Fernandez and Dakai Zhu},
  journal={ArXiv},
  year={2019},
  volume={abs/1908.04491}
}
Most existing studies on performance prediction for virtual machines (VMs) in multi-tenant clouds operate at the system level and generally require access to performance counters in hypervisors. In this work, we propose uPredict, a user-level profiler-based performance predictive framework for single-VM applications in multi-tenant clouds. Three micro-benchmarks are specially devised to assess the contention of CPUs, memory, and disks in a VM, respectively. Based on measured performance of an…
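The excerpt does not detail how uPredict's micro-benchmarks work internally, but the idea of purely user-level contention probes can be illustrated with a minimal sketch. The probe names and parameters below (`cpu_probe`, `mem_probe`, `disk_probe`, buffer and file sizes) are hypothetical, not taken from the paper: each probe stresses one resource, and its wall-clock time serves as a rough proxy for contention on that resource inside the VM.

```python
import os
import tempfile
import time


def time_it(fn, *args):
    """Return wall-clock seconds taken by fn(*args)."""
    start = time.perf_counter()
    fn(*args)
    return time.perf_counter() - start


def cpu_probe(n=200_000):
    # Tight arithmetic loop: slowdowns suggest CPU contention.
    acc = 0
    for i in range(n):
        acc += i * i
    return acc


def mem_probe(size_mb=64):
    # Touch one byte per 4 KiB page of a large buffer to
    # stress memory bandwidth rather than the caches.
    buf = bytearray(size_mb * 1024 * 1024)
    for i in range(0, len(buf), 4096):
        buf[i] = 1


def disk_probe(size_mb=16):
    # Write and fsync a temporary file to exercise disk I/O.
    with tempfile.NamedTemporaryFile() as f:
        f.write(os.urandom(size_mb * 1024 * 1024))
        f.flush()
        os.fsync(f.fileno())


# One profiling pass: three timings that characterize the
# current contention level of the VM's CPU, memory, and disk.
profile = {
    "cpu_s": time_it(cpu_probe),
    "mem_s": time_it(mem_probe),
    "disk_s": time_it(disk_probe),
}
print(profile)
```

In a framework like uPredict, such timings would be collected periodically inside the VM and fed, together with the application's measured performance, into a regression model; no hypervisor counters are needed, which is what makes the approach user-level.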
Citations

uPredict: A User-Level Profiler-Based Predictive Framework in Multi-Tenant Clouds
uPredict, a user-level profiler-based performance predictive framework for single-VM applications in multi-tenant clouds, is proposed, along with a smart load-balancing scheme powered by uPredict that can effectively reduce the execution and turnaround times of the considered application by 19% and 10%, respectively.
DiHi: Distributed and Hierarchical Performance Modeling of Multi-VM Cloud Running Applications
  • Hamidreza Moradi, Wei Wang, Dakai Zhu
  • Computer Science
  • 2020 IEEE 22nd International Conference on High Performance Computing and Communications; IEEE 18th International Conference on Smart City; IEEE 6th International Conference on Data Science and Systems (HPCC/SmartCity/DSS)
  • 2020
Experimental results show that the distributed and hierarchical frameworks predict overall application performance with accuracy comparable to the monolithic framework, with an average prediction error of 5% across different cluster sizes and clouds.
Cross-Domain Workloads Performance Prediction via Runtime Metrics Transferring
TLDR
This paper synthetically investigates the similarity and difference between a set of workloads and the possibility to transfer between them, and argues that if the runtime metrics data could be transferred before being fed to the prediction model, the knowledge learned in one workload may be reused to other workloads. Expand
Identifying Incident Causal Factors to Improve Aviation Transportation Safety: Proposing a Deep Learning Approach
This paper constructs deep-learning-based models to identify causal factors from incident reports, building and training an attention-based long short-term memory (LSTM) model to identify primary and contributing factors in each incident report.

References

Showing 1-10 of 59 references
Selecting the best VM across multiple public clouds: a data-driven performance modeling approach
This paper presents PARIS, a data-driven system that uses a novel hybrid offline and online data collection and modeling framework to provide accurate performance estimates with minimal data collection, and reduces runtime prediction error by a factor of 4 for some workloads on both AWS and Azure.
Estimating Cloud Application Performance Based on Micro-Benchmark Profiling
A cloud benchmarking methodology is developed that uses micro-benchmarks to profile applications and subsequently predicts how an application performs on a wide range of cloud services, highlighting that only selected micro-benchmarks are relevant to estimating the performance of a particular application.
Application Execution Time Prediction for Effective CPU Provisioning in Virtualization Environment
NICBLE models the execution of an application workload and employs a simulation-based algorithm to predict the impact of a hypothetical change in a VM's number of CPUs on application execution time.
Cloud Performance Modeling with Benchmark Evaluation of Elastic Scaling Strategies
It is demonstrated that the proposed cloud performance models are also applicable to evaluating PaaS, SaaS, and hybrid clouds, and it is found that auto-scaling is easy to implement but tends to over-provision resources.
More for your money: exploiting performance heterogeneity in public clouds
This work confirms the (oft-reported) performance differences between supposedly identical instances, identifies fruitful targets for placement gaming such as CPU, network, and storage performance, and develops a formal model for placement strategies, evaluating potential strategies via simulation.
Flexible VM Provisioning for Time-Sensitive Applications with Multiple Execution Options
The results show that the proposed MEO-aware algorithms outperform state-of-the-art schemes that consider only a single execution option per request, serving up to 38% more requests and achieving up to 27% more benefit.
DeepDive: Transparently Identifying and Managing Performance Interference in Virtualized Environments
DeepDive transparently identifies and manages performance interference between virtual machines co-located on the same physical machine in Infrastructure-as-a-Service cloud environments, addressing several important challenges, including the lack of performance information from applications and the large overhead of detailed interference analysis.
Predicting Cloud Performance for HPC Applications: A User-Oriented Approach
The proposed prediction model is validated on a cloud system implemented with OpenStack, yielding a relative error below 15%; the Pareto-optimal cloud configurations found when maximizing application speed and minimizing execution cost on the prediction model are at most 15% away from the actual optimal solutions.
Runtime measurements in the cloud
A study of the performance variance of the most widely used cloud infrastructure (Amazon EC2) from different perspectives, using established micro-benchmarks to measure variance in CPU, I/O, and network performance, and a multi-node MapReduce application to quantify the impact on real data-intensive applications.
CherryPick: Adaptively Unearthing the Best Cloud Configurations for Big Data Analytics
CherryPick is a system that leverages Bayesian Optimization to build performance models for various applications; the models are just accurate enough to distinguish the best or close-to-the-best configuration from the rest with only a few test runs.