Unicorn: reasoning about configurable system performance through the lens of causality

@article{Iqbal2022UnicornRA,
  title={Unicorn: reasoning about configurable system performance through the lens of causality},
  author={Md Shahriar Iqbal and Rahul Krishna and Mohammad Ali Javidian and Baishakhi Ray and Pooyan Jamshidi},
  journal={Proceedings of the Seventeenth European Conference on Computer Systems},
  year={2022}
}
Modern computer systems are highly configurable, with a total variability space that is sometimes larger than the number of atoms in the universe. Understanding and reasoning about the performance behavior of highly configurable systems over such a vast and variable space is challenging. State-of-the-art methods for performance modeling and analysis rely on predictive machine-learning models; therefore, they become (i) unreliable in unseen environments (e.g., different hardware, workloads), and (ii) may…
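To make the causal framing concrete, below is a minimal, self-contained sketch (not Unicorn's implementation) of how a causal, rather than purely predictive, view of configuration data can change a performance conclusion. It estimates the effect of a single hypothetical option (cache_on) on latency by adjusting for a workload confounder on synthetic data; all variable names and numbers are illustrative assumptions.

# Illustrative sketch only, not Unicorn's method: backdoor adjustment for the
# effect of one configuration option on latency, using synthetic data.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical data-generating process: the workload influences both whether
# caching is enabled and the observed latency, i.e., it is a confounder.
workload = rng.integers(0, 2, size=n)              # 0 = light, 1 = heavy
cache_on = rng.binomial(1, 0.3 + 0.5 * workload)   # heavy workloads enable caching more often
latency = 100 + 40 * workload - 15 * cache_on + rng.normal(0, 5, size=n)

# Naive (associational) estimate: compare mean latency with and without caching.
naive = latency[cache_on == 1].mean() - latency[cache_on == 0].mean()

# Backdoor adjustment: stratify on the confounder, then average the per-stratum
# differences weighted by the marginal distribution of the workload.
ace = 0.0
for w in (0, 1):
    mask = workload == w
    diff = (latency[mask & (cache_on == 1)].mean()
            - latency[mask & (cache_on == 0)].mean())
    ace += diff * mask.mean()

print(f"naive difference: {naive:.1f} ms, adjusted causal effect: {ace:.1f} ms")

On this synthetic data the naive difference understates the benefit of caching because caching is enabled more often under heavy workloads, while the adjusted estimate recovers the true effect of about -15 ms; the same confounding pattern is why purely predictive models can mislead when the environment (hardware, workload) changes.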
Performance Health Index for Complex Cyber Infrastructures
  • Sanjeev Sondur, K. Kant
  • ACM Transactions on Modeling and Performance Evaluation of Computing Systems
  • 2022
TLDR
This paper shows how CHI, defined as a configuration scoring system, can take advantage of domain knowledge and the available performance data to produce important insights into configuration settings, and compares CHI with both well-advertised segmented non-linear models and state-of-the-art data-driven models.
On Debugging the Performance of Configurable Software Systems: Developer Needs and Tailored Tool Support
TLDR
A human-centered approach is taken to identify, design, implement, and evaluate a solution to support developers in the process of debugging the performance of configurable software systems.

References

Showing 1–10 of 112 references
Transfer Learning for Performance Modeling of Configurable Systems: A Causal Analysis
TLDR
The causal analysis agrees with the previous exploratory analysis and confirms that the causal effects of configuration options can be carried over across environments with high confidence; this ability to carry over causal relations is expected to enable effective performance analysis of highly configurable systems.
Performance-influence models for highly configurable systems
TLDR
This work proposes an approach that derives a performance-influence model for a given configurable system, describing all relevant influences of configuration options and their interactions, and improves over standard techniques in that it smoothly integrates binary and numeric configuration options for the first time.
White-Box Analysis over Machine Learning: Modeling Performance of Configurable Systems
TLDR
This work presents Comprex, a white-box approach to building performance-influence models for configurable systems that combines insights from local measurements, dynamic taint analysis to track options in the implementation, compositionality, and compression of the configuration space, without relying on machine learning to extrapolate from incomplete samples.
EnCore: exploiting system environment and correlation information for misconfiguration detection
TLDR
EnCore is a framework and tool that automatically detects software misconfigurations by taking into account two important factors that previous work left unexploited: the interaction between configuration settings and the executing environment, and the rich correlations between configuration entries.
Generalizable and interpretable learning for configuration extrapolation
TLDR
GIL and GIL+ are evaluated by using them to configure Apache Spark workloads on different hardware platforms and are found to produce comparable, and sometimes even better, performance configurations while also yielding interpretable results.
Learning to sample: exploiting similarities across environments to learn performance models for configurable systems
TLDR
The approach, L2S (Learning to Sample), selects better samples in the target environment based on information from the source environment and outperforms state-of-the-art performance-learning and transfer-learning approaches in terms of measurement effort and learning accuracy.
Transfer Learning for Improving Model Predictions in Highly Configurable Software
TLDR
A cost model is defined that transforms the traditional view of model learning into a multi-objective problem, taking into account not only model accuracy but also measurement effort.
Transfer learning for performance modeling of configurable systems: An exploratory analysis
TLDR
An empirical study on four popular software systems, varying software configurations and environmental conditions, identifies the key knowledge pieces that can be exploited for transfer learning and shows that, for small environmental changes, applying a linear transformation to the performance model is sufficient to understand the performance behavior of the target environment.
TANDEM: A Taxonomy and a Dataset of Real-World Performance Bugs
TLDR
This paper proposes a taxonomy of performance bugs, based on a thorough systematic review of the related literature and divided into three main categories (effects, causes, and contexts of bugs), and provides a complete collection of fully documented real-world performance bugs.
BugDoc: A System for Debugging Computational Pipelines
TLDR
This demonstration illustrates BugDoc's capabilities to debug pipelines using few configuration instances and proposes a new approach that leverages provenance to automatically and iteratively infer root causes and derive succinct explanations of failures.
...