Let's Trace It: Fine-Grained Serverless Benchmarking using Synchronous and Asynchronous Orchestrated Applications

@article{Scheuner2022LetsTI,
  title={Let's Trace It: Fine-Grained Serverless Benchmarking using Synchronous and Asynchronous Orchestrated Applications},
  author={Joel Scheuner and Simon Eismann and Sacheendra Talluri and Erwin Van Eyk and Cristina L. Abad and Philipp Leitner and Alexandru Iosup},
  journal={ArXiv},
  year={2022},
  volume={abs/2205.07696}
}
Making serverless computing widely applicable requires detailed performance understanding. Although contemporary benchmarking approaches exist, they report only coarse results, do not apply distributed tracing, do not consider asynchronous applications, and provide limited capabilities for (root cause) analysis. Addressing this gap, we design and implement ServiBench, a serverless benchmarking suite. ServiBench (i) leverages synchronous and asynchronous serverless applications representative of… 
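The abstract emphasizes distributed tracing and detailed latency analysis. As a rough, hypothetical sketch (not ServiBench's actual implementation), the Python snippet below illustrates how per-component latency could be attributed from the spans of a single invocation trace; the Span fields and component names are assumptions for illustration only.

from dataclasses import dataclass

@dataclass
class Span:
    name: str        # hypothetical component name, e.g. "http_trigger", "cold_start"
    start_ms: float  # span start time, relative to trace start
    end_ms: float    # span end time, relative to trace start

def latency_breakdown(spans: list[Span]) -> dict[str, float]:
    # Attribute end-to-end latency to individual trace components.
    # Assumes sequential, non-overlapping spans; real traces (especially from
    # asynchronous applications) need parent/child links and overlap handling.
    total = max(s.end_ms for s in spans) - min(s.start_ms for s in spans)
    breakdown = {s.name: s.end_ms - s.start_ms for s in spans}
    breakdown["unattributed"] = total - sum(breakdown.values())
    return breakdown

# Synthetic cold invocation: most of the end-to-end latency is the cold start.
trace = [
    Span("http_trigger", 0.0, 12.0),
    Span("cold_start", 12.0, 350.0),
    Span("function_execution", 350.0, 410.0),
]
print(latency_breakdown(trace))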

References

SHOWING 1-10 OF 86 REFERENCES

Benchmarking, analysis, and optimization of serverless function snapshots

TLDR
This work introduces vHive, an open-source framework for serverless experimentation with the goal of enabling researchers to study and innovate across the entire serverless stack.

Implications of Programming Language Selection for Serverless Data Processing Pipelines

  • R. Cordingly, Hanfei Yu, W. Lloyd
  • Computer Science
    2020 IEEE Intl Conf on Dependable, Autonomic and Secure Computing, Intl Conf on Pervasive Intelligence and Computing, Intl Conf on Cloud and Big Data Computing, Intl Conf on Cyber Science and Technology Congress (DASC/PiCom/CBDCom/CyberSciTech)
  • 2020
TLDR
It is found that no single language provides the best performance for every stage of a data processing pipeline, and that the fastest pipeline can be obtained by combining a mix of languages to optimize performance.

SeBS: a serverless benchmark suite for function-as-a-service computing

TLDR
The Serverless Benchmark Suite is proposed: the first benchmark for FaaS computing that systematically covers a wide spectrum of cloud resources and applications and delivers a standardized, reliable, and evolving evaluation methodology for the performance, efficiency, scalability, and reliability of FaaS platforms.

Sequoia: enabling quality-of-service in serverless computing

TLDR
Results with controlled and realistic workloads show Sequoia seamlessly adapts to policies, eliminates mid-chain drops, reduces queuing times by up to 6.4X, enforces tight chain-level fairness, and improves run-time performance up to 25X.

Improving Application Migration to Serverless Computing Platforms: Latency Mitigation with Keep-Alive Workloads

TLDR
This paper presents a case study migrating the Precipitation Runoff Modeling System (PRMS), a Java-based environmental modeling application, to the AWS Lambda serverless platform; it investigates the performance and cost implications of memory reservation size and evaluates scaling behavior under increasing concurrent workloads.

Characterizing serverless platforms with serverlessbench

TLDR
This paper proposes ServerlessBench, an open-source benchmark suite for characterizing serverless platforms, applies the suite to evaluate the most popular serverless computing platforms, including AWS Lambda, OpenWhisk, and Fn, and presents new serverless implications from the study.

Architectural Implications of Function-as-a-Service Computing

TLDR
FaaS containerization introduces up to a 20x slowdown compared to native execution, cold-start latency can exceed 10x a short function's execution time, branch mispredictions per kilo-instruction are 20x higher for short functions, memory bandwidth usage increases by 6x due to the invocation pattern, and IPC decreases by as much as 35% due to inter-function interference.

Facing the Unplanned Migration of Serverless Applications: A Study on Portability Problems, Solutions, and Dead Ends

TLDR
This work explores the challenges of migrating serverless, FaaS-based applications across cloud providers, categorizes the resulting problems, and discusses the feasibility of possible solutions.

Towards Latency Sensitive Cloud Native Applications: A Performance Study on AWS

TLDR
This paper addresses one of the most widely used and versatile cloud platforms, Amazon Web Services (AWS), and reveals the delay characteristics of key components and services that impact the overall performance of latency-sensitive applications.

Optimizing Latency Sensitive Applications for Amazon's Public Cloud Platform

TLDR
This paper proposes a novel mechanism to optimize the software "layout" using dynamic performance measurements on Amazon's AWS, and defines a combined performance and cost model for CaaS/FaaS (Container/Function-as-a-Service) platforms derived from a comprehensive performance analysis.
...