Corpus ID: 238198576

How Low Can You Go? Practical cold-start performance limits in FaaS

  • Yue Tan, David Liu, Nanqinqin Li, Amit A. Levy
  • Published 27 September 2021
  • Computer Science
  • ArXiv
Function-as-a-Service (FaaS) has recently emerged as a new cloud computing paradigm. It promises high utilization of data center resources by allocating them on demand at per-function-request granularity. High cold-start overheads, however, have limited FaaS systems' ability to realize that potential. Prior work has recognized that time redundancy exists across different cold invocations of a function and has proposed a variety of snapshot mechanisms that capture instantaneous execution state, allowing subsequent invocations to jump…
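The cold-versus-warm gap described in the abstract can be illustrated with a toy measurement (not from the paper): a "cold" invocation pays full runtime startup by spawning a fresh interpreter, while a "warm" one runs in an already-initialized process. A `python3` binary on the PATH is an assumption here.

```python
import subprocess
import time
import timeit

def handler():
    # Stand-in for a trivial function body.
    return sum(range(1000))

# Cold path: launch a fresh Python interpreter per request,
# analogous to a cold start that must initialize the runtime.
start = time.perf_counter()
subprocess.run(
    ["python3", "-c", "print(sum(range(1000)))"],
    capture_output=True,
    check=True,
)
cold = time.perf_counter() - start

# Warm path: the runtime is already up; only the function body runs.
warm = timeit.timeit(handler, number=1)

print(f"cold ~{cold * 1e3:.2f} ms, warm ~{warm * 1e6:.2f} us")
```

On typical hardware the cold path is orders of magnitude slower than the warm path; that gap, dominated by runtime initialization rather than the function body, is what the snapshotting techniques surveyed below aim to close.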

SEUSS: skip redundant paths to make serverless fast
This paper presents a system-level method for the rapid deployment and high-density caching of serverless functions in a FaaS environment; it can cache over 50,000 function instances in memory, compared to 3,000 with standard OS techniques.
Putting the "Micro" Back in Microservice
A novel design for providing "functions as a service" (FaaS) that attempts to be truly micro: cold launch times in microseconds, enabling even finer-grained resource accounting and supporting latency-critical applications.
Catalyzer: Sub-millisecond Startup for Serverless Computing with Initialization-less Booting
Fundamentally, Catalyzer removes initialization cost by reusing state, which enables general optimizations for diverse serverless functions and significantly reduces end-to-end latency for real-world workloads.
OSv - Optimizing the Operating System for Virtual Machines
Presents the design and implementation of OSv, a new guest operating system designed specifically for running a single application on a virtual machine in the cloud; it addresses the duplication issues by using a low-overhead, library-OS-like design.
Splinter: Bare-Metal Extensions for Multi-Tenant Low-Latency Storage
Splinter is designed for modern multi-tenant data centers: it lets mutually distrusting tenants write their own fine-grained extensions and push them to the store at runtime, making granular storage functions that perform less than a microsecond of compute practical.
Replayable Execution Optimized for Page Sharing for a Managed Runtime Environment
Replayable Execution uses checkpointing to save an image of an application that can be shared across containers, enabling speedy restoration at service startup; it offers a 2X memory-footprint reduction and over 10X startup-time improvement.
SOCK: Rapid Task Provisioning with Serverless-Optimized Containers
This work analyzes Linux container primitives, identifying scalability bottlenecks related to storage and network isolation, and implements SOCK, a container system optimized for serverless workloads.
My VM is Lighter (and Safer) than your Container
This work finds that VMs can be as nimble as containers, provided they are small and the toolstack is fast enough, and presents a new Xen-based virtualization solution optimized for fast boot times regardless of the number of active VMs.
Firecracker: Lightweight Virtualization for Serverless Applications
Firecracker is a new open source Virtual Machine Monitor specialized for serverless workloads, but generally useful for containers, functions and other compute workloads within a reasonable set of constraints.
SnowFlock: rapid virtual machine cloning for cloud computing
To evaluate SnowFlock, the VM fork abstraction is implemented; SnowFlock provides sub-second VM cloning, scales to hundreds of workers, consumes few cloud I/O resources, and incurs negligible runtime overhead.