Characterizing Concurrency Mechanisms for NVIDIA GPUs under Deep Learning Workloads
@article{Gilman2021CharacterizingCM,
  title   = {Characterizing Concurrency Mechanisms for NVIDIA GPUs under Deep Learning Workloads},
  author  = {Guin Gilman and Robert J. Walls},
  journal = {ArXiv},
  year    = {2021},
  volume  = {abs/2110.00459}
}
3 Citations
Performance and Power Prediction for Concurrent Execution on GPUs
- Computer Science · ACM Transactions on Architecture and Code Optimization
- 2022
This paper proposes the first machine learning-based predictor of the performance and power of an ensemble of applications on a GPU, and shows that using the execution statistics of standalone workloads, together with the fairness of execution when those workloads are co-run with three representative microbenchmarks, yields reasonably accurate predictions.
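A minimal CUDA sketch of the kind of measurement such a predictor consumes: time a workload kernel alone, then co-run it with a microbenchmark in a separate stream, and report the slowdown as a fairness signal. The kernel bodies and the slowdown metric are illustrative placeholders, not the paper's actual benchmarks or features.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Stand-ins for a real workload and a contending microbenchmark.
__global__ void workload(float *x, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) for (int k = 0; k < 256; ++k) x[i] = x[i] * 1.0001f + 0.5f;
}
__global__ void microbench(float *y, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) for (int k = 0; k < 256; ++k) y[i] += y[i] * 0.9999f;
}

// Time one launch of the workload kernel on the given stream.
static float timeWorkload(float *x, int n, cudaStream_t s) {
    cudaEvent_t beg, end;
    cudaEventCreate(&beg); cudaEventCreate(&end);
    cudaEventRecord(beg, s);
    workload<<<(n + 255) / 256, 256, 0, s>>>(x, n);
    cudaEventRecord(end, s);
    cudaEventSynchronize(end);
    float ms; cudaEventElapsedTime(&ms, beg, end);
    cudaEventDestroy(beg); cudaEventDestroy(end);
    return ms;
}

int main() {
    const int n = 1 << 22;
    float *x, *y;
    cudaMalloc(&x, n * sizeof(float));
    cudaMalloc(&y, n * sizeof(float));
    cudaStream_t s1, s2;
    cudaStreamCreate(&s1); cudaStreamCreate(&s2);

    float solo = timeWorkload(x, n, s1);                // standalone statistic
    microbench<<<(n + 255) / 256, 256, 0, s2>>>(y, n);  // contender in flight
    float shared = timeWorkload(x, n, s1);              // co-run statistic
    cudaDeviceSynchronize();

    printf("solo %.2f ms, co-run %.2f ms, slowdown %.2fx\n",
           solo, shared, shared / solo);
    cudaFree(x); cudaFree(y);
    return 0;
}
```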
Aryl: An Elastic Cluster Scheduler for Deep Learning
- Computer Science
- 2022
Aryl is a new cluster scheduler that introduces the notion of server preemption cost, which it greedily reduces during server reclaiming, and relies on the JCT reduction value defined for each additional worker of an elastic job to solve the scheduling problem as a multiple-choice knapsack problem.
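For context, a multiple-choice knapsack takes groups of mutually exclusive options (here, hypothetically, worker counts for each elastic job, each with a JCT-reduction value and a GPU cost) and picks at most one option per group under a capacity budget. A minimal host-side dynamic-programming sketch, with an invented item structure that is not taken from the Aryl paper:

```cuda
#include <cstdio>
#include <vector>
#include <algorithm>

// One candidate allocation for a job: a GPU cost and the value
// (e.g., JCT reduction) of granting that many extra workers.
struct Option { int cost; double value; };

// Multiple-choice knapsack: pick at most one option per job so total
// cost <= capacity and total value is maximized.
double mckp(const std::vector<std::vector<Option>> &jobs, int capacity) {
    std::vector<double> best(capacity + 1, 0.0);
    for (const auto &opts : jobs) {
        std::vector<double> next = best;  // "skip this job" baseline
        for (const auto &o : opts)
            for (int c = capacity; c >= o.cost; --c)
                next[c] = std::max(next[c], best[c - o.cost] + o.value);
        best.swap(next);
    }
    return best[capacity];
}

int main() {
    // Two elastic jobs, each with options for 1 or 2 extra GPUs.
    std::vector<std::vector<Option>> jobs = {
        {{1, 3.0}, {2, 4.5}},   // job A: diminishing returns
        {{1, 2.0}, {2, 4.2}},   // job B
    };
    printf("best total JCT reduction: %.1f\n", mckp(jobs, 3));  // -> 7.2
    return 0;
}
```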
Characterizing Concurrency Mechanisms for NVIDIA GPUs under Deep Learning Workloads (Extended Abstract)
- Computer Science · ACM SIGMETRICS Performance Evaluation Review
- 2022
Hazelwood et al. observed that at Facebook data centers, variations in user activity (e.g. due to diurnal load) resulted in low utilization periods with large pools of idle resources [4]. To make use…
References
Showing 10 of 22 references
Demystifying the Placement Policies of the NVIDIA GPU Thread Block Scheduler for Concurrent Kernels
- Computer Science · SIGMETRICS Performance Evaluation Review
- 2020
This work empirically derives the scheduler's behavior under concurrent workloads for NVIDIA's Pascal, Volta, and Turing microarchitectures, finding that the scheduler chooses the next SM based on that SM's local resource availability.
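A hedged host-side sketch of that locally greedy placement rule: each incoming thread block goes to the SM with the most free resources. The model below collapses the real per-SM constraints (registers, threads, shared memory) into a single abstract resource unit.

```cuda
#include <cstdio>
#include <vector>
#include <algorithm>

// Toy model: each SM tracks free capacity in one abstract resource unit.
struct SM { int id; int freeUnits; };

// Place one thread block on the SM with the most free resources,
// mirroring the locally greedy policy the paper observes.
int placeBlock(std::vector<SM> &sms, int blockCost) {
    auto it = std::max_element(sms.begin(), sms.end(),
        [](const SM &a, const SM &b) { return a.freeUnits < b.freeUnits; });
    if (it == sms.end() || it->freeUnits < blockCost) return -1;  // must wait
    it->freeUnits -= blockCost;
    return it->id;
}

int main() {
    std::vector<SM> sms = {{0, 8}, {1, 8}, {2, 8}};
    for (int b = 0; b < 7; ++b)  // blocks from two concurrent kernels
        printf("block %d -> SM %d\n", b, placeBlock(sms, 3));
    return 0;
}
```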
Improving GPGPU concurrency with elastic kernels
- Computer Science · ASPLOS '13
- 2013
This work studies concurrent execution of GPU kernels using multiprogram workloads on current NVIDIA Fermi GPUs and proposes transformations that convert CUDA kernels into elastic kernels, which permit fine-grained control over their resource usage.
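One common way to make a kernel "elastic" in this sense, so a scheduler can shrink its grid without changing its results, is a grid-stride loop. A minimal sketch of that general technique, not the paper's actual transformation:

```cuda
#include <cuda_runtime.h>

// Grid-stride loop: correctness no longer depends on the launch grid
// covering the whole array, so a scheduler can cap the number of
// blocks (and thus SM usage) while the kernel still touches every element.
__global__ void scaleElastic(float *x, int n, float a) {
    for (int i = blockIdx.x * blockDim.x + threadIdx.x;
         i < n;
         i += gridDim.x * blockDim.x)
        x[i] *= a;
}

int main() {
    const int n = 1 << 20;
    float *x;
    cudaMalloc(&x, n * sizeof(float));
    // Launch with a deliberately small grid: 32 blocks instead of n/256.
    scaleElastic<<<32, 256>>>(x, n, 2.0f);
    cudaDeviceSynchronize();
    cudaFree(x);
    return 0;
}
```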
Warped-Slicer: Efficient Intra-SM Slicing through Dynamic Resource Partitioning for GPU Multiprogramming
- Computer Science · 2016 ACM/IEEE 43rd Annual International Symposium on Computer Architecture (ISCA)
- 2016
This paper proposes Warped-Slicer, a dynamic intra-SM slicing strategy that uses an analytical method to calculate the SM resource partitioning across different kernels that maximizes performance while remaining computationally efficient.
Deadline-Based Scheduling for GPU with Preemption Support
- Computer Science · 2018 IEEE Real-Time Systems Symposium (RTSS)
- 2018
This paper presents the design of a prototype real-time scheduler for GPU activities on an embedded System-on-Chip featuring a cutting-edge NVIDIA GPU architecture adopted in the autonomous driving domain; the scheduler leverages latest-generation architectural features such as pixel-level and thread-level preemption.
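As a rough host-level illustration of deadline-based dispatch, the sketch below picks the pending GPU activity with the earliest deadline; the Task fields are invented for the sketch and say nothing about the paper's actual scheduler, which additionally relies on preemption to displace running work.

```cuda
#include <cstdio>
#include <queue>
#include <vector>

// Invented task descriptor: a GPU activity with an absolute deadline.
struct Task {
    int id;
    double deadline;  // time by which the activity must finish
};
struct ByDeadline {
    bool operator()(const Task &a, const Task &b) const {
        return a.deadline > b.deadline;  // min-heap on deadline
    }
};

int main() {
    // Earliest-deadline-first: always dispatch the pending activity
    // with the nearest deadline; a preemption mechanism would let it
    // displace a running, less urgent kernel.
    std::priority_queue<Task, std::vector<Task>, ByDeadline> ready;
    ready.push({1, 30.0});
    ready.push({2, 10.0});
    ready.push({3, 20.0});
    while (!ready.empty()) {
        printf("dispatch task %d (deadline %.0f)\n",
               ready.top().id, ready.top().deadline);
        ready.pop();
    }
    return 0;
}
```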
Enabling preemptive multiprogramming on GPUs
- Computer Science · 2014 ACM/IEEE 41st International Symposium on Computer Architecture (ISCA)
- 2014
This paper argues for preemptive multitasking and designs two preemption mechanisms that can be used to implement GPU scheduling policies; it extends an NVIDIA GK110 (Kepler)-like GPU architecture to allow concurrent execution of GPU kernels from different user processes and implements a scheduling policy that dynamically distributes the GPU cores among concurrently running kernels according to their priorities.
Dissecting the CUDA scheduling hierarchy: a Performance and Predictability Perspective
- Computer Science · 2020 IEEE Real-Time and Embedded Technology and Applications Symposium (RTAS)
- 2020
This paper corrects and consolidates previously published assumptions about the hierarchical scheduling policies of NVIDIA GPUs and their proprietary CUDA application programming interface; it also discusses how these mechanisms evolved with recently released GPU microarchitectures and how such changes influence the scheduling models to be exploited by real-time system engineers.
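A minimal CUDA example of the user-visible scheduling surface such work dissects: independent streams that may run concurrently, one created with elevated priority. Priorities are only hints; the actual interleaving is decided by the scheduling hierarchy the paper studies.

```cuda
#include <cuda_runtime.h>

__global__ void spin(float *x, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) for (int k = 0; k < 512; ++k) x[i] += 1.0f;
}

int main() {
    const int n = 1 << 20;
    float *a, *b;
    cudaMalloc(&a, n * sizeof(float));
    cudaMalloc(&b, n * sizeof(float));

    int least, greatest;  // numerically lower value = higher priority
    cudaDeviceGetStreamPriorityRange(&least, &greatest);
    cudaStream_t fast, slow;
    cudaStreamCreateWithPriority(&fast, cudaStreamNonBlocking, greatest);
    cudaStreamCreateWithPriority(&slow, cudaStreamNonBlocking, least);

    // Work in distinct streams has no false dependencies, so the
    // hardware scheduler is free to interleave or overlap the kernels.
    spin<<<(n + 255) / 256, 256, 0, slow>>>(a, n);
    spin<<<(n + 255) / 256, 256, 0, fast>>>(b, n);
    cudaDeviceSynchronize();

    cudaStreamDestroy(fast); cudaStreamDestroy(slow);
    cudaFree(a); cudaFree(b);
    return 0;
}
```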
AntMan: Dynamic Scaling on GPU Clusters for Deep Learning
- Computer Science · OSDI
- 2020
This paper presents AntMan, a deep learning infrastructure that co-designs cluster schedulers with deep learning frameworks and has been deployed in production at Alibaba to manage tens of thousands of daily deep learning jobs across thousands of GPUs.
GSLICE: controlled spatial sharing of GPUs for a scalable inference platform
- Computer Science · SoCC
- 2020
GSLICE virtualizes the GPU by apportioning GPU resources across different Inference Functions (IFs), thus providing isolation and guaranteeing performance; it develops self-learning, adaptive GPU resource allocation and batching schemes that account for network traffic characteristics while keeping inference latencies below service-level objectives.
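GSLICE builds on NVIDIA's Multi-Process Service (MPS). One public knob for this kind of spatial apportioning is the MPS active-thread percentage, set via the CUDA_MPS_ACTIVE_THREAD_PERCENTAGE environment variable before the CUDA context is created. A sketch under that assumption; the 40% value is arbitrary, and GSLICE's own allocation logic is considerably more elaborate:

```cuda
#include <cstdlib>
#include <cuda_runtime.h>

__global__ void inference(float *x, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= 0.5f;
}

int main() {
    // Under an MPS daemon, this caps the fraction of SM threads this
    // client may use; it must be set before the CUDA context exists.
    setenv("CUDA_MPS_ACTIVE_THREAD_PERCENTAGE", "40", 1);

    const int n = 1 << 20;
    float *x;
    cudaMalloc(&x, n * sizeof(float));  // context created here, cap applies
    inference<<<(n + 255) / 256, 256>>>(x, n);
    cudaDeviceSynchronize();
    cudaFree(x);
    return 0;
}
```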
CuMAS: Data Transfer Aware Multi-Application Scheduling for Shared GPUs
- Computer Science · ICS
- 2016
The authors demonstrate that the data-transfer-aware nature of the CuMAS framework improves the throughput of simultaneously executed CUDA applications by up to 44% when run on an NVIDIA K40c GPU using applications from the CUDA SDK and the Rodinia benchmark suite.
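The overlap such a scheduler exploits is expressible with pinned host memory and cudaMemcpyAsync: one application's transfer can hide behind another's compute, since the copy engine and the SMs are separate resources. A generic sketch of that overlap, not CuMAS's scheduler itself:

```cuda
#include <cuda_runtime.h>

__global__ void compute(float *x, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) for (int k = 0; k < 256; ++k) x[i] += 1.0f;
}

int main() {
    const int n = 1 << 22;
    float *hostBuf, *devA, *devB;
    cudaMallocHost(&hostBuf, n * sizeof(float));  // pinned, needed for async copy
    cudaMalloc(&devA, n * sizeof(float));
    cudaMalloc(&devB, n * sizeof(float));

    cudaStream_t copyStream, computeStream;
    cudaStreamCreate(&copyStream);
    cudaStreamCreate(&computeStream);

    // App 1's kernel runs while app 2's input transfer is in flight.
    compute<<<(n + 255) / 256, 256, 0, computeStream>>>(devA, n);
    cudaMemcpyAsync(devB, hostBuf, n * sizeof(float),
                    cudaMemcpyHostToDevice, copyStream);
    cudaDeviceSynchronize();

    cudaFreeHost(hostBuf); cudaFree(devA); cudaFree(devB);
    cudaStreamDestroy(copyStream); cudaStreamDestroy(computeStream);
    return 0;
}
```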
Chimera: Collaborative Preemption for Multitasking on a Shared GPU
- Computer Science · ASPLOS
- 2015
Chimera first introduces streaming-multiprocessor flushing, which can instantly preempt an SM by detecting and exploiting idempotent execution; it then uses flushing collaboratively with two previously proposed GPU preemption techniques, context switching and draining, to minimize throughput overhead while achieving the required preemption latency.