Donald Newell

As manycore architectures enable a large number of cores on the die, a key challenge that emerges is the availability of memory bandwidth with conventional DRAM solutions. To address this challenge, integration of large DRAM caches that provide as much as 5× higher bandwidth and as low as one third of the latency (as compared to conventional DRAM) is …
As we enter the era of CMP platforms with multiple threads/cores on the die, the diversity of the simultaneous workloads running on them is expected to increase. The rapid deployment of virtualization as a means to consolidate workloads onto a single platform is a prime example of this trend. In such scenarios, the quality of service (QoS) that each …
Virtualization is already becoming ubiquitous in data centers for the consolidation of multiple workloads on a single platform. However, there are very few performance studies of server consolidation workloads in the literature. In this paper, our goal is to analyze the performance characteristics of a representative server consolidation workload. To …
As multi-core architectures flourish in the marketplace, multi-application workload scenarios (such as server consolidation) are growing rapidly. When running multiple applications simultaneously on a platform, it has been shown that contention for shared platform resources such as the last-level cache can severely degrade performance and quality of service …
Data centers are increasingly employing virtualization and consolidation as a means to support a large number of disparate applications running simultaneously on server platforms. However, server platforms are still being designed and evaluated based on performance modeling of a single highly parallel application or a set of homogeneous workloads running …
As dual-core and quad-core processors arrive in the marketplace, the momentum behind CMP architectures continues to grow strong. As more and more cores/threads are placed on-die, the pressure on the memory subsystem is rapidly increasing. To address this issue, we explore DRAM cache architectures for CMP platforms. In this paper, we investigate the impact …
With the advent of dual-core chips in the marketplace, small-scale CMP (chip multiprocessor) architectures are becoming commonplace. We expect a continuing trend of increasing the number of cores on a die to maximize the performance/power efficiency of a single chip. We believe an era of large-scale CMPs (LCMPs) with several tens to hundreds of cores is on …
Recently developed I/O virtualization techniques have led to significant changes in network processing. These techniques require network packets to go through additional layers of processing, and these additional layers introduce significant overheads. It is therefore important to understand the performance implications of this additional processing on network …
With cloud and utility computing models gaining significant momentum, data centers are increasingly employing virtualization and consolidation as a means to support a large number …