Recently, the demand for data center computing has surged, increasing the total energy footprint of data centers worldwide. Data centers typically comprise three subsystems: IT equipment provides services to customers; the power infrastructure supports the IT and cooling equipment; and the cooling infrastructure removes heat generated by these subsystems. …
Data center costs for computer power and cooling are staggering. Because certain physical locations inside the data center are more efficient to cool than others, allocating heavy computational workloads to servers in those locations could bring substantial savings. This simple idea raises two critical research …
Internet-based applications and their resulting multitier distributed architectures have changed the focus of design for large-scale Internet computing. Internet server applications execute in a horizontally scalable topology across hundreds or thousands of commodity servers in Internet data centers. Increasing scale and power density significantly impact …
With power having become a critical issue in the operation of data centers today, there has been an increased push towards the vision of “energy-proportional computing”, in which no power is used by idle systems, very low power is used by lightly loaded systems, and proportionately higher power is used at higher loads. Unfortunately, given the state of the art of …
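The gap this abstract alludes to can be made concrete with a small sketch. The affine power model below is a common simplification (the idle and peak wattages are illustrative assumptions, not figures from the abstract); it contrasts what a typical server draws with the energy-proportional ideal.

```python
def server_power(utilization, p_idle=100.0, p_peak=250.0):
    """Affine model of a typical server: even at zero load it draws
    p_idle watts. p_idle/p_peak are illustrative assumptions."""
    return p_idle + (p_peak - p_idle) * utilization

def proportional_power(utilization, p_peak=250.0):
    """Energy-proportional ideal: power scales linearly from zero."""
    return p_peak * utilization

# At 10% utilization the typical server draws 115 W, while the
# proportional ideal would draw only 25 W.
print(server_power(0.1), proportional_power(0.1))
```

The difference at low utilization is exactly why idle and lightly loaded systems dominate wasted energy in practice.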
Data centers contain IT, power and cooling infrastructures, each of which is typically managed independently. In this paper, we propose a holistic approach that couples the management of IT, power and cooling infrastructures to improve the efficiency of data center operations. Our approach considers application performance management, dynamic workload …
Large-scale data centers (~20,000 m²) will be the major energy consumers of the next generation. The trend towards deployment of computer systems in large numbers, in very dense rack configurations in a data center, has resulted in very high power densities at room level. Due to high heat loads (~3 MW) in an interconnected environment, data center design …
Reduction of resource consumption in data centers is a growing concern for data center designers, operators and users. Accordingly, interest in the use of renewable energy to provide some portion of a data center’s overall energy usage is also growing. One key concern is that the amount of renewable energy necessary to satisfy a typical data center …
The concept of Grid, based on coordinated resource sharing and problem solving in dynamic, multi-institutional virtual organizations, is emerging as the new paradigm in distributed and pervasive computing for scientific as well as commercial applications. We assume a global network of data centers housing an aggregation of computing, networking and storage …
This paper describes an approach for designing a power management plan that matches the supply of power with the demand for power in data centers. Power may come from the grid, from local renewable sources, and possibly from energy storage subsystems. The supply of renewable power is often time-varying in a manner that depends on the source that provides …
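A minimal sketch of the supply/demand matching idea, for one time step: draw from renewables first, then storage, then the grid, and let surplus renewable output charge the storage. This is an illustrative greedy policy under assumed units (kWh), not the paper's actual planning algorithm.

```python
def plan_power(demand, renewable, storage, storage_capacity):
    """Greedy one-step dispatch (illustrative): renewable first,
    then stored energy, then grid; surplus renewable charges
    storage up to capacity. All quantities in kWh."""
    from_renewable = min(demand, renewable)
    remaining = demand - from_renewable
    from_storage = min(remaining, storage)
    from_grid = remaining - from_storage
    surplus = renewable - from_renewable
    new_storage = min(storage - from_storage + surplus, storage_capacity)
    return {"renewable": from_renewable, "storage": from_storage,
            "grid": from_grid, "stored": new_storage}

# A sunny interval: 70 kWh of renewable covers 40 kWh of demand
# and the 30 kWh surplus charges storage.
print(plan_power(demand=40, renewable=70, storage=10, storage_capacity=50))
```

Because renewable supply is time-varying, a real plan would run such a step over a forecast horizon rather than greedily per interval.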
A high compute density data center of today is characterized as one consisting of thousands of racks, each with multiple computing units. The computing units include multiple microprocessors, each dissipating approximately 250 W of power. The heat dissipation from a rack containing such computing units exceeds 10 kW. Today’s data center, with 1000 racks, …
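The figures in this abstract support a quick back-of-envelope check. The per-rack processor count below is an assumption chosen for illustration (the abstract does not give it); the 250 W and 1000-rack figures are from the text.

```python
# Back-of-envelope heat load from the abstract's figures.
watts_per_processor = 250       # from the abstract
processors_per_rack = 40        # assumed for illustration
racks = 1000                    # from the abstract

rack_heat_w = watts_per_processor * processors_per_rack
total_heat_mw = rack_heat_w * racks / 1e6

print(rack_heat_w, "W per rack;", total_heat_mw, "MW room total")
```

Forty 250 W processors already put a rack at the 10 kW level the abstract cites, and a 1000-rack room then dissipates on the order of 10 MW, which is why cooling dominates data center design at this scale.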