
Putting Data Centers on a Diet: Dynamic, Load-Dependent Rightsizing of Server Capacity

The past decade has witnessed unprecedented growth in the number, size, and capacity of the data centers that support our daily lives at home, in commerce, and in government. Data centers are the backbone of cloud computing: the core infrastructure behind social networking and the web services we access every day from devices of all scales, from cell phones to desktops. Data center deployments are growing worldwide at unprecedented rates. U.S. data centers consume about 100 billion kilowatt-hours of electricity annually, powering not just the IT equipment but also the cooling systems that remove the heat dissipated by the servers, and adding further stress to an already taxed power grid. Unfortunately, current data centers are not very energy-efficient, and significant amounts of energy are wasted in their operation.

Data centers are usually sized for peak demand, but such peaks rarely occur. Operators nevertheless tend to keep all servers online so that no requests are missed should the request volume rise suddenly. The result is servers that sit idle or run at low utilization, and because an idle server still dissipates a significant amount of power, overall energy efficiency suffers. An industry-wide study by the Gartner Group indicates that, on average, servers run at less than 15% of their maximum capacity.

In the data center industry, a great deal of effort has gone into reducing inefficiencies in cooling systems. As a result, cooling accounts for less than 30% of the total power drawn by today's newer, relatively well-designed data centers; the bulk of the remaining energy goes to the IT equipment itself. Significant additional savings are therefore possible by managing server capacity so that it tracks demand as it fluctuates from one instant to the next.

This breakthrough ES2 technology automatically provides just the amount of server capacity needed to handle the current offered load, activating additional servers as demand grows and shutting servers down, and thereby saving power, as demand drops. The improvement in energy efficiency comes from two main practices. First, active servers are operated at high utilization levels, which improves the overall energy efficiency of the IT equipment. Second, unused servers are shut off, avoiding the power wasted by idle servers.
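As a rough illustration of the first practice, the sketch below (in Python) sizes the active server pool so that the powered-on machines run near a high target utilization. The per-server capacity figure, the 80% utilization target, and the function name are illustrative assumptions, not part of ES2.

```python
# Minimal sketch (not ES2's actual implementation): sizing the active
# server pool from the current offered load so active machines run near
# a high target utilization.  Capacity and target values are assumptions.
import math

SERVER_CAPACITY_RPS = 500.0   # assumed requests/sec one server can sustain
TARGET_UTILIZATION  = 0.8     # run active servers hot, but below saturation

def servers_needed(offered_load_rps: float, min_servers: int = 1) -> int:
    """Return the number of servers to keep active for the current load."""
    effective_capacity = SERVER_CAPACITY_RPS * TARGET_UTILIZATION
    return max(min_servers, math.ceil(offered_load_rps / effective_capacity))

# Example: at 3,200 req/s, 8 servers run near 80% utilization;
# the rest of the pool can be powered down.
print(servers_needed(3200.0))  # -> 8
```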

The challenge in automatic server capacity provisioning lies in the time it takes to activate powered-off servers when the load grows. Because a server takes a few minutes to turn on, a purely reactive solution that powers up servers only after the request volume has risen will not work: sudden increases can exceed the capacity of the currently active servers well before the newly started servers are ready to accept requests. ES2's breakthrough technology therefore uses a proactive activation/deactivation strategy that draws on the recent history of the actual and offered load to predict the expected load and turn additional servers on in advance, avoiding service degradation. Additional features allow the degree of cooling to dynamically match the active server capacity, avoiding both waste from overcooling and damage from undercooling. Demonstrations of a prototype implementation show that the new technology permits reductions of over 25% in data center IT equipment power draw in realistic operating scenarios, with almost negligible impact on performance.
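A minimal sketch of the proactive idea described above, assuming a simple linear-trend forecast, a fixed server boot time, a small headroom margin, and an assumed per-server capacity; none of these values or names come from ES2 itself.

```python
# Minimal sketch, not the ES2 algorithm: a proactive control loop that
# forecasts offered load one server-boot-time ahead using a linear trend
# over recent samples, then sizes the active pool for that forecast.
# Boot time, headroom, history length, and per-server capacity are all
# illustrative assumptions.
import math
from collections import deque

BOOT_TIME_STEPS = 3      # control intervals a powered-off server needs to become ready (assumed)
HEADROOM        = 1.2    # extra margin to absorb prediction error (assumed)
HISTORY_LEN     = 12     # recent load samples used for the forecast (assumed)
PER_SERVER_RPS  = 400.0  # assumed sustainable requests/sec per active server

history = deque(maxlen=HISTORY_LEN)

def forecast(samples) -> float:
    """Extrapolate a linear trend BOOT_TIME_STEPS intervals into the future."""
    if len(samples) < 2:
        return samples[-1] if samples else 0.0
    slope = (samples[-1] - samples[0]) / (len(samples) - 1)
    return max(0.0, samples[-1] + slope * BOOT_TIME_STEPS)

def control_step(current_load_rps: float) -> int:
    """Record the latest load sample and return the target active-server
    count for the moment when servers started now would finish booting."""
    history.append(current_load_rps)
    future_load = forecast(list(history)) * HEADROOM
    return max(1, math.ceil(future_load / PER_SERVER_RPS))

# Example: a rising load already pulls the target above the instantaneous need.
print(control_step(1000.0), control_step(1400.0), control_step(1800.0))  # -> 3 8 9
```

Because the target count is computed for the instant when servers started now would finish booting, a rising load trend triggers power-ups before the currently active servers saturate.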

Economic Impact:

Automatic, dynamic server capacity provisioning in data centers, as enabled by this breakthrough technology, allows operating expenditures for the IT equipment to be reduced by as much as 25% without appreciable impact on performance. Additional cascaded energy savings are possible in the cooling system and in the power distribution and conversion networks. The technology is particularly well suited to data centers that provide online services such as social networking, email, news, shopping, and content search, where demand can fluctuate rapidly. The resulting drop in electricity draw also lowers dependence on fossil fuels and reduces the carbon footprint.



For more information, contact Kanad Ghose, ghose@cs.binghamton.edu, Bio www.cs.binghamton.edu/~ghose/, 607.777.4608.

ES2-2016.pdf