Binghamton University: The State University of New York
The Georgia Institute of Technology
University of Texas at Arlington
Last Reviewed: (not done)
ES2 develops methodologies for efficiently operating electronic systems, including data centers, by controlling cooling resources and managing workloads to achieve optimal energy consumption.
There are thousands of data centers across the United States that handle a wide spectrum of information and processing needs for the government, military, business and industry. Data center services for entertainment, e-business, finances, cloud computing and healthcare continue to grow at a dramatic rate.
Data centers consume massive amounts of energy, and about half of that energy is wasted through inefficiencies in the system. The problem is exacerbated by technology trends that drive down the form factors of electronics and IT equipment, resulting in high power densities. With such high heat loads, cooling data centers while maintaining equipment performance is one of the biggest challenges facing data center operators.
Binghamton University, State University of New York, and its partners, the University of Texas at Arlington, Villanova University, and the Georgia Institute of Technology, have established a National Science Foundation Industry/University Cooperative Research Center (I/UCRC) in the area of energy-smart electronic systems. The I/UCRC in Energy-Smart Electronic Systems works in partnership with industry and academia to develop systematic methodologies for operating electronic systems and cooling equipment synergistically, as dynamic self-sensing and self-regulating systems that are predictive, stable, and verified in real time. The center brings together computer scientists and mechanical and electrical engineers in a synergistic, multidisciplinary team to address these issues.
Energy Efficient Systems
In response to the need for new dynamic, predictive, and synergistic energy-optimization and thermal-management design criteria for electronic systems, the Center will focus on developing systematic methodologies for operating electronic systems, including data centers, as dynamic self-sensing and self-regulating systems that are predictive and verified in real time. Algorithms will be developed to control cooling resources and to assist expert system schedulers in scheduling and/or migrating workload while simultaneously adjusting cooling system output to achieve optimal energy consumption. New scheduling, workload prediction, and management policies need to be developed at the software level, including kernel scheduling policies that operate the IT equipment within pre-specified energy limits. Thermal management resources will also be allocated dynamically in response to the workload allocation and scheduling policies. This is an intrinsically multidisciplinary research area, integrating software algorithms at the operating system and workload scheduling levels, control systems, thermal management, and hardware. New models will have direct application to a variety of electronic and computing systems, ranging from the chip or device level to the system or data center level.
Problem-oriented research related to the industry’s search for optimizing the energy efficiency of electronic systems and data centers will be addressed during the Center’s first five-year program. The research agenda is unique in its use of techniques that synergistically combine (a) computing (including software and microarchitectural techniques), (b) innovative cooling solutions and thermal management techniques, and (c) adaptive, proactive, and scalable control system concepts in devising a system-level solution. The initial research portfolio includes:
Determining key intrinsic energy consumption inefficiencies at every level, from devices to entire systems such as avionics, communication infrastructures, and data centers. This addresses the inherent energy efficiency of all components, whether mechanical, IT, or software; efficiency in the energy delivery and conversion process; and the inefficiencies in the interactions of the components within the system.
Techniques for managing data centers synergistically using predictive models of computing workload and thermal trends, driven by live models in the control loop.
Software techniques implemented within the kernel for job scheduling and server-local configuration settings, based on energy budgets associated with individual servers, to permit predictive and synergistic management of the cooling system using a scalable control system.
Microarchitectural techniques for improving energy-efficiency of server chips and memory systems.
Airflow management techniques in data centers based on fast, compact models that are continuously validated with live data, using these models to drive job allocation and virtualization mechanisms to reduce overall energy expenditures.
Software techniques at the kernel, application and compiler levels for exploiting emerging non-volatile memory technologies in improving the intrinsic energy-efficiency of servers.
Techniques for improving the energy efficiency of buildings and containers.
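To make the energy-budget scheduling idea above concrete, the following is a minimal sketch, assuming a greedy placement policy and a simple proportional cooling rule; all names, budgets, and constants are illustrative assumptions, not the Center's actual algorithms. Each job is placed on the server with the most remaining energy budget that can still accommodate it, and the cooling setpoint is then adjusted to track the resulting heat load.

```python
# Hypothetical sketch of energy-budget-aware job placement coupled to a
# cooling adjustment. Budgets and constants are illustrative only.
from dataclasses import dataclass

@dataclass
class Server:
    name: str
    energy_budget_j: float      # pre-specified energy limit (joules)
    energy_used_j: float = 0.0

    def can_accept(self, job_energy_j):
        return self.energy_used_j + job_energy_j <= self.energy_budget_j

def schedule(jobs, servers):
    """Greedy placement: each job goes to the server with the most
    remaining budget that can still accommodate it; jobs that fit
    nowhere are deferred (placement value None)."""
    placement = {}
    for job_name, job_energy in jobs:
        candidates = [s for s in servers if s.can_accept(job_energy)]
        if not candidates:
            placement[job_name] = None
            continue
        target = max(candidates,
                     key=lambda s: s.energy_budget_j - s.energy_used_j)
        target.energy_used_j += job_energy
        placement[job_name] = target.name
    return placement

def cooling_setpoint_c(servers, base_c=27.0, k=0.004):
    """Crude proportional rule: lower the supply-air setpoint as the
    aggregate dissipated energy grows (illustrative only)."""
    total_j = sum(s.energy_used_j for s in servers)
    return max(18.0, base_c - k * total_j / 1000.0)

servers = [Server("s1", 5000.0), Server("s2", 3000.0)]
jobs = [("render", 2000.0), ("etl", 2500.0), ("ml-train", 4000.0)]
placement = schedule(jobs, servers)
print(placement)                          # "ml-train" is deferred: no budget fits
print(round(cooling_setpoint_c(servers), 2))
```

A real implementation would close the loop continuously, re-predicting workload and re-solving placement as budgets and thermal conditions change, but the sketch shows how budget checks and cooling output can be coupled in one decision step.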
Thermal Management and Controls
Several current research projects deal with modeling, managing, and controlling thermal flows and cooling in and around servers, racks, and floor tiles.
Waste Energy Recovery
This research thrust analyzes research done to date on various methods of recovering waste energy from servers for reuse.
Workload Scheduling and Virtualization
This research thrust deals with managing workloads to run servers at peak efficiency by establishing predictive models that determine when servers should be running and when they should be placed in a hibernation state to conserve energy.
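One simple way to realize such a predictive policy is sketched below, assuming a moving-average demand forecast with a fixed headroom margin; the class name, window size, and capacity figures are hypothetical, not the thrust's actual models. Servers beyond the forecast demand plus headroom are candidates for hibernation.

```python
# Illustrative predictive hibernation policy: forecast demand with a
# moving average and keep only enough servers awake to cover the
# forecast plus a safety headroom. All parameters are assumptions.
import math
from collections import deque

class HibernationPolicy:
    def __init__(self, jobs_per_server=10, window=4, headroom=0.2):
        self.jobs_per_server = jobs_per_server
        self.samples = deque(maxlen=window)  # recent demand samples
        self.headroom = headroom             # fractional over-provision

    def observe(self, active_jobs):
        self.samples.append(active_jobs)

    def servers_awake(self, total_servers):
        """How many servers to keep running; the rest may hibernate."""
        if not self.samples:
            return total_servers             # no history: stay awake
        forecast = sum(self.samples) / len(self.samples)
        needed = math.ceil(forecast * (1 + self.headroom)
                           / self.jobs_per_server)
        return min(total_servers, max(1, needed))

policy = HibernationPolicy()
for demand in (30, 50, 40, 40):              # recent job counts
    policy.observe(demand)
awake = policy.servers_awake(total_servers=8)
print(awake, 8 - awake)                      # servers awake vs. hibernated
```

The headroom term is the key design choice: too small and demand spikes force slow wake-ups, too large and the energy savings evaporate.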
Hybrid AC/DC Powered Data Centers
The NSF recently awarded a "Clusters for Grand Challenges" award to ES2 and GRAPES for research in hybrid AC/DC-powered data centers. Currently, there are significant power losses as AC power is distributed and converted to the low-voltage DC power needed by chips within servers, networking, and storage devices. Virtualization and other IT management techniques reduce the effective power delivery in data centers. Energy needs for data centers are significant and growing.
This cluster will conduct research addressing various aspects of incorporating DC power into data centers, including highlighting the advantages of using DC power, demonstrating the benefits of incorporating renewable energy sources into the power supply, establishing new metrics for AC- and DC-powered data centers, validating improvements in operational reliability and availability, and examining new devices and cooling systems for DC power conversion.
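To see why conversion losses motivate this work, a back-of-the-envelope comparison of chained conversion efficiencies can help; the stage efficiencies below are illustrative assumptions for a conventional AC chain versus a hypothetical 380 V DC chain, not measured values from either project.

```python
# Back-of-the-envelope comparison of power-delivery chains. End-to-end
# efficiency is the product of the stage efficiencies; all stage
# values here are illustrative assumptions, not measurements.
from math import prod

# Conventional AC chain: double-conversion UPS, PDU transformer,
# server PSU, then on-board voltage regulators.
ac_chain = {"UPS": 0.94, "PDU": 0.98,
            "PSU (AC->12V DC)": 0.90, "VR (12V->1V)": 0.88}

# Hypothetical 380V DC chain: a single rectifier stage feeds the DC
# bus directly, eliminating one conversion step.
dc_chain = {"rectifier": 0.96, "DC-DC (380V->12V)": 0.95,
            "VR (12V->1V)": 0.88}

ac_eff = prod(ac_chain.values())
dc_eff = prod(dc_chain.values())
print(f"AC chain end-to-end efficiency: {ac_eff:.3f}")
print(f"DC chain end-to-end efficiency: {dc_eff:.3f}")
```

Even with optimistic numbers for both chains, removing one conversion stage recovers several percent of delivered power, which compounds across thousands of servers.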
Binghamton University has amassed a vast infrastructure for energy-efficient systems research by academic and industrial partners. Growth in this area is enabled by a new $30M building, which opened in 2014. Technical expertise and infrastructure at the Integrated Electronics Engineering Center (IEEC) and the Analytical and Diagnostics Laboratory (ADL) are maintained by Ph.D.-level professional staff. Electronics packaging facilities at the IEEC, a New York State Center of Advanced Technology, include an on-site demonstration facility that allows for rigorous and replicable testing of new technologies. Laboratories are equipped for analyzing electronics packaging technology products and for performing physical, chemical, surface, and electronic analysis of products and materials. Services include measurement of material properties; design and reliability testing; and product analysis for determining manufacturing viability. Instrumentation includes: thermal cycling chambers (4); temperature and humidity chambers (3); a thermal shock chamber; a HAST chamber; shock tables (2); a JEDEC standard; a shaker table with oven; material testing systems; a digital image correlation system; shadow/cross-sectional moiré; and nanoindentation/AFM. The ADL is a $21M, 8,000 sq. ft. multiuser facility funded by New York State with the goal of enabling high-technology commercialization for industry. Staffed with four Ph.D.-level scientists, the ADL offers researchers and partners instrumentation including FIB, TEM, thermal analysis tooling, SEM, AFM, confocal microscopy, X-ray diffractometry, and a microfabrication facility. Computer laboratories are extensively networked, wirelessly networked, and connected to Internet II.
The Computer Science department has its own technical staff for maintaining the research equipment: two large Dell clusters (64 dual-core, dual-CPU nodes) with 15K SCSI disks, 4 GB of RAM, and 8 TB of external RAID (shared with other researchers); 16 AMD dual-core workstations and 11 dual-CPU Xeon machines dedicated for exclusive use by the architecture research group led by the co-PIs; one 32-node, dual-Xeon server cluster; and several quad-core Xeon servers. The Center also plans to construct an operational data center at partner facilities in Binghamton, NY. Industry partners will be able to showcase best-of-class technologies, test their most sophisticated research and development initiatives, and investigate optimized interoperability of equipment from different vendors in a 7,800 sq. ft. facility with three independently controlled and managed zones for member-defined research projects.
University of Texas at Arlington: An electronics cooling laboratory has equipment related to air cooling. The Nanotechnology Research & Teaching Facility (NanoFab) at UTA features a 10,000-square-foot clean room. Several additional measurement laboratories with complete instrumentation are also located in the facility. A Stereo Particle Image Velocimetry (SPIV) system housed at the Aerodynamics Research Center (ARC) is available for experimental flow visualization. The Center will partner with UTA’s Texas Manufacturing Assistance Center (TMAC), which regularly performs utilities assessments of regional companies to understand energy usage in operational facilities. A strong collaboration with UTA’s Advanced Robotics Research Institute (ARRI), which engages in novel processes and tools for the assembly, packaging, and integration of manufacturable devices and systems at the microscale, will also be leveraged.
Villanova University: The Villanova University Laboratory for Advanced Thermal and Fluid Systems (www.villanova.edu/latfs/facilities.html) is a modern, comprehensive laboratory for fundamental investigations in thermal transport and characterization of thermal management in electronics, energy, and propulsion systems. The laboratory houses several major facilities and many customized rigs. The Low Speed Boundary Layer Wind Tunnel is a versatile open-return wind tunnel with flow velocities up to 60 m/s and freestream turbulence of less than 1%. The Closed Return Aerodynamics Wind Tunnel is a commercial closed-return wind tunnel with a cross section of 2 ft x 2 ft and flow velocities as high as 55 m/s (120 mph) in the test section. The Jet Impingement Facility is designed to provide clean, low-turbulence flow into a variety of nozzle configurations used to study the fluid mechanics and heat transfer of impinging jets for electronics cooling. A companion Spray Cooling Rig is an apparatus designed to investigate the fluid mechanics and heat transfer of spray and droplet cooling. A Mini/Micro Channel Flow Loop is a specialized liquid flow loop designed to deliver metered, constant-temperature flow for investigation of single- and multi-phase heat transfer in small-scale heat exchangers for electronics cooling. The laboratory has many other custom rigs for measuring thermal properties such as thermal conductivity and thermal diffusivity, and for measuring the thermal impedance of interface materials. Diagnostic and measurement tools include thermal and particle imaging velocimetry, infrared imaging, ultra-high-speed video, and liquid crystal thermal visualization. In addition, the Villanova Steel ORCA Research Center for Digital Utilities (VSORC) will provide state-of-the-art facilities for research in data center energy efficiency, including research in IT, thermal management, controls, and energy technology. VSORC is a 10,000 sq. ft. facility located at the new Steel ORCA Data Center in Monmouth Junction, just north of Princeton, New Jersey.
The Georgia Institute of Technology: The Microelectronics and Emerging Technologies Thermal Laboratory (METTL) houses fabrication and characterization facilities, as well as experimental rigs for the study of heat transfer and fluid flow phenomena from tens of nanometers to approximately meter length scales. Characterization equipment includes infrared microscopy, particle image velocimetry, high-speed imaging, and temperature, pressure, and flow rate measurement capabilities over a broad range. Fabrication capabilities include wafer dicing, wire bonding, and nanofabrication. Experiments at the rack level will be performed at the Consortium for Energy Efficient Thermal Management (CEETHERM) Data Center Laboratory, which accommodates 28 computing racks arranged in a typical hot-aisle/cold-aisle configuration.
University of Texas at Arlington
Department of Mechanical and Aerospace Engineering
Arlington, Texas, 76019
Villanova University
College of Engineering
800 Lancaster Avenue
Villanova, Pennsylvania, 19085
The Georgia Institute of Technology
The George W. Woodruff School of Mechanical Engineering
North Ave NW
Atlanta, Georgia, 30332