Saving energy in the datacentre - a leap of faith?
29-02-2012 - John Hatcher
The global trend towards green, sustainable operations is a significant driver for all major corporations, with dedicated energy and sustainability managers employed to reduce the energy usage and carbon footprints of their buildings. Green initiatives and energy conservation are becoming an integral part of company missions, portfolios and operations. Many leading businesses today place environmental issues at the top of their business plans, and green building associations are growing in prominence.
Data centres are among the world’s largest users of electricity, with servers running 24 hours a day, seven days a week, under tightly controlled environmental conditions and often not at full capacity. Numerous publications show that data centres are now a major consumer of the world’s energy – an estimated 2% of all power consumption is associated with data centres – and this share will only increase with the growth of IT systems and the rise in internet traffic.
The Situation Today
The air conditioning systems in data centres are different from the systems used for traditional buildings. In a data centre, the principal requirement is to maintain the indoor environment for IT equipment within specific parameters or ‘envelopes’, as opposed to other building types where the human need for comfort and well-being plays the major role. Any malfunction of the cooling systems (or the power supply) will quickly result in critical temperatures for the IT hardware and cause failures and downtime; a high level of redundancy is therefore always incorporated into data centre design.
However, the fundamental strategies used for controlling temperature and humidity for occupants or for clinical environments are no different from those used to control the environment for servers and IT equipment. The selection of cooling and ventilation equipment for a data centre is based on the difference between the indoor (IT-dominated) environment and the combination of outdoor conditions and additional heat loads.
Data centres were originally designed with optimized security and redundancy in mind – not energy efficiency – so the over-specification of plant and equipment is a common problem today:
- Maintaining low temperature and humidity setpoints for the server rooms
- Equipment selection based upon extreme outside conditions (which occur only a few days of the year)
- Excess air volumes supplied to the space 24/7, which poses problems for accurate temperature control and also for fire detection
- Poor control strategies which do not adapt to the building usage and the changing loads
- No consideration at design stage for future capacities, which can affect the design of the raised floor and the overall cooling performance.
This over-sizing phenomenon is typical in many data centre projects, and affects not only the operational cost but also the capital expenditure for the HVAC plant and all associated services (e.g. raised floor spaces, piping and ductwork). In a typical office building, savings in cooling energy of between 3% and 6% are possible for every additional 1°C rise in the space temperature setpoint. The current average space temperature of most data centres is between 22 and 23 °C.
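As a rough illustration of that rule of thumb, the per-degree savings compound over a multi-degree setpoint rise. The sketch below assumes (as an illustrative simplification, not a claim from the article) that the 3–6% figure applies multiplicatively per degree:

```python
# Illustrative estimate only. Assumption: a fixed fractional cooling-energy
# saving per 1 °C of setpoint rise, applied multiplicatively per degree.

def cooling_saving(delta_t_c: float, saving_per_degree: float) -> float:
    """Fraction of cooling energy saved for a setpoint rise of delta_t_c degrees."""
    return 1.0 - (1.0 - saving_per_degree) ** delta_t_c

# Raising the space setpoint from 22.5 °C to 27 °C (a 4.5 °C rise):
low_estimate = cooling_saving(4.5, 0.03)   # conservative 3% per °C
high_estimate = cooling_saving(4.5, 0.06)  # optimistic 6% per °C
print(f"estimated saving: {low_estimate:.0%} to {high_estimate:.0%}")
```

Even with the conservative figure, a few degrees of setpoint relaxation yields a double-digit percentage reduction in cooling energy.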
The efficiency of the cooling equipment and air flows plays a pivotal role in maintaining the performance, reliability and uptime of the IT equipment within the data centre. In addition to cooling, humidity levels are also an important factor: excessive exposure to high humidity can damage printed circuit boards, cause condensation and even lead to corrosion.
Today there are several new published guidelines which expand the allowable operating conditions of data centres, including the excellent Thermal Guidelines 2011 from ASHRAE TC9.9, which define new operating envelopes for different data centre classes with “recommended” and “allowable” parameters for temperature and humidity in the server rooms. These expanded guidelines not only reduce the cooling costs by virtue of the higher setpoints, but also increase the potential for free cooling from the outside air (economizers).
Optimizing your Data Centre
Today’s advanced building management and control systems (BMS) offer huge untapped potential for the optimization of building services equipment and overall energy saving in a data centre, particularly when combined with proven cooling algorithms, heat recovery strategies and precise control of the indoor IT environment according to actual demand.
Today’s distributed BMS systems allow information to be collected from across the entire data centre – from power meters and HVAC equipment to temperature mapping of zones – and used to calculate real-time cooling demand. Advanced BMS systems provide user-friendly engineering tools which allow data centre operators to analyze their building performance in real time, improve power usage effectiveness (PUE) and create customized reports for continuous efficiency improvements.
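PUE itself is simply the ratio of total facility power to IT equipment power, so a BMS that meters both can report it continuously. A minimal sketch (the function name and the figures are illustrative):

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power divided by IT power.

    1.0 is the theoretical ideal (no overhead); typical legacy data
    centres sit well above that, largely due to cooling.
    """
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# e.g. 1500 kW total facility load with 1000 kW of IT load:
print(pue(1500.0, 1000.0))  # 1.5
```

Trending this ratio over time is what lets the reports described above show whether efficiency measures are actually working.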
One such tool is the Economiser tx2 – a patented Siemens HVAC algorithm for the control of full air conditioning systems. It searches for the best possible space setpoint at the border of the desired data centre ‘envelope’, based upon the state of the air preconditioned by the energy recovery system and the actual conditions in the server room. This proven solution reduces costs for air handling plants and allows even more efficient energy recovery than is achieved by conventional HVAC solutions.
The Economiser tx2 algorithm is designed to prioritize the use of energy recovery units (free cooling) to maintain data centre setpoints within an allowable field (see fig. 2). When free cooling is not sufficient to maintain server room conditions, the most economical source of energy (heating, cooling, humidification and/or dehumidification) is selected to return space conditions to the allowable field. Independent room setpoints for temperature and relative humidity, each with tolerances, define the allowable envelope for the algorithm in line with the new 2011 Thermal Guidelines (the field can also be limited by a maximum absolute humidity).
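The details of the patented algorithm are not reproduced here, but the priority ladder it describes can be sketched in simplified form: coast while conditions are inside the allowable field, prefer free cooling when outside air can do the job, and fall back to mechanical modes only when it cannot. All names and the decision logic below are illustrative assumptions, not the Siemens implementation:

```python
# Simplified sketch of a free-cooling-first priority ladder.
# NOT the patented Economiser tx2 algorithm; illustrative only.

def select_mode(room_temp_c: float, setpoint_c: float,
                tolerance_c: float, outside_temp_c: float) -> str:
    low, high = setpoint_c - tolerance_c, setpoint_c + tolerance_c
    if low <= room_temp_c <= high:
        return "coast"                    # inside the allowable field: do nothing
    if room_temp_c > high:
        if outside_temp_c < room_temp_c:  # outside air can still absorb heat
            return "free-cooling"
        return "mechanical-cooling"       # last resort: run the compressors
    return "heat-recovery"                # room too cold: reuse server heat

# Room at 33 °C against a 23.5 °C setpoint with ±8.5 °C tolerance,
# outside air at 18 °C:
print(select_mode(33.0, 23.5, 8.5, 18.0))  # free-cooling
```

A real implementation would of course weigh humidification and dehumidification costs in the same decision, as the article notes.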
Optimal data centre conditions will prevail not only at particular setpoints for server room temperature and humidity, but also within certain tolerances on either side of these setpoints. For example: a temperature setpoint of 23.5°C and a relative humidity setpoint of 50% with tolerances of ±8.5°C and ±30% give the range for ‘allowable’ class A1.
In addition, it is sensible to limit the absolute humidity at high temperatures to prevent moisture affecting the IT equipment. This limit is typically set at about 11 g/kg, just below the 12 g/kg absolute humidity that corresponds to the maximum dewpoint of 17°C permitted for class A1. The wider the envelope between heating and cooling setpoints, the higher the energy savings will be. The same applies to the humidity control loop.
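The envelope test described above can be sketched with standard psychrometrics. The code below uses the Magnus approximation for saturation vapour pressure at sea-level pressure; the class A1 limits (15–32 °C, 20–80% rh, 12 g/kg maximum) follow the figures quoted in the text, and the function names are illustrative:

```python
import math

# Psychrometric sketch, sea-level pressure assumed. Class A1 'allowable'
# limits taken from the figures quoted in the article; names illustrative.

P_ATM = 101_325.0  # Pa, standard atmospheric pressure

def sat_vapour_pressure(t_c: float) -> float:
    """Saturation vapour pressure in Pa (Magnus approximation)."""
    return 610.94 * math.exp(17.625 * t_c / (t_c + 243.04))

def abs_humidity_g_per_kg(t_c: float, rh_pct: float) -> float:
    """Humidity ratio (g water per kg dry air) from temperature and rh."""
    e = rh_pct / 100.0 * sat_vapour_pressure(t_c)
    return 1000.0 * 0.622 * e / (P_ATM - e)

def within_a1_allowable(t_c: float, rh_pct: float) -> bool:
    return (15.0 <= t_c <= 32.0
            and 20.0 <= rh_pct <= 80.0
            and abs_humidity_g_per_kg(t_c, rh_pct) <= 12.0)
```

Saturated air at the 17 °C dewpoint indeed works out to roughly 12 g/kg, which is why the absolute-humidity cap matters mainly at the warm, humid corner of the envelope: 32 °C at 80% rh passes the temperature and rh checks but far exceeds 12 g/kg.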
Other Benefits of BMS
Another advantage of the building management system is the availability of data. The psychrometric chart (h-x diagram) used at the initial design stage can now be displayed graphically at the BMS management station, where it is extremely useful for dynamic visualization of the data centre’s conditioning processes.
Creating multiple trend logs allows efficient commissioning of the data centre during construction, fine-tuning of the control, and easier troubleshooting.
Recommended variables for trending in a data centre could be as follows:
- Supply air temperature and relative humidity to the server room
- Extract air temperature and relative humidity from the server room
- Selected server room space temperatures (mapped)
- Room temperature setpoints for heating and cooling
- Room relative humidity setpoint (for humidification and dehumidification)
- Outside air temperature and relative humidity
- Air quality values (CO2, VOC)
- Calculated absolute humidity
The logged data allow data centre operators to analyze the performance of the air conditioning plants over a pre-set period against server reliability and, if required, adjust the control settings for the heating, cooling, dehumidification and heat recovery systems. Once control is stable, applications such as the Economiser tx2 can be further tuned within the allowable envelopes to reduce energy consumption based upon energy cost factors:
- Expand the operation to higher data centre classes wherever possible
- Adjust the space cooling and de-humidification setpoints as high as possible
- Modify settings in the scheduler to match the effective data centre load
- Add additional room temperature and humidity sensors to monitor air distribution, hot spots and problem zones
- Ensure main air plants and local room air conditioners are only enabled when demand really exists.
Server reliability and energy savings are not mutually exclusive
Today’s Thermal Guidelines give data centre designers and operators the chance to select equipment based upon actual data centre classes and move to ‘compressor-less’ cooling in many locations around the world.
With the technology available today, energy saving in a data centre is not a leap of faith made at the expense of server reliability, just as energy saving in traditional buildings is never made at the expense of comfort levels for occupants.
For more information, visit Siemens