But a heretical school of thought is emerging that resists the temptation to throw yet more technology at the problem, arguing instead that the crisis will not abate until IT and business leaders dispel some of the myths that have grown up around data centre operations. The thinking is distinctly radical for a technology-loving industry: its proponents favour fresh-air cooling, the elimination of unnecessary power conversions, the use of thermal devices as low-tech as curtains, the enforcement of low-density environments for IT equipment and the acceptance of temperature swings that range from freezing to 50ºC.
While most data centre managers may take issue with these approaches, what no-one is dismissing is the pressing need to address the problem.
It is now widely recognised that the rate of growth in the corporate data centre’s requirement for power is simply insupportable. In many cases, the traditional capital costs associated with the high-tech equipment housed within data centres are already being outstripped by the sheer cost of the electricity it consumes – and not just the electricity drawn by the banks of servers, storage devices, comms equipment and other IT kit, but also that consumed by the supporting equipment required for power conversion, uninterruptible power supply, air conditioning, humidity control and, increasingly, liquid cooling. As energy prices have risen and corporate awareness of environmental issues has grown, so too has the sense of urgency to tackle the problem.
Big lies
To date, most of the proposed solutions to the data centre energy crisis have focused on introducing new energy-efficient products and practices, notes Steve O’Donnell, global head of data centres at BT – and one of the most radical thinkers in this area. In fact, much of the inefficiency has become deeply ingrained in data centre design because of widespread misunderstandings over the running of these facilities, he adds.
He makes no apology for referring to the received wisdom of data centre design and operations as “the big lies”.
One of the most fundamental mistakes that businesses continue to make is over-specification in the design process. With data centres powering the critical processes of most enterprises, that infrastructure needs to be highly resilient. But are today’s accepted designs for delivering ultra-high availability compatible with energy efficiency? Liam Newcombe, secretary of the BCS Data Centre Specialist Group, thinks not. Businesses have often unthinkingly opted for Tier 4 facilities (as specified in the widely accepted guidelines of the Uptime Institute), with resilience and redundancy built in at all levels to guard against the risk of an outage. Keeping rows and rows of equipment running just in case of a hiccup is inescapably inefficient, and only justifiable for certain applications. Frequently, says Newcombe, this over-specification is simply unnecessary.
Tier 4 facilities [the highest level, to ensure availability] should be reserved for only those classes of applications, such as financial trading platforms, where resilience is essential. “Stop filling them up with crap,” pleads Newcombe: other applications, such as email servers, should be hosted elsewhere in less rigorous – and more energy-efficient – environments.
BT challenges conventional data centre thinking
The big lies:
* Data centres need to run at between 20ºC and 24ºC
* External air is too dirty to use
* Humidity is a big concern
* Designs need to meet Tier 1-4 standards
* Density as high as 30kW per square metre is inevitable and good
* AC power is best and cheapest
* The most reliable data centre design is high-availability
The reality:
* Modern systems can operate efficiently and reliably at between 5ºC and 40ºC
* Fresh-air cooling is highly viable and effective
* Modern equipment is free of any real humidity constraints
* Mean time between failure can still be as high as 10,000 years without adhering to Tier 3 or 4 uptime standards
* DC power can cut energy consumption dramatically
There has also been a presumption that data centres have to be designed to support a narrow temperature band within which equipment will operate reliably and efficiently, and also that packing high-density computer equipment into smaller spaces is the modern and inevitable route for all enterprise IT. Again, O’Donnell says such ill-informed thinking is exacerbating current energy issues and ingraining inefficiency into the design of facilities.
As global head of BT’s data centre network, he has spent years wrestling with the company’s IT-related energy profile. BT runs 79 large-scale data centres around the globe, each with a footprint in excess of 5,000 square feet, with facilities in the UK housing some 50,000 servers alone. By its own reckoning, it is the largest consumer of electricity in the UK, sucking up 0.7% of the country’s entire supply.
Much of its understanding in this area is drawn from its long history of running telephone exchanges – it currently supports 5,500 of them. These were essentially the precursors to data centres, and today they are filled with much the same computer equipment and switches as any sizeable computer room. Yet they are not run like data centres: “They are all low-density, fresh-air cooled sites, and we thought, ‘Why can’t we do the same with our computer rooms?’”, recounts O’Donnell.
Its research into temperature was an eye-opener. There is a common line of thought that suggests that the equipment housed in data centres only operates efficiently and reliably at ambient temperatures between 20ºC and 24ºC. Not so, says O’Donnell: “Intel’s microprocessors will run at 100ºC forever; Google’s data centres run at 40ºC-plus.” In fact, Google found no correlation between the operating temperature of disks and longevity or reliability, with a data centre operating as well at 20ºC as one baking its contents at 40ºC.
Data centres can – and should – be run over much greater temperature ranges, says O’Donnell: “All modern systems will operate effectively and reliably between 5ºC and 40ºC.” BT runs its ‘own-use’ systems at between 0ºC and 50ºC, he adds.
Dense thinking
Once data centre managers are freed from the shackles of tight temperature control, they can address another major cause of energy waste: cooling. In a recent study, the US Department of Energy outlined a rough breakdown of where the energy used in a typical data centre goes: roughly 20% into power conversion, 40% into cooling equipment and 40% into the server load and computing operations. The cooling component is so high because most data centres will not use outside air, believing it to be too contaminated with dust, too humid or otherwise likely to interfere with the machines.
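To put that split in perspective, here is a minimal back-of-the-envelope sketch using only the rough DoE proportions quoted above; the function name and the watts-per-useful-watt framing are illustrative rather than drawn from the study itself.

```python
# Back-of-the-envelope view of the DoE breakdown quoted above: for every unit
# of total draw, roughly 0.2 goes to power conversion, 0.4 to cooling and
# 0.4 to the IT load itself. These shares are the article's rough figures.

def overhead_per_it_watt(conversion_share=0.20, cooling_share=0.40, it_share=0.40):
    """Total facility watts drawn per watt of useful IT load."""
    return (conversion_share + cooling_share + it_share) / it_share

print(overhead_per_it_watt())  # ~2.5 watts drawn for every watt of useful computing
```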
But that is another myth, says O’Donnell. Fresh-air cooling has dramatic potential. External temperatures in the UK are low enough for fresh-air cooling to be used for 90% of the year, he says.
However, he warns, fresh-air cooling is not suited to today’s high-density systems where 30 kilowatts (kW) of systems might be packed into a square metre. With blade servers packed tightly into racks, specialised cooling systems – from tiny, high-power fans to water chillers – have been necessary to neutralise the emerging hotspots.
The hotspot problem does not arise in low-density environments, notes O’Donnell, where fresh-air cooling is usually sufficient. He recommends keeping densities well clear of 20kW per square metre.
That combination of low density and fresh air can take out as much as 85% of the refrigeration costs, he says. High-density data centres only make sense “when in a submarine or the NatWest Tower”, where the cost of space is sky-high, he says. Otherwise, he adds, companies should build big, low-density data centres.
Nevertheless, fresh-air cooling is not without its challenges, notes Neil Rasmussen, chief technology officer and co-founder of data centre infrastructure maker APC. “It’s not simply a case of opening up all the windows or drilling a big air vent,” he says. Customising facilities so that enough air can get to the right places is tricky in existing buildings. Even when it is achieved, “lots of fan power” will be needed to ensure the air circulates sufficiently, he points out.
Despite these reservations, Rasmussen remains a strong advocate for rethinking data centre strategies. Today, many data centre managers find themselves in the ludicrous position of having to employ energy-sapping humidifiers to eliminate the risk of static build-up; that risk arises because the air-conditioner removes the moisture from the environment. “It’s like you’re powering systems to fight each other,” he notes.
But while there are some areas of accord between BT’s O’Donnell and APC’s Rasmussen on the power-saving benefits of fresh-air cooling, on other mechanisms for reducing energy waste in the data centre they are poles apart.
The DC difference
O’Donnell has conducted a painstaking hunt for areas where energy waste can be minimised. This confirmed his suspicion that the industry addiction to high availability was introducing slack practices.
A prime example of this has been the proliferation of uninterruptible power supplies (UPS), he says, which has introduced energy waste as a result of the necessary power conversions back and forth between alternating current (AC) and direct current (DC) – two or three conversions are not uncommon before the power reaches a DC-running server. Each time, depending on the efficiency of the UPS system, a proportion of the available electricity is lost.
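As a rough illustration of how those conversions compound, the short sketch below simply multiplies per-stage efficiencies together; the three-stage chain and the 94% figure are assumptions chosen for the example, not BT’s measured numbers.

```python
# Illustrative only: each AC/DC conversion stage loses a slice of the power.
# Per-stage efficiencies below are assumptions, not measured UPS figures.

def end_to_end_efficiency(stage_efficiencies):
    """Multiply per-stage efficiencies to get the fraction of power that survives."""
    result = 1.0
    for eff in stage_efficiencies:
        result *= eff
    return result

# e.g. grid AC -> UPS DC -> UPS AC -> server PSU DC, each stage ~94% efficient
print(round(end_to_end_efficiency([0.94, 0.94, 0.94]), 3))  # ~0.831: roughly 17% lost before the server
```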
To avoid such conversions, BT is switching the bulk of its data centres to DC power. And that, combined with the removal of many UPS systems, has cut electricity use by 15%.
BT can dispense with many UPS systems by building in resilience at a higher level, running applications across multiple data centres. So, while each individual centre uses less electricity than a standard Tier 3 facility, the overall reliability of the estate is higher than that of a Tier 4 facility.
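The arithmetic behind that claim can be sketched simply: an application spread across independent sites is only down when all of them are down at once. The availability figures below are illustrative, loosely based on commonly quoted tier targets, and are not BT’s own numbers.

```python
# An application running in N independent sites fails only if every site
# fails simultaneously. Site availabilities here are illustrative assumptions.

def combined_availability(site_availability, sites=2):
    """Availability of an application that stays up while any one site is up."""
    return 1 - (1 - site_availability) ** sites

tier3_like, tier4_like = 0.99982, 0.99995   # commonly quoted tier targets, used as assumptions
print(combined_availability(tier3_like, sites=2))  # ~0.99999997, beating a single Tier 4 site
print(combined_availability(tier4_like, sites=1))  # 0.99995 for one Tier 4 site
```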
In contrast, Rasmussen is no fan of direct current. The power loss from UPS systems is overstated, he insists. “Maybe it was true 10 years ago, but our transformers are 96% efficient these days.”
And if Rasmussen does not believe the power-loss case is proven, he finds the argument for DC-powered facilities even more tenuous. High-voltage DC facilities have been regarded as undesirable, but even a data centre designed to run on a 48-volt DC supply (a standard for equipment in the telecommunications industry) would need copper wiring “as thick as an arm”, says Rasmussen.
Given the current level of copper prices, wiring that thick is simply too costly – not to mention the difficulties of physically accommodating wiring that weighs tonnes, he insists. “The whole [DC] thing is misinformation,” he says.
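The physics behind the arm-thick copper claim is simple Ohm’s-law arithmetic: at a fixed power, lowering the voltage raises the current in proportion, and conductor cross-section has to grow roughly in step with current to keep resistive losses in check. The 10kW rack figure below is a hypothetical example, not one quoted by Rasmussen.

```python
# Current drawn at a given supply voltage: I = P / V. A lower voltage means
# proportionally higher current, and hence much heavier conductors.

def supply_current(power_watts, voltage):
    """Amps drawn by a load of power_watts at the given supply voltage."""
    return power_watts / voltage

rack_load = 10_000  # hypothetical 10kW rack
print(round(supply_current(rack_load, 230)))  # ~43A on a 230V AC feed
print(round(supply_current(rack_load, 48)))   # ~208A on a 48V DC feed, several times the copper
```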
One of the most widely quoted studies of the use of direct current in data centres was undertaken by the US Department of Energy’s Lawrence Berkeley National Laboratory. It reported that using direct current throughout the facility could reduce the energy needed to run a data centre by as much as 20%.
A closer reading of the report suggests that the true figure is probably only 7% efficiency savings, notes Rasmussen, a level that would quickly be wiped out by copper costs.
Such vigorous debate is indicative of the difficulties facing data centre managers – and it can only get hotter, as two near-certainties, business hunger for compute power and energy prices, climb the same steep upward trajectory. To make that growth sustainable, a lot has to change.
High-density blasts
Not so very long ago, the IT manager of a large British financial services company was struggling to convince his management board of the serious problem of heat within the corporate data centre. To persuade them of the strength of his argument, he invited the entire group to join him in the data centre and to stand behind the server-packed racks. The furnace-like blast of air brought home the point.
Others may not need such first-hand experience to appreciate the conditions in many of today’s data centres, but the tale is emblematic of the impact that high-density computing is having on IT.
The introduction of blade servers has enabled organisations to wring out ever-greater computer power from existing data centre facilities. However, the impact of this on energy consumption has been stark: Ken Brill, an executive director of IT think-tank The Uptime Institute, estimates that “The same dollar spending on new servers today embeds two to four times more power consumption in the same (or less) space than was required by the equipment being replaced.”
Despite the impact on energy consumption, demand for high-density servers remains strong. Analyst group Gartner says sales of such servers are rising at a 15% compound annual growth rate.
Herein lies one of the most contentious issues in IT today: for some, such as BT’s global head of data centres, Steve O’Donnell, this is proof of IT’s adherence to misguided thinking – high-density systems demand powerful cooling systems, thereby compounding the problems of energy consumption. Others, such as Pat Gelsinger, senior VP at Intel’s digital enterprise group, argue that high-density servers can actually make data centres more efficient.
Modern servers are delivering dramatically improved processing power/energy consumption ratios compared to legacy equipment, ensuring that IT can deliver the necessary applications while actually using less power, Gelsinger adds.
Indeed, because blade systems share many of the components traditionally housed in individual servers, they can, under some circumstances, be more energy efficient than low-density systems, says Neil Rasmussen of APC.