As business demand for processor cycles has soared, more and more servers are being crammed into data centres that were never designed to handle their voracious appetite for power and the vast amounts of heat that they create.
“If you buy a lot of blade servers, you also buy a lot of electricity,” says Peter Hannaford, director of business development at data centre environment specialist APC. “And if you need a lot of electricity, you get some problems.”
But the problems are not just with the machines themselves. “We’re still designing data centres the way we’ve designed them for 30 years and that needs to change,” says Hannaford.
Peter Hannaford
A former IT director and head of companies providing turnkey solutions to IT-dependent organisations, Peter Hannaford has broad expertise in data centre infrastructure. As director of business development in EMEA at APC’s Availability Enhancement Group, he is currently responsible for the European roll-out of APC’s InfraStruXure data centre environment architecture.
One example he gives is power factor correction. Most of the equipment installed in data centres has had power factor correction built in – components that prevent heavy neutral currents and harmonics – since regulators demanded it in the early 1990s. Yet many data centres are still being designed with power factor correction in mind, 15 years later.
Similarly, preventive maintenance and testing was proven to cause more problems than it solved back in the late 1980s, but still features in many designs. “Typically, the learning cycle is over 10 years,” says Hannaford. “This severely limits the ability to adopt new infrastructure technology.”
The fact is that the technologies they are housing are changing at a much more rapid pace than data centre design expertise. Power density has increased tenfold in four years, while rack weight has doubled in the same period. “A 10 to 15 year learning pattern is not compatible with survival,” says Hannaford.
He blames this on the “untenable” complexity of contemporary data centre design. When buying a car, most people are happy to walk into a showroom and debate the inclusion of a few dozen extra features. Cars are highly engineered and complex but are sold in standardised packages, so why can’t the same apply to data centres?
“There are 12,000 data elements in a modern data centre, but each time we start the design with a clean sheet of paper,” says Hannaford. “It all has to be integrated, documented, verified and tested. We need to eliminate this one-time engineering.”
Defining density
However, being able to test and pre-integrate racks of servers in a factory demands standardisation and modularisation, so that when components are purchased, buyers can predict their requirements for power, cooling, management and logistics.
Hannaford believes that current ways of specifying data centre power density are deeply flawed. “The conventional watts-per-square-metre representation is ambiguous and misleading,” he says, and it cannot handle today’s higher-density loads or the need for variable target power densities.
On the other hand, specifying power consumption rack by rack is too granular: a rack’s consumption can vary according to its position in the room, and a single per-rack figure takes no account of particular areas of high density. “We need to support the incremental deployment of unknown densities,” he says. “We don’t know what’s coming down the line.” APC therefore believes that specifying power consumption by row is the best basis on which to design a data centre.
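As a rough illustration of the argument, the short Python sketch below uses entirely hypothetical rack loads, row size and budgets (not APC figures) to show how a mixed row can sit comfortably within a row-level power budget even though a couple of high-density racks would breach a uniform per-rack limit.

```python
# Illustrative sketch only: hypothetical loads and budgets, not APC sizing data.
# Compares a uniform per-rack power limit with a single per-row budget when
# rack densities vary and are not known in advance.

row_budget_kw = 60.0        # assumed power/cooling budget for the whole row
per_rack_limit_kw = 6.0     # assumed uniform limit if specified rack by rack

# A mixed row: two high-density blade racks among lighter general-purpose ones.
rack_loads_kw = [2.5, 3.0, 12.0, 4.0, 2.0, 11.5, 3.5, 2.5, 3.0, 4.0]

over_limit = [load for load in rack_loads_kw if load > per_rack_limit_kw]
row_total = sum(rack_loads_kw)

print(f"Racks exceeding the per-rack limit: {len(over_limit)}")
print(f"Row total: {row_total:.1f} kW against a row budget of {row_budget_kw:.1f} kW"
      f" -> {'fits' if row_total <= row_budget_kw else 'over budget'}")
```

Budgeting at row level leaves headroom for a few dense racks to arrive later without every rack position having to be engineered for the worst case.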
There is a similar paucity of standards for measuring or benchmarking the success – or otherwise – of any design. Hannaford believes that a good gauge of data centre efficiency is the amount of power used by systems as a proportion of the total power input into a data centre.
Managers can then see just how much energy is being used by the computing equipment itself, and how much by associated ‘environmental’ systems for cooling, humidifying, managing power supply, and so on.
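A minimal worked example of that gauge, using assumed figures rather than measurements from any real facility:

```python
# Illustrative sketch only: assumed figures, not measurements from a real site.
total_input_kw = 1000.0   # total power drawn by the data centre
it_load_kw = 300.0        # power reaching the computing equipment itself

it_share = it_load_kw / total_input_kw       # the efficiency gauge described above
overhead_kw = total_input_kw - it_load_kw    # cooling, humidification, power handling

print(f"IT share of total input: {it_share:.0%}")                            # 30%
print(f"Environmental overhead: {overhead_kw:.0f} kW ({1 - it_share:.0%})")  # 700 kW (70%)
```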
What organisations typically find is that only around 30% of the electricity they consume within the data centre is used to power the IT systems – the bulk of the remaining 70% is used simply to keep them from melting.
The big challenge for data centres of the future: to reverse that ratio.