Energy has become a critical issue for IT. As July’s Information Age cover story highlighted, a third to a half of all corporate energy consumption in the UK is now down to IT systems. And dealing with that energy – powering, cooling and paying for it – has precipitated a crisis in many data centres.
The subject certainly dominated the debate at two recent roundtable reader lunches on energy and data centre efficiency, with delegates regaling each other with stories of how close their data centres have come to disaster – all, of course, on an unreported basis.
It is clear that the problem is many-faceted: some delegates point to high costs and overheating systems, while others have struggled to find a suitable location.
As the strategic operations vice president at a large investment bank explained, there are a limited number of data centre locations available – especially within or around London’s M25. Finding adequate electricity supply, a fat pipe to the Internet, good communications for staff and a site outside of flood plains or flight paths is nearly impossible, he said. “We started out in the City, then moved to the Isle of Dogs, now we’re way out west of London.”
This confluence of issues has catapulted the data centre on to the board’s agenda. As one delegate explained, the data centre is no longer dismissed as “the shed we put our computers in”.
The infrastructure manager at a large multinational explained how the power problem had caught his company off-guard: during the last four years, high-performance servers have been constantly added to its 10-year-old data centres, but nobody thought about power requirements. “It’s bitten us quite hard,” he said.
Such problems are indicative of the lack of co-ordination between the IT and facilities departments, said the data centre manager at a large utilities company. “It’s reaching the stage where the data centre manager needs both technical IT skills and the facilities knowledge – it could be that it becomes a single specialist role.”
So what solutions are being considered? For some, co-location can seem the obvious choice: “Currently, estimating what our power requirements might be in five or 10 years is impossible,” explained a senior data centre executive at a large telecoms provider. “If you outsource it, someone else can make that risk calculation.”
Today’s IT models have created vast server farms that are massively under-used – running at an average of 10% to 20% of their full capacity, according to some analysts. But while they sit there near-idle, these servers are still drawing plenty of power and demanding constant cooling. Virtualisation technologies and grid computing potentially offer a way to better distribute workloads.
Others hope to persuade server makers to develop cooler, less power-hungry machines. “We’ve got to look to the chip makers to help us out,” said the utilities company data centre manager.
Another possibility currently being investigated is the use of direct current (DC) in the data centre. This could potentially cut electricity costs by 20%, said the telecoms sector data centre manager, who is actively investigating the switch away from AC power. Typically, AC power supplies are inefficient when converting the current to the various DC voltages required by individual servers. The question, he said, is whether “the masses of extra copper” required to carry direct current would be practical or cost effective.
Not one of the delegates suggested any kind of panacea that could resolve the data centre power challenge. And that meant one thing: making the business aware of the risks and getting it to commit to new infrastructure. As one delegate concluded: “The final answer depends on us having some serious conversation with the business.”