As business leaders are increasingly aware, the data centres that power their organisations’ IT-intensive processes are at breaking point. For a start, there is the physical challenge of obtaining sufficient data centre space. Many of the data centres in use today were built in the 1980s or 1990s, often within operational headquarters, and are being packed to capacity with new equipment as demand from business users for new IT services continues unabated. The problem is that these facilities were not designed to meet the power and cooling requirements of densely packed, modern IT kit.
With an acute shortage of sites for new facilities and a shrinking amount of rentable data centre space – especially in London and other metropolitan areas – many organisations have undertaken the daunting task of retrofitting their existing data centres, often adding new electricity feeds, more powerful cooling systems and more efficient transformers.
Power pressures
The challenge of obtaining sufficient electricity to power the servers, storage systems, network switches and all the ancillary equipment that sits around them is particularly taxing. There is also the issue of dealing with the excessive heat that modern systems now generate – a factor that can trigger sluggish performance, machine shutdown or even complete failure if not managed diligently.
Against that backdrop, there are two underlying pressures on data centre managers: to keep control over costs and to work towards new requirements for greater energy efficiency.
Despite the huge impetus to bring about change, city-centre sites will remain an essential part of many IT operations, says Ken Robson, an infrastructure specialist at investment bank Lehman Brothers. That is because many business-critical applications, such as banks’ trading platforms, demand sub-second latency – something that is impossible to achieve unless the servers are housed nearby.
But to ensure these facilities continue to function as the nerve centre of the enterprise, senior managers need to decide exactly what is business critical. Too often, data centres are filled up with “crap”, says Liam Newcombe, chairman of the data centre group at the British Computer Society (BCS). There is a whole range of applications that can be moved out of the data centre and into lower-cost environments. “There’s just no excuse for putting email servers in them,” he adds.
That line of thinking opens up an alternative way to approach data centres: splitting the estate into distinct high-density and low-density facilities.
Efficient targets
The high-density centres, sited in metropolitan areas, would be reserved for latency-sensitive applications. Inevitably, operating costs would be higher – nevertheless, much is being done to make these facilities more efficient. Sophisticated dynamic cooling systems, such as those developed by the likes of Hewlett-Packard and APC, are enabling businesses to optimise the process of removing heat from the data centre by concentrating efforts at the rack level.
Further efficiency gains are possible in these high-end facilities through the greater use of virtualisation technology. Current-generation x86 servers typically operate at around 20% utilisation, but this can be driven up towards 60% through virtualisation, says Stephen Nunn, head of data centre operations at IT consultancy Accenture. For a 200-server estate, that effectively means between 50 and 60 servers can be turned off, he says, dramatically lowering power use and, therefore, operational costs.
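For a sense of the arithmetic behind that claim, the short Python sketch below estimates consolidation naively from the utilisation figures quoted above. It assumes a perfectly repackable workload with no failover headroom, which is why it produces a larger saving than the conservative, real-world figure Nunn cites.

import math

def hosts_needed(servers, current_util, target_util):
    """Hosts required if the same aggregate workload is repacked
    so that each remaining host runs at target_util."""
    workload = servers * current_util          # total demand in server-equivalents
    return math.ceil(workload / target_util)

estate = 200
remaining = hosts_needed(estate, current_util=0.20, target_util=0.60)
print(f"{estate} servers -> {remaining} hosts "
      f"({estate - remaining} switched off in this naive model)")
# In practice, failover headroom, peak loads and workloads that cannot
# share a host all pull the achievable saving well below this ceiling.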
Outside metropolitan areas, the development of large-scale data centres has taken a new twist. Google, Microsoft and BT, among other giants, have all been building data centres in remote locations, where there is an abundance of readily available power and where environmental conditions permit the use of fresh-air cooling.
As Steve O’Donnell, global head of data centres at BT, explains, when space is no longer a constraint, racks need not be tightly packed. This makes fresh-air cooling viable, and at a stroke does away with many energy-draining cooling systems.
Other low-tech innovations can also improve efficiency within these low-density data centres, reports O’Donnell. BT has always strived to follow best practice within the data centre, such as implementing alternating hot and cold aisles. This can be enhanced further with thermal devices as low-tech as curtains, he says: BT uses the type of long plastic strip curtains frequently found in food retailers’ meat freezers to reinforce the demarcation between hot and cold aisles.
These greenfield sites present IT leaders with the opportunity to incorporate the very latest thinking about data centre design. Traditionally, having acquired a site, organisations have built their data centre to fill it, notes BCS’s Newcombe.
There are advantages to taking a more modular approach. IT is changing so rapidly that designing a building for 15 years hence is impossible. Instead, Newcombe proposes that data centres be built on a ‘crop rotation’ model, in which the facility is divided into four segments, each updated on a rotating basis. This minimises upfront capital outlay and lets organisations incorporate the latest technology, ensuring that their facilities remain fit for purpose.
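To make the rotation concrete, the toy Python schedule below walks through the model; the one-segment-per-year cadence and the start year are illustrative assumptions, not details from Newcombe’s proposal.

import itertools

# Toy schedule for the 'crop rotation' model: a four-segment estate,
# with one segment refitted each year (an assumed cadence), so every
# segment is rebuilt on a rolling four-year cycle.
SEGMENTS = ["A", "B", "C", "D"]

def refresh_plan(start_year, years):
    """Yield (year, segment) pairs for the rolling refit schedule."""
    for offset, segment in zip(range(years), itertools.cycle(SEGMENTS)):
        yield start_year + offset, segment

for year, segment in refresh_plan(2008, 8):
    print(f"{year}: refit segment {segment}")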