Social networking giant Facebook’s initial public offering earlier this month capped a remarkable story of online growth. It took Facebook four years to reach its first 100 million users, but the site now claims over 800 million active users. Behind that success lies an equally remarkable story of how the company managed to expand its data centre infrastructure quickly enough to keep the site usable in the face of such growth.
In October 2011, the company announced its first data centre outside North America, in Lulea in Sweden. The facility is most notable for being designed from the ground up not just to provide the capacity to comfortably accommodate the company’s expected one billion users, but to be as energy efficient and environmentally friendly as possible.
“Our goals in building our own data centres were simple: Make them as efficient as possible, remove everything extraneous, minimise our impact on the environment and be open about what we were doing,” says Tom Furlong, director of site operations at Facebook. “For example, Lulea is our first data centre to draw its power primarily from renewables, and it features design evolutions like a 70% reduction in our reliance on backup generators.”
Sweden’s climate is ideal for minimising power consumption, says Furlong. “The location provides an excellent climate that enables us to take full advantage of outside air for cooling, resulting in a significant reduction in energy consumption,” he says.
The location means that outside air can be used for cooling for between eight and ten months of the year, and exhaust heat from the servers is captured and used to help heat the offices. On top of that, Facebook has adopted a proprietary uninterruptible power supply (UPS) technology that reduces electricity usage by up to 12% – UPS systems are normally a big source of data centre waste.
Rainwater that falls on the facility’s roof is redirected into adjacent creeks to minimise the building’s impact on the environment, and the toilets are low-flush.
Facebook’s Lulea data centre follows on from a two-year internal project to build the most efficient data centre possible from the ground up in Prineville, Oregon, in the US. That project included a 480-volt electrical distribution system to reduce energy loss and the stripping out of anything in the data centre’s servers that was not strictly required. The result was a 38% reduction in energy usage compared with Facebook’s existing facilities, without costing a penny more, helping to reduce overall costs by one-quarter.
Facebook is now sharing its experience of building data centres through the Open Compute Project, an initiative to spread best practice in data centre design.
It is sorely needed. According to research by Dr Jonathan Koomey, consulting professor at Stanford University, data centres account for an estimated 1.3% of total electricity use worldwide, and about 2% in the US. And while the dot-com boom may have turned to bust in 2000, data centres continued to multiply: their electricity use doubled between 2000 and 2005 and grew by a further 56% between 2005 and 2010.
What is worse, says Quocirca analyst Clive Longbottom, is that as much as nine-tenths of the energy used to power a data centre is wasted in various ways. “Only 40% of electricity coming into a data centre is used to run workloads,” he says. “The rest is ‘wasted’ through cooling, UPS and other secondary systems.”
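Longbottom’s figure maps onto the industry’s standard power usage effectiveness (PUE) metric, which divides total facility power by the power actually delivered to IT equipment. The sketch below illustrates the arithmetic; only the 40% ratio comes from Longbottom, while the 1MW facility load is a hypothetical figure chosen to make the numbers concrete.

```python
# Rough illustration of power usage effectiveness (PUE).
# PUE = total facility power / power delivered to IT equipment.
# Only the 40%-to-workloads ratio comes from the article; the 1MW
# facility load is hypothetical, used to make the arithmetic concrete.

facility_power_kw = 1000.0   # hypothetical total draw: 1MW
it_fraction = 0.40           # share reaching IT workloads, per Longbottom

it_power_kw = facility_power_kw * it_fraction
pue = facility_power_kw / it_power_kw

print(f"IT load: {it_power_kw:.0f} kW")                                   # 400 kW
print(f"Implied PUE: {pue:.2f}")                                          # 2.50
print(f"Lost to cooling, UPS and so on: {facility_power_kw - it_power_kw:.0f} kW")
```

On that reading, a facility where only 40% of the incoming power reaches workloads is running at a PUE of around 2.5.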
These factors are driving organisations to go to ever more extreme lengths to cut data centre energy consumption and carbon emissions.
NREL’s campus of the future
Another example is the data centre on the US National Renewable Energy Laboratory’s (NREL) ‘campus of the future’ in Golden, Colorado. The facility was built in a three-year, $4.9 million project after the organisation decided to scrap its existing data centre altogether.
Nine-tenths of NREL’s legacy servers were replaced with Hewlett-Packard (HP) blade servers with variable-speed fans and efficient power supplies. Four-fifths of the old servers’ workloads were virtualised to radically improve efficiency – a 23-to-one ratio of virtual to physical machines, according to Gartner vice president and distinguished analyst Jay Pultz, who has studied the project. “Only database and Microsoft Exchange servers were not virtualised,” he says.
“In terms of power, 20 one-rack unit servers, each drawing more than 300 watts, were replaced by one blade drawing 215 watts – a more than 10-to-one power reduction,” says Pultz. “With virtualisation, the average power consumed per virtual machine is less than 11 watts – nearly a 30-to-one power reduction.”
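Pultz’s ratios can be sanity-checked with some simple arithmetic. The sketch below uses only the figures in his quote, and reads it as 20 legacy servers being consolidated onto a single 215-watt blade; that consolidation arrangement is an assumption.

```python
# Back-of-the-envelope check of the consolidation ratios Pultz describes.
# All wattage figures come from the quote; treat them as approximate.

old_servers = 20
old_watts_each = 300      # "each drawing more than 300 watts"
blade_watts = 215         # single blade assumed to replace the 20 servers
watts_per_vm = 11         # "less than 11 watts" per virtual machine

old_total = old_servers * old_watts_each        # 6,000 W
consolidation_ratio = old_total / blade_watts   # ~28:1, consistent with "more than 10-to-one"
per_vm_ratio = old_watts_each / watts_per_vm    # ~27:1, consistent with "nearly 30-to-one"

print(f"Old estate: {old_total} W across {old_servers} servers")
print(f"Server-to-blade power ratio: {consolidation_ratio:.0f}:1")
print(f"Legacy server vs virtual machine: {per_vm_ratio:.0f}:1")
```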
That combination of virtualisation with modern blade servers alone provided the lion’s share of the efficiency savings, but it was not everything.
NREL also made use of hot-aisle containment, in which IT equipment is laid out so that the hot air it gives off can be captured rather than mixing with the cool supply air. Servers and storage are grouped in pods, and the exhaust heat collected from the hot aisles is redistributed to help heat the building. Like Facebook’s data centre in Lulea, NREL’s Golden facility also makes use of outside air for cooling.
“The custom-built cooling system features a large external air intake, high-efficiency fans and air filtration – no chillers are needed,” says Pultz. This was combined with an increase in the ambient temperature within the data centre, from 20°C to 24°C, to take advantage of modern equipment’s ability to operate efficiently at higher temperatures. Finally, the older, 80%-efficient UPS system was replaced with one capable of running at 97% efficiency.
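The jump from 80% to 97% UPS efficiency matters more than the headline numbers suggest, because the relevant quantity is the loss rather than the efficiency. A minimal sketch of that arithmetic, assuming a hypothetical 100kW IT load passing through the UPS:

```python
# Losses for a UPS at 80% versus 97% efficiency.
# The efficiency figures come from the article; the 100kW IT load is
# hypothetical, used only to make the arithmetic concrete.

it_load_kw = 100.0

def ups_input_and_loss(efficiency: float) -> tuple[float, float]:
    """Return (power drawn from the grid, power lost) for a given UPS efficiency."""
    grid_input = it_load_kw / efficiency
    return grid_input, grid_input - it_load_kw

old_input, old_loss = ups_input_and_loss(0.80)   # 125.0 kW in, 25.0 kW lost
new_input, new_loss = ups_input_and_loss(0.97)   # ~103.1 kW in, ~3.1 kW lost

print(f"80% UPS: draws {old_input:.1f} kW, wastes {old_loss:.1f} kW")
print(f"97% UPS: draws {new_input:.1f} kW, wastes {new_loss:.1f} kW")
print(f"UPS losses cut by a factor of {old_loss / new_loss:.1f}")
```

In other words, UPS losses fall by roughly a factor of eight, from about a quarter of the IT load to around 3% of it.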
Overall, says Pultz, the organisation has cut power consumption from 217 watts per user to 42 watts, and achieved an annual reduction in carbon emissions of 2,250 metric tons. Annual energy costs would come in at around $280,000, compared with an estimated $450,000 for a conventional data centre – and NREL has also installed a five-acre array of photovoltaic solar cells that covers the data centre’s daytime running needs, lowering the bill further still.
In financial terms, the new facility is expected to save NREL around $3.5 million in total costs over 15 years. It has also saved space, occupying about 175 square metres compared to the 230 square metres its old data centre required.
AMD’s consolidated cloud
Installing a new data centre on a greenfield site, with both the financial and organisational backing to take every green measure available, is one thing, but most organisations do not have that luxury.
Partly as a result of a string of acquisitions, microprocessor maker Advanced Micro Devices (AMD) had built up a collection of 18 data centres worldwide. Since 2009, it has undergone a consolidation programme to reduce that number to just three – two in the US and one in Malaysia.
In the process, it is refreshing its data centre estate from top to bottom with new servers and supporting infrastructure, says Farid Dana, director of IT, Global Infrastructure Services, at AMD. As part of that refresh, it is implementing HP for Cloud Data Center, including both networking switches and servers based on (of course) AMD Opteron 6200 series microprocessors.
The aim is to provide an internal cloud-based infrastructure in which users are not tied to one server or data centre, but allocated processing power wherever it is available, on demand.
“The three data centres work as one internal cloud grid,” explains Dana. “When the engineers submit a job to the queue, it gets executed where the cloud is available and the result gets sent back to the engineer. But the engineer doesn’t really know where it gets executed.”
So far, says Dana, it has achieved energy efficiency gains of between 30% and 40%. “What we used to run in a two megawatt (MW) environment, we can run in between 1.2-1.4MW,” he says.
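Those percentages follow directly from the megawatt figures Dana quotes, as the short check below shows.

```python
# Checking the 30-40% efficiency gain against the megawatt figures Dana quotes.

old_load_mw = 2.0
new_load_range_mw = (1.2, 1.4)

for new_load in new_load_range_mw:
    saving = (old_load_mw - new_load) / old_load_mw
    print(f"{old_load_mw} MW -> {new_load} MW: {saving:.0%} reduction")

# Output: 2.0 MW -> 1.2 MW: 40% reduction
#         2.0 MW -> 1.4 MW: 30% reduction
```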
Achieving maximum energy efficiency has become even more important due to uncertainty over future power costs, says Ian Brooks, European head of innovation and sustainable computing at HP. That uncertainty is reflected in the shorter power contracts that data centre operators are now signing with energy companies.
“A lot of power contracts have historically been very long term,” says Brooks. “Companies used to sign multi-year contracts for power. At the moment, though, companies are reviewing whether that ties them to one provider for too long a period of time and, if the market is going to move within that time, whether the penalty [for breaking the contract] makes it difficult for them to move.”
Medway Council shares services
It is not just high-tech organisations that are looking more closely at their data centres’ energy consumption and all-round efficiency, and examining ways in which they can be better run.
Even among UK local authorities there is immense pressure to do more with less – especially in terms of budgets. “All local authorities have targets for reducing their carbon footprints and that needs to be taken into consideration as well – although for us cost is really the prime consideration,” says Moira Bragg, head of ICT at Medway Council.
Bragg is involved in an innovative IT and data centre consolidation project with 14 other local authorities across the county, as well as Kent Fire & Rescue and Kent Police. The project has consolidated operations into just two data centres, one at Gun Wharf in Chatham and the other at County Hall in Maidstone.
The participants include Kent County Council, which has located 143 file and print services at Gun Wharf, saving it £500,000 per year, according to an assessment by IBM’s Zodiac green consultancy, and cutting the carbon footprint associated with the activity by 90% in the process. Medway has also raised the air-conditioning set point within the data centres from 18°C to 23°C and installed energy-efficient LED lighting, although the budget did not stretch to motion-activated light switches.
However, one particular shortcoming of the arrangement is the way in which power is paid for. At Medway Council, the power bill is negotiated and paid centrally, so hosted organisations cannot be charged on the basis of their precise consumption. “When we look at the hosting agreements and the charging mechanism, we charge in terms of the maximum power that the rack is likely to consume,” says Bragg.
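The difference between charging on a rack’s maximum likely draw and charging on metered consumption is easy to illustrate. In the sketch below, only the charge-by-maximum-power model comes from Bragg’s description; the rack rating, utilisation and tariff are hypothetical figures chosen for the example.

```python
# Illustration of two charging models: billing a hosted organisation for a
# rack's maximum likely draw versus billing for metered consumption.
# The rack rating, utilisation and tariff are hypothetical examples; only the
# charge-by-maximum-power model is described in the article.

HOURS_PER_YEAR = 24 * 365
TARIFF_PER_KWH = 0.12          # hypothetical price per kWh, in pounds

rack_max_kw = 6.0              # rack's rated maximum draw (hypothetical)
average_utilisation = 0.45     # actual average draw as a share of the maximum (hypothetical)

# Model 1: charge as if the rack always ran at its rated maximum.
flat_charge = rack_max_kw * HOURS_PER_YEAR * TARIFF_PER_KWH

# Model 2: charge for what the rack actually consumed.
metered_charge = rack_max_kw * average_utilisation * HOURS_PER_YEAR * TARIFF_PER_KWH

print(f"Charged on maximum rack power: £{flat_charge:,.0f} per year")
print(f"Charged on metered consumption: £{metered_charge:,.0f} per year")
print(f"Difference: £{flat_charge - metered_charge:,.0f} per year")
```

Under the flat model, a hosted organisation that reduces its actual consumption sees no change in its bill, which is arguably why Bragg regards the arrangement as a shortcoming.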
The fact that the Gun Wharf data centre is located in a council office building posed other challenges to cost reduction. For example, using the exhaust heat from the servers to heat office space in the building would have required the support and coordination of other departments, which would have slowed the implementation process, Bragg explains.
The use of heat output is the main area that Bragg and Peter Good, infrastructure manager at the Council, might revisit were they to run the project again.
“I feel that we could and should have explored the use of the output heat more. I think that it’s far too easy to say that we haven’t got the money or the time, but in hindsight it might have been more beneficial than people realised at the time,” says Good.
For example, if a data centre were located next to a leisure centre, its exhaust heat could be used to help warm the swimming pool, saving the local authority tens of thousands of pounds in the process.
Indeed, there are countless measures that can be taken to save power and cut costs in data centre design. But as Facebook, NREL, AMD and Medway Council all demonstrate, making the biggest savings requires plenty of planning and forethought before a project starts.