Paul Otellini, Intel’s president and CEO, doesn’t think so: “The advent of new operating systems, more lifelike games, online video and high definition video continues to drive the need for more processing power.”
But lifelike games and high-definition video are not pressing concerns for most IT decision makers. Here, users report more immediate worries, such as the cost of powering data centres and the expense of running enough air-conditioning to remove the excess heat being generated.
Breaking this power and heat deadlock has become a crucial element of the new dynamics operating in the server market. And vendors such as Sun Microsystems are fast realising that servers with ‘green credentials’ have a strong economic advantage: its UltraSPARC T1 processor, for example – which packs up to eight cores onto a single chip yet produces less heat than a standard processor – is the fastest-growing product in the company’s history.
“In general, even when designing x86 [server] products using AMD Opteron chips, we design custom power supplies, fans and fan controls around them,” says David Douglas, vice president for eco responsibility at Sun. And a lot of the design is about driving utilisation up on the systems because the fans will always be spinning when the power is on, he says.
Many of the biggest changes in server design – and those with a direct environmental impact – are taking place at the processor level. “In the last 20 years the modus operandi was to add more transistors and more megahertz to achieve greater performance,” says John Fruehe, worldwide business development manager for Opteron at AMD. “But multiple-core processing is changing that. There is now a shift from adding brute force [to the chip] to one of enabling increased efficiency.”
The efficiency of dual- and multi-core processors stems from two factors: each core runs at a lower clock speed than an equivalent single-core processor, producing less heat and drawing less power; yet with two or more cores working in parallel, the chip’s overall performance can roughly double.
This allows customers to build new or expand existing data centres knowing that an increase in performance will not require an increase in power, says Fruehe. The move towards quad-core processing, with both Intel and AMD set to release new chips, will extend these efficiency gains further.
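The arithmetic behind this shift is worth spelling out. Dynamic power in CMOS logic scales roughly with switched capacitance, the square of the supply voltage and the clock frequency. The worked figures below – a 20% clock reduction and a 15% voltage reduction per core – are illustrative assumptions for the sake of the example, not vendor specifications:

```latex
% Dynamic power of CMOS logic: C = switched capacitance,
% V = supply voltage, f = clock frequency, T = throughput
P_{\mathrm{dyn}} \approx C V^{2} f

% One core at (V, f) versus two cores each at (0.85V, 0.8f):
\frac{P_{\mathrm{dual}}}{P_{\mathrm{single}}}
  = \frac{2 \, C \, (0.85V)^{2} \, (0.8f)}{C V^{2} f} \approx 1.16
\qquad
\frac{T_{\mathrm{dual}}}{T_{\mathrm{single}}} \approx 2 \times 0.8 = 1.6
```

Under ideal parallel scaling, that is roughly 60% more throughput for about 16% more power; real workloads scale less perfectly, but the direction of the trade-off is the same.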
Virtual Gains
Such is the sea change in the industry that efficiency has replaced price as the top requirement when selecting new servers, says Peter Critchley, strategy director at technology consultancy group Morse. “What organisations are looking for is effectiveness and efficiency from the server estate,” he says.
One area where these efficiency gains are being found, particularly in the x86 server market, is virtualisation: low levels of server utilisation – typically only 5% to 20% – are prompting organisations to consolidate workloads onto fewer, better-used machines.
Virtualisation delivers benefits by creating multiple logical servers on one physical machine. This allows businesses to use available server capacity more efficiently by having one server host multiple operating environments, such as Windows or Linux. And it aids high availability and disaster recovery functions by eliminating the need to host identical hardware and software at backup sites.
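To see why those low utilisation figures matter, consider a rough consolidation calculation. The sketch below is illustrative only: the server count, the 10% average utilisation and the 60% consolidation target are assumptions chosen for the example, not figures from the article or any vendor.

```python
import math

def hosts_needed(servers: int, avg_util: float, target_util: float) -> int:
    """Number of equally sized virtualised hosts needed to carry the same
    aggregate load at a higher target utilisation."""
    total_load = servers * avg_util          # aggregate work, in 'server units'
    return math.ceil(total_load / target_util)

# 100 standalone servers idling at 10% average utilisation (assumed figures),
# consolidated onto hosts driven to 60% utilisation
before = 100
after = hosts_needed(before, avg_util=0.10, target_util=0.60)
print(f"{before} servers -> {after} virtualised hosts; "
      f"{before - after} machines' power and cooling saved")
```

At those assumed figures, a hundred lightly loaded machines collapse into fewer than twenty virtualised hosts, which is where the power and cooling savings come from.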
But the trend towards virtualisation is still in its infancy. “The virtualisation market is exciting, but we are still at the early stages of getting to grips with the opportunity that it actually provides,” says Paul Thomalla, European vice president at server company Stratus. “In reality, I do not see huge virtual farms being enacted,” he says.
According to analysts at Forrester Research, one third of the top global 2,000 firms have implemented server virtualisation technologies to varying degrees, while 13% plan to pilot virtualisation within the coming year. Amongst smaller organisations, however, awareness and adoption of virtualisation are typically lower than among their larger counterparts.
At the software level, VMware, a virtual infrastructure technology group, is the outright leader. Server virtualisation has long been a feature of specialised Unix-based servers that support partitioning, says Frank Gillett, an analyst at Forrester Research. “But VMware brought server virtualisation to the vast majority of servers that run x86 processors from AMD and Intel.” Microsoft has followed suit and now offers Virtual Server 2005, as does XenSource, which leads and supports the Xen open source server virtualisation project, he adds.
The trend towards virtualisation is strongly linked with the move towards multi-core processing, and revenue growth at many vendors has been tempered as x86 server virtualisation technology begins to take hold (see table). But the server market is being kept buoyant by strong growth in blade technologies, and both x86 servers and x86 blade servers continue to be “the systems of choice for growing the front and middle tiers of the Web infrastructure,” says Jeffrey Hewitt, analyst at market watcher Gartner.
The Blade Revolution
Increasingly, blade servers are being viewed as an alternative to traditional enterprise servers because they offer data centre managers a simple way to add capacity incrementally. This was to prove vital at marketing services company Dunnhumby, which is best known for its work on supermarket Tesco’s loyalty card.
“Our [data] facility was built in blades because the nature of our business is very unpredictable,” says Andrew Jordan, group data solutions director at Dunnhumby. “You cannot build a huge environment just in case these peaks arrive – we would be sitting with 70% to 80% unused [server] capacity. With blades you can very incrementally take on extra capacity as and when you need it.”
But blades are not without problems. The density of blades being packed into racks has placed greater demands on getting enough power into the server racks, and then dispersing the heat that is subsequently generated. And while they do support many applications, blades for large databases or business intelligence applications are virtually non-existent.
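The scale of that power and heat problem is easy to see with back-of-envelope numbers. The per-blade draw, rack density and cooling overhead below are assumptions chosen for illustration, not measurements of any particular product:

```python
BLADES_PER_RACK = 60      # densely packed blade rack (assumed)
WATTS_PER_BLADE = 250     # typical dual-socket blade draw (assumed)
COOLING_OVERHEAD = 0.7    # watts of cooling per watt of IT load (assumed)

it_load_kw = BLADES_PER_RACK * WATTS_PER_BLADE / 1000
cooling_kw = it_load_kw * COOLING_OVERHEAD

print(f"IT load per rack:      {it_load_kw:.1f} kW")
print(f"Cooling load per rack: {cooling_kw:.1f} kW")
print(f"Total draw per rack:   {it_load_kw + cooling_kw:.1f} kW")
```

Many older data centres were laid out to deliver only a few kilowatts per rack, which is why power delivery and heat removal, rather than floor space, tend to be the binding constraints on dense blade deployments.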
At Dunnhumby, for example, the limitations of blades have also become apparent. “[Blade technology] helps get us to a certain place,” says Jordan. “But it doesn’t get us to where I want to be, which is to make [Dunnhumby] a true capacity-on-demand environment.”
Jordan adds that in moving towards this capacity on-demand model, the ‘cheapness’ of technology is no longer a deciding factor in the purchasing decision. “If somebody said to me you can have this server cheap for $6 million or one that matches your business model for $10 million, I would always go for the latter,” he says.
His comment reflects how radically the dynamics of the server market have changed. The simple concept of ‘bang for bucks’ is no longer relevant. Furthermore, continued adherence to Moore’s Law will soon produce transistors so tiny that practical physical limits are reached; a replacement for silicon will be needed.
Once computing enters the atomic level, and quantum physics becomes a factor, all bets are off for how performance will be linked to price.
Slugging it out
The microprocessor lies at the core of any server’s architecture, and the market is dominated by two major manufacturers: Intel and AMD. Intel has traditionally been the outright market leader – until three years ago, it held almost 100% of the lucrative x86 server chip market.
Today that figure has changed significantly. AMD is no longer regarded as Intel’s distant rival: it has clawed back 26% of the server market in just three years and its high-end Opteron chips are now found as standard on Dell, Hewlett-Packard and Sun’s enterprise servers.
AMD, however, still has a long way to go before it can match Intel’s daunting size. Its sales for fiscal year 2005 totalled $5.85 billion; Intel’s sales of $38.8 billion were almost seven times that figure; similarly, net income was $165 million for AMD compared to Intel’s $8.7 billion.
Those vast resources mean Intel’s research and development budget is currently around five times AMD’s $1.14 billion. This has helped Intel get to market first with a new quad-core processor – the Core 2 Extreme – which should start shipping for servers and high-end gaming PCs in November 2006.
But behind the pizzazz of faster, more powerful chips, an intense and bitter battle is underway – one that Hector Ruiz, CEO of AMD, labels a “David and Goliath battle”. The outcome will be critical for both AMD and the “long term health of our economy”, he claims.
Ruiz is pushing regulators to take Intel to task for what he sees as an abuse of its monopoly power. He believes Intel offered rebates and payments to computer manufacturers to ensure exclusive use of Intel chips just as AMD was reaching a crucial point that threatened to break Intel’s natural monopoly.
Intel’s executives, for their part, have strenuously denied these allegations. At the time of writing, the European Commission is investigating these claims.
However, legal disputes aside, AMD has been gaining ground on Intel in the last few years. In May 2006, computer maker Dell – which built its low-cost business on shipping PCs powered by Intel’s processors – agreed to start incorporating AMD’s chips.
And the once-dominant Intel is beginning to show scars from the battle: a 56% fall in its second-quarter profits for 2006 forced CEO Paul Otellini to announce a radical restructuring plan that will see 10,000 jobs disappear by July 2007, in an effort to save upwards of $3 billion.
However, Intel is far from on the ropes. In addition to the Core 2 Extreme announcement, its recently released dual-core Xeon server chips are selling fast: it has shipped over one million of them since their release in October 2005.
And Intel finally shipped its long-delayed dual-core Itanium chip, codenamed Montecito, in July 2006, which looks set to put the poor performance and incompatibility issues of its predecessor in the past. The first servers to incorporate the new chips began shipping in September 2006.