One of Texan billionaire Michael Dell's favourite mottos is "EFP" – every freaking penny – and it is religiously observed at a company that has built its global reputation on delivering high-value products at economical prices. It may make life hot for Dell executives like global VP of marketing Paul Gottsegen, but it means that the company's customers need not sweat about the size of their desktop procurement bills or, in future, their data centre management costs.
Indeed, according to Gottsegen, the same commodity Intel technology that Dell has traditionally deployed to deliver the best bang-for-buck desktop performance is now ready to deliver similar benefits in the data centre. Although companies are still investing in big computers to run large jobs, the hearts of these machines are no longer built around expensive proprietary technology: "If you peel back the onion layers, there is a noticeable trend towards utilizing very low-end computers for high performance projects," says Gottsegen.
Building systems this way not only saves money on parts, it also introduces a new element of simplicity to a domain that has become increasingly complex over the years – and complexity means cost and inflexibility. This is why vendors that have persisted in squeezing incremental performance hikes from expensive, proprietary 8-way machines are frequently beaten to market by vendors like Dell, which can now deliver supercomputer performance at commodity price points with systems built from "industry standard" components.
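To make the scale-out idea concrete, here is a minimal sketch in Python, with a process pool standing in for a hypothetical cluster of cheap two-way nodes (the node count and workload are invented for illustration): a single large job is carved into independent chunks, farmed out, and the partial results combined.

    from multiprocessing import Pool

    def partial_sum(chunk):
        # The work one cheap node would perform: process its slice of the job.
        return sum(x * x for x in chunk)

    if __name__ == "__main__":
        data = range(1_000_000)                   # one large job
        nodes = 8                                 # hypothetical: eight small commodity boxes
        size = len(data) // nodes
        chunks = [data[i * size:(i + 1) * size] for i in range(nodes)]
        with Pool(nodes) as pool:                 # the pool stands in for the cluster
            total = sum(pool.map(partial_sum, chunks))
        print(total)

The point is the shape of the solution: when a job decomposes this way, throughput grows by adding inexpensive nodes to the pool rather than by buying a bigger proprietary machine.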
This trend is not restricted to the server market either, says Gottsegen. Storage economics are also being transformed as vendors move away from Unix to systems that run Windows and Linux, bringing robust storage area network environments within the reach of SMBs for the first time.
Today, perhaps the only clear advantage that proprietary systems vendors have over their standards-based rivals is the ability to weave products together within a sophisticated, increasingly automated management regime. But Gottsegen argues that even this advantage is quickly eroding.
Standards bodies such as the Distributed Management Task Force (DMTF), with its recently released Systems Management Architecture for Server Hardware (SMASH) specification, promise to deliver a unified management view of a heterogeneous world – allowing customers to mix and match infrastructure systems from multiple vendors, and still manage them all from a single command line.
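To give a flavour of that single command line: under SMASH, every conforming server is driven through the same small vocabulary of verbs and target addresses. The session below is a hypothetical illustration rather than output from any particular machine, but the verbs and the /system1 target path follow the SMASH Command Line Protocol specification, so commands like these should behave identically on any compliant server, whoever built it.

    show /system1     (list the server's properties, sub-components and supported verbs)
    start /system1    (power the server on)
    stop /system1     (power the server off)
    reset /system1    (hard-reset the server)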
Evolving the data centre towards this industry standard approach is something Dell has direct experience of. It recently transferred the order processing system at its European headquarters from proprietary Unix to its own servers, and it has built its website, dell.com, on multiple clusters of two-way servers to process the sales transactions behind the 130 million desktops it sells every year.
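The same pattern carries over to a transactional web tier: because each sales order is independent, a front end can simply rotate incoming requests across a pool of identical small servers, and capacity grows by adding boxes to the pool. The sketch below, again in Python, uses hypothetical host names and is not a description of Dell's actual software.

    from itertools import cycle

    # Hypothetical pool of identical commodity two-way servers.
    pool = cycle(["web-01", "web-02", "web-03", "web-04"])

    def dispatch(order_id):
        # Route one sales transaction to the next server in the rotation.
        server = next(pool)
        return f"order {order_id} -> {server}"

    for order_id in range(6):
        print(dispatch(order_id))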
"We don't expect every mainframe in the world to be unplugged tomorrow," says Gottsegen, adding that a lot of legacy applications will be around for a very long time. Customers are increasingly looking towards new application development as the source of competitive advantage and development. "What they need is the confidence to step into this brave new world if they haven't ever before deployed an application on industry standards," says Gottsegen.
At Dell, the future of the data centre resides in mobilising smaller, cheaper servers and tying them together to service mission-critical applications. In the industry standards world, competitive pricing and performance are open to all, says Gottsegen, but it's imperative that vendors work together to ensure seamless integration.