The evolution of the IT function over the last five years has been anything but smooth. Since Y2K and the dot-com demise, there has been intense pressure on IT directors to take cost out of their operations. And those pressures have prompted many to embark on a major overhaul of their infrastructure.
There has been a clear starting point, says Dave Pearson, Grid programme director for Oracle in the EMEA region. "The chief characteristic of most data centres is that they are structured as silos.
"Typically that means hundreds, if not thousands, of servers, commonly from different vendors and running different operating systems, dedicated to specific applications. That bundle of software and hardware has come out of a specific business unit's budget and has been handed over to internal IT or an outside provider to manage to a defined service level."
That situation encourages over-capacity, he says. "You tend to buy more processor power and storage than you need in anticipation of growth, and if you are looking for high levels of availability and resilience then typically you have a standby system."
As a result, most organisations would be proud to show 60% utilisation rates for these systems, Pearson says. Yet he knows customers where utilisation, after initial deployment, runs at a mere 2%.
To address such issues, organisations have employed varying degrees of rationalisation and technology innovation.
Many have been centralising their data centres and re-hosting multiple applications on larger servers and even mainframes. But, as Pearson suggests, that approach to consolidation has some drawbacks: it has scaling limitations, and it creates a single point of failure that again requires expensive excess capacity.
The solution he advocates is to consolidate onto a pool of low-cost commodity servers and to use virtualisation software to provision applications and workload across that 'grid'. "You can exploit grid to make use of your existing assets in a very controlled manner and you can do so in a way that does not compromise your quality of service," he argues. That means organisations do not need as much capacity, he says, highlighting how the clustering capability in Oracle's 10g product enables users to share multiple applications across those resources.
"The key thing with grid is that you are decoupling this fixed link between an application and a server. Once you have done that you find you can make more resources available as your demand increases and you can also get greater resilience because clustering and other grid technologies provide greater levels of failover," he explains.
But some hurdles stand in the way of that kind of architecture. One is business culture. Existing budgeting processes centre on departmental IT spend, so many units believe they own a particular configuration. "They may only use 5% of system capacity most days, but the owners are happy for it to stand semi-idle because once a month they will need 95% of its capacity," he observes.
Changing those attitudes means demonstrating that, by pooling and sharing resources, users get equal quality of service and are guaranteed the resources they need. "In fact you can show improved service because of the increased resilience," says Pearson.
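The arithmetic behind that argument is easy to sketch. In the hypothetical figures below (invented for illustration), three departments each buy for their own peak; because those peaks fall at different times, a shared pool needs roughly half the capacity of the three silos combined.

```python
# Demand (capacity units) for three departments across four periods,
# with each department's spike falling at a different time.
demand = {
    "finance":   [5, 5, 95, 5],     # month-end reporting spike
    "sales":     [10, 80, 10, 10],  # quarter-close spike
    "marketing": [60, 10, 10, 10],  # campaign-launch spike
}

# Siloed: each department buys enough for its own peak.
siloed = sum(max(series) for series in demand.values())

# Pooled: the grid only needs to cover the combined peak.
combined = [sum(period) for period in zip(*demand.values())]
pooled = max(combined)

print(f"siloed capacity needed: {siloed}")   # 95 + 80 + 60 = 235
print(f"pooled capacity needed: {pooled}")   # max(75, 95, 115, 25) = 115
```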
Another issue is charge-back. If no one 'owns' the systems, how do you charge for applications that run across multiple boxes for varying amounts of time? "It's a very challenging problem and one that has not been solved yet," he admits.
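One naive approach, sketched below with invented figures, is to meter each application's consumption across the pool and apportion costs by CPU-hours at a flat rate. As Pearson's comment makes clear, real charge-back schemes have to handle far thornier questions, such as differing service levels and contention for shared capacity.

```python
from collections import defaultdict

# (application, server, cpu_hours) metered over one billing period.
usage = [
    ("payroll", "node1", 120.0),
    ("payroll", "node3", 40.0),   # hours accrued after a failover
    ("crm",     "node2", 200.0),
    ("crm",     "node1", 60.0),
]

cost_per_cpu_hour = 0.35  # hypothetical flat rate across the pool

# Apportion each box's cost to applications by consumed CPU-hours.
bill = defaultdict(float)
for app, _server, hours in usage:
    bill[app] += hours * cost_per_cpu_hour

for app, amount in sorted(bill.items()):
    print(f"{app}: {amount:.2f}")
# crm: 91.00
# payroll: 56.00
```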
Pearson thinks IT managers should abandon the 'do-more-with-less' mindset. "The opportunity now is about what new things can be done through grid and service oriented architectures. They really offer the prospect that has been held out by IT departments – and by IT vendors – for the last 30 years: that IT can really add major value to business."