Fluctuations in demand are a fact of life for any business, but they carry expensive consequences.
Most businesses’ data centres are designed to cope with peaks in processing workload, which means that for the long stretches when demand is low, much of that infrastructure sits under-utilised. For the past half decade, industry pundits have pointed to a solution to that inefficiency: utility computing, which would give organisations access to levels of computing power that match their variable needs, charged on a usage-based model.
For some, that is a distant promise; but for others, such as Loyalty Management Services (LMS), operator of the Nectar and other loyalty card schemes, utility computing has been key to service delivery ever since it transferred a large part of its business-critical infrastructure to service provider Savvis.
Fiachra Woodman
Fiachra has been the IT director at Loyalty Management Services since 2003. Prior to this, he held a wide variety of project and operational management positions, working for companies such as Gartmore, Hendersons, Cogent, AMP and Pearl Assurance. He is also a lecturer in IT project management methodology.
As Fiachra Woodman, head of data centre transformation at LMS, says, the move from its traditional infrastructure to a third-party, enterprise-class service was a tough decision. But the ordeal of launching the hugely popular Nectar card certainly made it a compelling one.
Massive early demand caused the in-house systems to buckle, and Woodman had no way of scaling to meet it; as a result, the high-profile launch was soon followed by some equally high-profile criticism. “We decided that we could never allow that to happen again,” says Woodman.
In examining the ‘economics of redundancy’, Woodman began to question some long-held assumptions: “We [originally] built redundancy into our architecture by buying extra servers.” It’s a premise that most IT organisations are willing to accept, but LMS questioned whether it should “always buy extra”.
Woodman began to explore the possibility of utility computing, settling on a service from Savvis that allowed LMS to hand over responsibility for its web, application and database servers. This instantly reduced the cost of redundancy because of the type of blade server Savvis uses. “You don’t need redundancy in every tier,” he explains, as blades can be quickly provisioned into whichever tier needs them.
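The economics Woodman describes are easy to sketch. Under the traditional model, each tier carries its own dedicated spare; with interchangeable blades, one small shared pool can cover every tier. The short Python illustration below uses invented server counts, not LMS’s actual figures:

    # Illustrative comparison of two redundancy models (all numbers invented).
    # Model A: each tier keeps its own dedicated spare server (N+1 per tier).
    # Model B: the tiers share a small pool of generic blades that can be
    # provisioned into whichever tier loses a server.

    tiers = {"web": 4, "application": 3, "database": 2}  # active servers per tier

    per_tier_spares = len(tiers)  # Model A: one dedicated spare per tier
    shared_pool = 2               # Model B: pool sized for two concurrent failures

    print(f"Active servers:            {sum(tiers.values())}")
    print(f"Spares, per-tier model:    {per_tier_spares}")
    print(f"Spares, shared-pool model: {shared_pool}")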
LMS is also able to add capacity to meet seasonal demand and, importantly, reduce its requirements in slack periods. “That has really improved my relationship with the CFO,” says Woodman. The model brings greater flexibility too: LMS can now easily trial new marketing campaigns without the overhead of upfront investment in infrastructure.
The move to a “real-time infrastructure” has given LMS deeper insight into its existing business processes. LMS uses software agents throughout its infrastructure to observe transaction performance, ensuring that critical transactions have suitable capacity. This level of granularity gives IT the “data to engage with the business, telling them stuff they can relate to,” he says.
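Woodman gives no technical detail about those agents, but the pattern is a familiar one: sample the latency of each transaction type and flag any critical transaction whose performance suggests it is short of capacity. A minimal Python sketch, with hypothetical transaction names, latencies and thresholds:

    import statistics
    from collections import defaultdict

    # Hypothetical latency samples (milliseconds) as agents might report them.
    samples = defaultdict(list)
    samples["points.redeem"] += [120, 135, 410, 390, 425]
    samples["points.balance"] += [40, 45, 38, 52, 47]
    samples["report.monthly"] += [900, 950, 880]

    CRITICAL = {"points.redeem", "points.balance"}  # business-critical transactions
    SLA_MS = 250  # assumed latency target for critical transactions

    for name, latencies in samples.items():
        mean_ms = statistics.mean(latencies)
        if name in CRITICAL and mean_ms > SLA_MS:
            print(f"{name}: mean {mean_ms:.0f} ms exceeds {SLA_MS} ms, needs capacity")
        else:
            print(f"{name}: mean {mean_ms:.0f} ms within target")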
Having started by moving parts of its web-facing infrastructure over to the utility model, LMS plans to continue that migration to the point where its configuration management will be dynamic. Currently this is a static process, but Woodman hopes that by 2008 LMS will be able to use application performance to manage its capacity requirements, so it really does pay only for the computing resources it needs.
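Making that dynamic amounts to closing the loop: the performance data the agents already collect would drive provisioning decisions directly. A minimal sketch of such a control loop, with invented thresholds and the actual provisioning calls left out:

    # Sketch of performance-driven capacity management (thresholds invented).
    SCALE_UP_MS = 250    # add a blade when mean latency exceeds this
    SCALE_DOWN_MS = 80   # release a blade, and its cost, below this

    def adjust_capacity(tier: str, mean_latency_ms: float, blades: int) -> int:
        """Return the new blade count for a tier, given its observed latency."""
        if mean_latency_ms > SCALE_UP_MS:
            return blades + 1   # provision another blade from the shared pool
        if mean_latency_ms < SCALE_DOWN_MS and blades > 1:
            return blades - 1   # hand a blade back; stop paying for it
        return blades

    print(adjust_capacity("web", 310.0, 4))  # under pressure -> 5 blades
    print(adjust_capacity("web", 55.0, 4))   # slack period -> 3 blades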
But he acknowledges that utility computing may not yet be suitable for all organisations. LMS has a large proportion of bespoke applications, relieving it of worries over software licensing terms, which could otherwise be prohibitive under a utility model. “Some vendors need to come into line [on their licensing terms]. Until they do, the utility model won’t come into force.”