In recent years, ‘agility’ has been a watchword for the forward-looking CIO. Almost every manager in IT has, at some point, had to explain to a business manager why the systems they have expensively installed will limit, rather than open up, new opportunities. And almost every CEO has, at some point, been forced to delay a new business venture while new IT systems are introduced, tested and scaled up.
Service-oriented architecture (SOA) forms a large part of the IT industry’s long-running, in-depth, and much-vaunted response to this. Applications have been recast as services, which can be plugged in and interleaved to create powerful new capabilities and automated processes without redesigning or destabilising the entire software applications stack.
But SOA alone is just the most visible part of the story. Flexible applications need to be supported by an underlying, flexible platform – a fabric of intelligently controlled computing devices that can be managed, renewed or extended as the applications demand. In the past, this was crudely known as the hardware layer, but today, with its responsive network of tightly managed servers, storage and network devices, it is being called a ‘service-oriented infrastructure’ (SOI) or a ‘real-time infrastructure’ (RTI).
The SOI must ensure that if a new service is needed, or an existing one needs changing, processing power and storage are in place to support it, and that quality of service, availability, compliance, reliability and security are assured. While it has always been the job of data centre or infrastructure managers to ensure that systems are optimised, balanced and efficiently deployed, increasingly the infrastructure layer will also have to be managed for optimal use of physical space and even of electricity.
Hardware management has always involved an uneasy mix of advanced software tools – mostly for monitoring and load balancing – and manual planning and operations. But the modern IT infrastructure is too large, complex and sensitive to rely on manual intervention.
This has added a new imperative to infrastructure management. Workloads must be moved around and optimised according to demand – and this must be done dynamically, in real time.
Estate management
None of the technologies and practices involved in building a flexible infrastructure is new – consolidation, virtualisation, integration and automation are well-understood if difficult processes, and each is supported by a range of software tools that are enjoying booming sales. Nevertheless, analysts such as Gartner say that most businesses are still some way from putting in place the dynamic, real-time infrastructure that will be required.
The first stage involves asset and configuration management – understanding what is in place, what it does and how it is configured. Effective asset and service management is necessary both for automated, dynamic management and for effective virtualisation.
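To make the idea concrete, the sketch below shows what a minimal configuration-item inventory might look like in Python. The record fields and example data are illustrative assumptions, not the schema of any particular asset-management product.

```python
# A minimal sketch of a configuration-item inventory (CMDB-style records).
# Field names and example data are illustrative assumptions only.
from dataclasses import dataclass, field

@dataclass
class ConfigurationItem:
    name: str            # hostname or asset tag
    role: str            # what the asset does, e.g. "web server"
    location: str        # data centre / rack
    attributes: dict = field(default_factory=dict)  # how it is configured

inventory = [
    ConfigurationItem("web01", "web server", "DC1/rack 12", {"cpu_cores": 8, "os": "Linux"}),
    ConfigurationItem("db01", "database server", "DC1/rack 14", {"cpu_cores": 16, "os": "Linux"}),
]

def find_by_role(items, role):
    """Answer the basic asset questions: what is in place, and what does it do?"""
    return [ci for ci in items if ci.role == role]

print([ci.name for ci in find_by_role(inventory, "web server")])
```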
Since the late 1990s, the rapid and uncontrolled proliferation of servers and applications has resulted in overcapacity, poor utilisation, technical incompatibilities, overcrowded data centres, and architectural and maintenance problems. Most organisations have a poor understanding of what they own and how it is used. The emergence of ITIL (the IT Infrastructure Library), helped by the release of Version 3.0 in 2007, is improving this. Unsurprisingly, those organisations with a good knowledge of what they have are better able to respond to both short-term problems and strategic change.
Once this is in place, the second stage is usually consolidation – eliminating unnecessary servers and consolidating smaller, older servers onto more powerful, modern ones. Standardising on one server or application emerged as the most effective strategy in the Effective IT 2008 survey.
If SOA was the hottest term in IT in 2006, then virtualisation was the buzzword of 2007, and it is likely to remain so in 2008. Certainly, the stellar IPO of virtualisation software supplier VMware, its continuing financial performance, and the purchase of leading open-source virtualisation supplier XenSource by Citrix all demonstrate the huge interest in virtualisation among enterprise IT – and research indicates the market is still in its infancy.
At its simplest, a thin layer of software – the hypervisor – is installed between a computer system's raw hardware resources and the systems software ‘stacked’ above it, starting with the operating system. The power of virtualisation comes from its ability to make optimal use of the underlying hardware by running multiple virtual machines on a single physical machine. Using the same technique, it can also be used to treat dozens of computers as one big machine.
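A feel for this layer can be had from the hypervisor's management interface. The sketch below is a minimal example that assumes a Linux host running the KVM/QEMU hypervisor with the libvirt Python bindings installed; the connection URI is the library's conventional local default.

```python
# A minimal sketch of querying a hypervisor for its virtual machines.
# Assumes a Linux host running KVM/QEMU with the libvirt Python bindings.
import libvirt

conn = libvirt.open("qemu:///system")   # connect to the local hypervisor
try:
    for dom in conn.listAllDomains():   # every VM defined on this host
        state, max_mem_kib, mem_kib, vcpus, _cpu_time = dom.info()
        print(f"{dom.name()}: active={bool(dom.isActive())}, "
              f"vcpus={vcpus}, memory={mem_kib // 1024} MiB")
finally:
    conn.close()
```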
Virtualisation has repeatedly emerged as one of the most powerful technologies in modern IT. Because the hypervisor masks the underlying hardware from the operating software above, it also makes the deployment and reconfiguration of applications far easier – dramatically reducing the time and complexity of deploying new services.
Although not all applications are suitable for virtualisation (high-I/O applications, for example, may require dedicated hardware), virtualisation is now a vital prerequisite for the dynamic, responsive architecture. Apart from faster software and hardware management, businesses can also benefit from much higher levels of server utilisation (up from perhaps 20% to 50% or higher) and a better approach to business continuity, since services spread across multiple devices reduce the impact of any single device failure.
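The utilisation gain translates directly into fewer physical machines, as the back-of-the-envelope arithmetic below suggests. It uses the figures quoted above (roughly 20% utilisation before virtualisation, 50% after); the estate size and the assumption of equally sized hosts are illustrative.

```python
# Rough consolidation arithmetic based on the utilisation figures above.
# The estate size and equal host sizes are illustrative assumptions.
import math

physical_servers = 100          # dedicated servers, one application each
avg_utilisation_before = 0.20   # typical pre-virtualisation utilisation
target_utilisation = 0.50       # realistic target on virtualised hosts

total_work = physical_servers * avg_utilisation_before      # in whole-server units
hosts_needed = math.ceil(total_work / target_utilisation)   # hosts after consolidation

print(f"Workload equivalent to {total_work:.0f} fully used servers")
print(f"Virtualised hosts needed at {target_utilisation:.0%} utilisation: {hosts_needed}")
```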
Another important area in the RTI is the use of automated operations management software – which, at its most advanced, is sometimes called ‘autonomic management’.
The goal of operations management, especially as it is now promoted by IBM, Hewlett-Packard (with its 2007 Opsware acquisition), Computer Associates and, increasingly, networking equipment giant Cisco, is to enable complex, distributed IT operations to be largely self-managing. This means that systems are continually monitored, problems are identified in advance, analysed and, where possible, resolved without human intervention. Where intervention is required, it is handled routinely, without downtime or staff overtime.
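The sketch below illustrates the monitor / analyse / remediate loop that sits behind such ‘self-managing’ operations. The threshold, the simulated probe and the remedial action are placeholders for illustration, not any vendor's actual interfaces.

```python
# A minimal sketch of an autonomic-style management loop.
# Threshold, probe and remedial action are illustrative placeholders.
import random
import time

CPU_ALERT_THRESHOLD = 0.90      # flag hosts running hot

def read_cpu_load(host):
    """Stand-in for a real monitoring probe (agent, SNMP, IPMI and so on)."""
    return random.random()      # simulated load between 0 and 1

def migrate_workload(host):
    """Stand-in for an automated remedial action such as live migration."""
    print(f"{host}: load above threshold, rebalancing without operator intervention")

def management_loop(hosts, cycles=3, interval_seconds=1):
    # Continuous in practice; bounded here so the demo terminates.
    for _ in range(cycles):
        for host in hosts:
            if read_cpu_load(host) > CPU_ALERT_THRESHOLD:
                migrate_workload(host)      # resolve automatically where possible
        time.sleep(interval_seconds)

management_loop(["host-a", "host-b", "host-c"])
```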
Software based on these tools, and on the built-in management tools provided with the latest blade server and data centre hardware, adds a further dimension: services can be billed based on their use of software, applications, storage and electrical power.
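In principle, such usage-based charging is little more than metering multiplied by a rate card, as the sketch below shows. All rates and usage figures here are invented for illustration.

```python
# A minimal sketch of usage-based charging across metered resources.
# All rates and usage figures are illustrative assumptions.
RATES = {
    "cpu_hours": 0.10,          # currency units per CPU-hour
    "storage_gb_month": 0.15,   # per GB stored per month
    "power_kwh": 0.12,          # per kWh of electricity consumed
}

def monthly_bill(usage):
    """Sum metered usage against the per-unit rates."""
    return sum(RATES[resource] * amount for resource, amount in usage.items())

service_usage = {"cpu_hours": 720, "storage_gb_month": 500, "power_kwh": 300}
print(f"Charge for the service this month: {monthly_bill(service_usage):.2f}")
```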
The ultimate goal of the service-oriented infrastructure is both agility and scalability. New services can be introduced easily, while the underlying hardware can be easily repurposed or scaled up and down at short notice without a planning and budgeting crisis.
The business advantages of dedicated investment in these technologies are, as yet, at an early stage. However, a glimpse of the emerging opportunities can be seen in the new services introduced in 2007 and 2008 by suppliers such as IBM (Blue Cloud), Amazon (Elastic Compute Cloud, or EC2) and others including Savvis, BT and Google.
These so-called utility computing suppliers offer computing on demand both to large customers and to smaller suppliers wishing to develop applications for commercial resale. The emergence of true, grid-like computing-on-demand services is likely to be one of the big talking points of 2008, but it has only been made possible by the creation of the first industrial-scale real-time infrastructures.
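From the customer's side, requesting that capacity is a short API call. The sketch below uses Amazon's current boto3 Python library against EC2 and assumes AWS credentials are already configured; the machine image ID and instance type are placeholders.

```python
# A minimal sketch of requesting compute capacity on demand from EC2.
# Assumes configured AWS credentials; AMI ID and instance type are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-xxxxxxxx",     # placeholder machine image
    InstanceType="t3.micro",    # placeholder instance size
    MinCount=1,
    MaxCount=1,
)

instance_id = response["Instances"][0]["InstanceId"]
print(f"Requested capacity on demand; instance {instance_id} is starting")
```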
Further reading
The new virtual platform: The virtualisation revolution is only just starting. Expect the most radical benefits to appear at the processor level
The virtualisation challenge: How virtualisation is redefining the economics of IT
Hypervision: Virtualisation technology threatens to usurp the classic role of the operating system – and reshape the industry’s competitive landscape in the process