The revolutionary impact of virtualisation on business computing has only just begun. To date, virtualisation projects have been concentrated in the area of server consolidation/optimisation and storage networking.
But there are other areas where virtualisation can play an equally revolutionary role, not least in disaster recovery. Already, companies like PlateSpin (now a division of Novell) and Double-Take Software offer backup and recovery appliances that use virtualisation to reduce the amount of hardware required to provide a redundant, emergency IT infrastructure.
Now, a disaster recovery service provider targeting the mid-market says that its virtual platform can make IT safety precautions even more affordable.
Plan B provides its customers with an appliance that takes a ‘snapshot’ of their critical IT infrastructure, updated and tested daily. Automated testing is a critical component of the company’s offering, explains operations director Tim Dunger, as it is just the kind of mundane task that can fall by the wayside at an internal IT department.
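Plan B has not published the internals of its appliance, but the snapshot-and-verify cycle it describes follows a familiar pattern. Purely as an illustration, a minimal Python sketch of that pattern might look like this (all function names and data here are hypothetical, not Plan B's actual implementation):

```python
import hashlib
import datetime

def take_snapshot(data: bytes) -> dict:
    """Capture a point-in-time copy of a system image, with a checksum
    recorded so the copy can be verified later."""
    return {
        "taken_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "payload": data,
        "checksum": hashlib.sha256(data).hexdigest(),
    }

def verify_snapshot(snapshot: dict) -> bool:
    """The automated daily test: confirm the stored copy still matches
    the checksum taken at snapshot time."""
    return hashlib.sha256(snapshot["payload"]).hexdigest() == snapshot["checksum"]

# A scheduler, not an administrator, runs this every day -- which is the
# point Dunger makes: routine testing is exactly the task that slips
# when left to a busy internal IT department.
snapshot = take_snapshot(b"critical-system-image")
print(verify_snapshot(snapshot))
```

The value of automating the check is that a silently corrupted or incomplete backup is caught the day it happens, rather than discovered during an actual emergency.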
In the event of an emergency, a virtualised duplicate of the customer’s system is quickly spun up in Plan B’s data centre, where it can be accessed remotely.
Dunger acknowledges that there are some shortcomings to the model. “We work on the basis that not all our customers will have a problem at the same time,” he admits, although the systems of customers located near each other are always backed up in separate partitions of its data centre.
But while that may make the model unsuitable for the largest companies, judicious use of virtualisation has enabled Plan B to cut the cost of disaster recovery: its service starts at £199 a month.
That makes it available to a new market. “We saw a need in the mid-market from companies that recognise their business is going to go under if their IT systems go down, but find it difficult to budget for a duplicate infrastructure.”
That said, there is still some room for attitudes in the mid-market to mature, he adds. “Companies tend to think they need to plan for biblical disasters,” he explains, “but far more common are things like human error, hardware failure, or a hacker or disgruntled employee deliberately sabotaging their IT systems.”