The data centre is in danger of being swamped by a deluge of data, warns John Kelley, CEO of storage switch maker McData. But the next two years look set to provide a testing ground for technologies that will transform storage capabilities, he adds.
New technologies such as voice-over-IP (VoIP) telephony and radio frequency identification (RFID) tags are going to add to the vast volumes of data being created. At the same time, organisations are increasingly taking advantage of rich-media content, such as video messaging files, so not only is more data moving around the enterprise, but the files themselves are getting bigger.
Furthermore, legislators across the globe are increasing the pressure on businesses to be accountable for the data they generate and keep. This is putting an unbearable strain on storage networks, notes Kelley. Simply buying more storage to meet demand is no longer an option: "IT budgets aren't matching the requirements being placed on them. That means the pressure is building on the IT people to find solutions," he says.
One technology solution that has been pushed as a panacea to the problems besetting the data centre is storage virtualisation. "We're at a tipping point. Throughout 2006-07, if we can get it right, this is really going to take off," says Kelley.
The ultimate goal of storage virtualisation is to provide a software layer that sits on top of storage area networks (SANs), ensuring that the storage devices appear to connected applications as a single, expandable resource.
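To make that idea concrete, here is a minimal sketch (in Python, with hypothetical names; McData's actual implementation is not described in the article) of the kind of mapping such a layer maintains: applications see one expandable volume, while its extents are spread across several physical arrays on the SAN.

```python
# Hypothetical illustration of a storage virtualisation layer:
# one logical volume presented to applications, backed by extents
# allocated across multiple physical arrays in the SAN pool.

class PhysicalArray:
    def __init__(self, name, capacity_gb):
        self.name = name
        self.capacity_gb = capacity_gb
        self.used_gb = 0

class VirtualVolume:
    """The single, expandable volume the application sees."""
    def __init__(self, arrays):
        self.arrays = arrays      # pool of physical devices behind the volume
        self.extent_map = []      # ordered list of (array name, size_gb) extents

    def expand(self, size_gb):
        """Grow the volume by drawing free capacity from any array in the pool."""
        remaining = size_gb
        for array in self.arrays:
            free = array.capacity_gb - array.used_gb
            take = min(free, remaining)
            if take > 0:
                array.used_gb += take
                self.extent_map.append((array.name, take))
                remaining -= take
            if remaining == 0:
                return
        raise RuntimeError("pool exhausted: add another array to the SAN")

# The application only ever addresses 'volume'; new arrays can join the
# pool transparently, driving up utilisation of each individual device.
pool = [PhysicalArray("array-a", 500), PhysicalArray("array-b", 500)]
volume = VirtualVolume(pool)
volume.expand(600)                # spans both arrays behind the scenes
print(volume.extent_map)          # [('array-a', 500), ('array-b', 100)]
```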
This can help drive up the utilisation rates of individual devices on the SAN, ensuring that better use is made of storage investments. "CIOs are looking for admin benefits, centralised management controls and, importantly, an understanding that they are not over-buying. And they are also looking for a return on investment within a year. To deliver that is a challenge for the industry," says Kelley.
Storage virtualisation will also dramatically reduce the costs associated with remote data centres, through improving communications between sites. "When you look at communications costs as a total proportion of data centre costs, you quickly realise that by improving bandwidth utilisation you can make massive savings," says Kelley.
But Kelley recognises that virtualisation is a difficult concept to deliver: "It's been a vision ever since I joined McData in 2001. But it is extremely complicated to engineer, it requires carrying out multiple complex operations at high speed, and there is a challenge to make that simple to manage.
"But unless it is simple to use, businesses will not be interested," he adds. Storage virtualisation is further complicated by the lack of interoperability between different vendors' storage devices and networks.
In some ways, the idea of virtualisation is anathema to storage vendors because it would result in them selling fewer products, says Kelley. Their response has been to use virtualisation as a means of 'stealing' data from other arrays, he adds. "It becomes a means of migrating other people's cookies into their own domains."
But despite the challenge of introducing storage virtualisation, the payback is too great to ignore, says Kelley.
"Virtualisation has been held back by the technical complexities, but it is beginning to show signs of matching its initial promise," he says. "Once people start seeing the benefits, if early adopters show it is reliable, we will see it being adopted really quickly."
And although the debate over where the virtualisation component should sit will continue for some time yet, users should not fear making the wrong choice at this stage, says Kelley. "It's likely that virtualisation modules in future will be pretty agnostic. I think the industry is coming round to the belief that the network should operate without too much interference from proprietary standards."