When the University of Stirling issued a tender for a new storage management infrastructure, it was surprised by the overwhelming number of responses. In almost every case, though, the cost of the proposed solutions was prohibitive.
“The problem was that our budget was not up to many of the solutions,” says Brian Bullen, Unix systems specialist at the university.
Like those of many organisations, the university’s IT infrastructure had reached saturation point. Back-up windows were being smashed by data overload, and the university’s Microsoft Windows NT-based file servers were reaching a capacity that could only be topped up by purchasing more servers, thereby compounding the back-up problem.
Such challenges are common to most major organisations across the world. Quite simply, managing ever-increasing volumes of storage is eating large holes in IT budgets at a time when these are frozen or shrinking. As many are finding, solving the underlying problem requires some radical re-architecting — often by networking the storage capacity — a move that involves the kind of upfront investment that many chief financial officers baulk at.
Multipliers
For a start, today’s storage cost profile is multi-dimensional. According to analysts at research group Gartner, organisations can spend several times the initial purchase cost of their storage devices on administration. Gartner calculates that for every $1 spent on mainframe storage, an organisation will spend $3 on management. For Unix-based storage, it suggests that the multiple is seven times, and for Windows NT/2000, the management cost rises to fifteen times. But what lies behind such high overheads, and what can be done to dramatically reduce them?
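Those multipliers make the arithmetic stark. A minimal sketch, using the Gartner ratios quoted above (the $100,000 hardware figure is purely illustrative):

```python
# Gartner's management-cost multipliers: dollars of management spend
# per dollar spent on storage hardware, by platform.
MULTIPLIERS = {"mainframe": 3, "unix": 7, "windows_nt_2000": 15}

def lifetime_cost(hardware_spend, platform):
    """Hardware purchase plus the estimated management overhead."""
    return hardware_spend * (1 + MULTIPLIERS[platform])

for platform in MULTIPLIERS:
    # Illustrative $100,000 hardware spend per platform.
    print(f"{platform}: ${lifetime_cost(100_000, platform):,}")
```

On these ratios, $100,000 of Windows NT/2000 disk implies a total bill of $1.6m once management is counted in.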
At the heart of the problem is the traditional direct attached storage (DAS) model, wherein a server interacts with its dedicated storage resources, either within the same cabinet or directly connected.
As storage volumes have proliferated, four main drawbacks with DAS have emerged, says Steve Duplessie, a senior analyst at sector consultancy the Enterprise Storage Group. First is poor utilisation of storage: each server can only access the storage directly attached to it, and so cannot lend spare capacity to any other device that is becoming overtaxed. Second is the difficulty of scaling up the available resources, and the inevitable downtime associated with adding new storage.
Third, says Duplessie, is reliability. “There is only one way to access the critical information on the storage device in a DAS world. If the server goes down, then there is no other means of accessing the data,” he says.
Finally, there is the question of economies of scale — in both staff and equipment. “Since talented storage personnel are scarce and budgets are tight, networked storage is the primary way to scale people. We estimate that a systems administrator could handle between five and ten times the amount of capacity under management in a networked environment compared to DAS,” says Duplessie.
Many DAS environments have grown large by accident — for example, by the development of an application that has proved more popular than first anticipated. “Organisations buy a system with a server, then find that the application and the database has grown bigger than the [internal] disk that came with the server. What do they do then? They buy a new server and they have the same problem six months later,” says Mark Ellery, business development manager at Hitachi Data Systems (HDS).
Yet if IT managers are to unlock the funds to invest in the new networked storage infrastructures that could help solve some of these problems, they will have to do more than simply quote high cost-of-management figures to a sceptical finance director.
In the current climate, and without waiting for the point to be proved by a series of severe outages, IT managers need to formulate cast-iron return on investment (ROI) and total cost of ownership (TCO) calculations that underscore the urgency of the situation.
Audit and calculate
Before calling in vendors, it is essential that storage decision-makers know where they stand, says Josh Krischer, an enterprise storage analyst at research group Gartner. They need to conduct an internal audit that defines in detail the existing infrastructure, its associated problems and their causes.
The difficulty with such studies is that many of the costs associated with storage can be difficult to quantify. For example, it is easy to quote the list price of a disk array, but far trickier to cost the man-hours needed to manage it and the data it stores.
Furthermore, Duplessie suggests that some of the storage problems that organisations are grappling with may have been generated in-house — either by ad hoc solutions implemented to solve short-term requirements or by incompetence.
The next step is to define the underlying goal. “Is it a reduction in overall storage costs? Increased availability? Better utilisation? Users have to establish the target,” says Krischer.
That requires a set of metrics — covering hardware, maintenance, administration, energy costs, floor space costs, training, co-location of data, and so on — that illustrate how well the storage infrastructure is holding up.
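Rolled up, such an audit amounts to little more than a line-item sum, but each line has to be populated from real internal data. A minimal sketch in which every figure is a hypothetical placeholder:

```python
# Hypothetical annual line items for a storage TCO audit; each value
# is a placeholder to be replaced with figures from the internal audit.
tco_items = {
    "hardware_depreciation": 250_000,
    "maintenance_contracts":  60_000,
    "admin_staff":           180_000,  # e.g. man-hours x hourly rate
    "energy":                 25_000,
    "floor_space":            40_000,
    "training":               15_000,
}

annual_tco = sum(tco_items.values())
print(f"Annual storage TCO: ${annual_tco:,}")
```

The point is less the total than the discipline: any line that cannot be filled in marks a cost the organisation is not currently measuring.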
“They should be able to capture some sort of disk utilisation or even tape utilisation statistics, such as hours of use in a 24-hour cycle or the number of gigabytes backed up over a certain period of time,” says Steve O’Brian, senior product manager at storage area networking device vendor McData.
To aid that process, and to ensure that nothing has been left out, organisations can use the online ROI calculators offered by some vendors, although they should take the resulting figures with a pinch of salt, advises Duplessie.
SAN arise
But while assessing the costs of direct attached storage may be relatively straightforward, the ROI and TCO calculations for networked storage architectures are far from simple.
Currently, most mid-sized and large organisations have a hybrid topology of direct-attached storage, network attached storage and, in some cases, storage area networks, depending on the applications end-users are running and the legacy of their storage infrastructure.
While network attached storage (NAS) is regarded as an easy and cheap way to augment storage capacity, especially for file serving, the real means to that end rests with storage area networks (SANs). By building dedicated backbone networks that treat all storage resources as a single pool of capacity, organisations can expect higher utilisation, lower-cost management and greater flexibility, scalability and availability. At least, that is the claim of vendors and some analysts.
One of the key benefits is a higher utilisation of storage assets, boosting the figure from under 30% in a predominantly DAS environment to more than 70% in a networked environment. “The essential aspect is that if a server needs storage, you can allocate from the reserve. You don’t have to keep a reserve for each one of the servers,” says Krischer.
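The effect of that utilisation shift on purchasing is easy to quantify: the raw capacity an organisation must buy is the data it holds divided by the average utilisation rate. A minimal sketch, using the 30% and 70% figures above and an illustrative 10TB of live data:

```python
def raw_capacity_needed(data_tb, utilisation):
    """Raw capacity to purchase to hold data_tb at a given utilisation."""
    return data_tb / utilisation

# Illustrative 10TB of live data:
das_tb = raw_capacity_needed(10, 0.30)  # DAS at under 30% utilisation
san_tb = raw_capacity_needed(10, 0.70)  # networked at over 70%
print(f"DAS: {das_tb:.1f}TB raw, SAN: {san_tb:.1f}TB raw")
```

At those rates the DAS environment needs roughly 33TB of raw disk to the networked environment’s 14TB, before any management savings are counted.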
SANs are typically based on a fibre channel network or, alternatively, on an Internet Protocol-based iSCSI (Internet Small Computer System Interface) network. The main difference between the two is that fibre channel offers higher performance at a relatively high cost, while iSCSI, or IP-SANs, are cheaper but slower and still evolving technologically. In time, analysts expect IP-SANs to drive down costs and to improve standardisation and interoperability, a sore point with many users.
At present, there are vested interests among storage vendors that resist standardisation, making the task of attaching storage platforms from multiple vendors to a single network problematic and expensive.
Indeed, storage area networking often brings with it the risk of proprietary vendor lock-in. “One of the biggest problems with SANs is the potential that vendors will use the change to lock in the customer. Some of the vendors will not misuse that. Some of them will see it as an opportunity,” says Gartner’s Krischer. With a lack of standards, interoperability is still piecemeal, and vendors often argue that customers can only ensure their SAN works efficiently if they attach devices from a single source.
Krischer, for example, cites the case of a major British bank that was forced to pay five times the market price for storage arrays for its new SAN. That represented ten times the lowest price that could have been negotiated by an aggressive buyer, he says.
When it comes to roll-out, both analysts and vendors advise against any kind of Big Bang approach in which the whole organisation’s storage capacity is wired together at once. By rolling out a SAN in piecemeal fashion, IT will be able to prove the benefits and help the organisation to spread the high cost of implementation.
SAN structures, of course, require more than just wiring. Companies need to purchase SAN switches, specialist software for management and ‘virtualisation’, and often new SAN-enabled storage arrays. But such centralisation can generate significant savings: Bullen suggests that in the University of Stirling’s modest environment, for example, staffing levels needed to support functions such as tape back-up and troubleshooting have been cut in half.
Nonetheless, reliable ROI and TCO figures from users that have already implemented various networked storage strategies are not widely available, and some claims seem too good to be true. For example, McData claims a payback period of just four months for an implementation at American Electric Power, a SAN project that it says generated cost savings of 30% in the first year alone.
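Payback claims of this sort reduce to simple arithmetic, which is precisely why the inputs deserve scrutiny. A minimal sketch, using hypothetical figures chosen only to reproduce a four-month payback, not the actual American Electric Power project numbers:

```python
def payback_months(upfront_cost, monthly_saving):
    """Months until cumulative savings cover the upfront investment."""
    return upfront_cost / monthly_saving

# Hypothetical figures, not the actual project numbers:
print(payback_months(1_200_000, 300_000))  # -> 4.0
```

The calculation is only as credible as the monthly-saving estimate, which is where vendor and customer figures tend to diverge.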
The reaction of analysts is mixed. Krischer is sceptical that rewards are as big as vendors suggest, particularly because of the issue of vendor lock-in. Others are more convinced. “Payback can be as soon as a day in the case of the added uptime and flexibility the SAN provides, or up to a year. But we haven’t seen any SAN implementations where the payback has taken more than a year,” says Duplessie.
Such differences of perception underline why organisations need to conduct their own in-depth studies long before they get bombarded with vendor proposals.