The storage market is going through yet another of its cyclical shifts, and new vendors are emerging as challengers to the big boys: the top three legacy storage vendors are losing market share both to new entrants and to smaller, longer-established players. Customers and resellers now have far more choice when picking the best tool for the job.
An annoying problem arises when new entrants with loud voices make bold, sometimes ludicrous, claims. Take the line being touted by one of these new(ish) vendors: 'Hard disks are dead. The future is all-flash.' At the risk of sounding like a broken record, this couldn’t be further from the truth.
> See also: The top three myths about flash storage debunked
Flash is a fantastic tool: it delivers lower latency and higher I/O rates, and it has lowered the cost barrier for many high-performance applications. But will it replace hard disk drives in the short to medium term? Not at all. Flash wins on speed; spinning disk still wins comfortably on cost per terabyte, which is what matters for bulk capacity. It’s the same argument that was made ten years ago about tape being dead. The hard disk’s role might change slightly, but it is here to stay for a long time yet.
What is interesting is how many arrays are designed either for hard disk drives or for flash. There are some in-between approaches that use tricks such as a flash cache to compensate for slow SATA drives, but the vast majority of vendors make customers choose between a slow platform and a fast one – a choice that has to be made up front.
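To make the flash-cache trick concrete, here is a minimal, purely illustrative sketch of the read-through idea: a small flash tier absorbs hot reads in front of a large, slow SATA tier. Everything here (the HybridVolume name, the LRU eviction policy, the capacity figure) is a hypothetical stand-in, not any vendor’s actual implementation.

```python
from collections import OrderedDict

class HybridVolume:
    """Illustrative read-through flash cache in front of a slow disk tier.
    An LRU policy stands in for whatever a real array would actually use."""

    def __init__(self, disk_tier, flash_capacity_blocks=1024):
        self.disk = disk_tier                 # slow, cheap, high-capacity tier
        self.flash = OrderedDict()            # small, fast cache tier
        self.capacity = flash_capacity_blocks

    def read(self, block_id):
        if block_id in self.flash:            # cache hit: served at flash speed
            self.flash.move_to_end(block_id)  # mark block as recently used
            return self.flash[block_id]
        data = self.disk.read(block_id)       # cache miss: pays full disk latency
        self.flash[block_id] = data           # promote the hot block to flash
        if len(self.flash) > self.capacity:
            self.flash.popitem(last=False)    # evict the least recently used block
        return data
```

The sketch also shows the trade-off: hits come back at flash speed, but misses still pay the full disk penalty, so the approach papers over slow media rather than removing the up-front choice.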
A good analogy is the choice you face when buying a car: do you go for the high-performance two-seater, or the station wagon with four-door functionality and flexibility? It all depends on your priorities – are you after speed or practicality? Yes, there’s the Porsche Cayenne, but then you’re paying a premium for the blend of performance and convenience.
What would be ideal is something more adaptable – a vehicle that can turn from a large, practical load-carrier into a high-performance speedster, at a sensible price.
Being able to fill a round hole with a round peg and a square hole with a square peg matters. Hammering a square peg into a round hole isn’t the most efficient approach.
The second point about adaptive storage is investment protection. Today, many applications are best served by storage oriented towards features rather than raw performance and simplicity.
However, as Software Defined Storage (SDS) slowly matures, that balance is very likely to flip. So the question stands: should you deploy a feature-rich array today and then buy again in, say, 12-24 months’ time? Obviously, not many CFOs would support that approach.
According to IDC, sales of high-end storage fell for four consecutive quarters to Q2 2014. IBM, EMC and NetApp revenues were all down, and HP managed a measly 0.4% increase in sales in Q2 2014. At the same time, sales by newer, smaller companies in the storage market grew by 11.1%, giving them a 21% market share.
The other important part of being 'adaptive' is the ability to move between the two worlds when you’re ready. If you do decide that a feature-rich SAN is the way to go today, it’s wise to look at platforms that can later transition to SDS.
> See also: How businesses are optimising flash storage
Far too many vendors these days use off-the-shelf commodity hardware and put all of their investment into the software layer alone. The result is that the underlying hardware is just dumb disk, which won’t give users the reliability and performance they need in a software-defined world.
Investing in a platform designed to bridge the two approaches could make for a much softer conversation with the aforementioned CFO – something that makes life easier for everyone (oh, and it makes that pay rise conversation a little easier too!).