Although flash can offer jaw-dropping performance, it's not all about speed, and for some workloads this storage medium simply isn't cost-effective.
On paper, flash is certainly head and shoulders above the competition. With performance of over a million IOPS and capacity that can scale beyond a petabyte, not to mention the technology's small footprint and low power consumption, it's no wonder flash sales have rocketed over the last few years.
In fact, it’s anticipated that nearly all data centres will incorporate flash of some kind by the end of the decade.
There are a number of reasons why. The first, and most obvious, is the huge growth in data experienced by businesses over the last decade. Capacity has a major influence on storage purchasing decisions.
>See also: The three things you should know before choosing a flash solution
Flash promises to handle enormous volumes of data with ease, so who wouldn't be tempted, especially when organisations are led to believe that HDDs and other traditional storage media are already bursting at the seams?
However, tape, HDDs and other established media are hardly old and creaky. If anything, competition from flash is accelerating development of these more traditional solutions, so they're not the same products organisations were using even five years ago.
But the demand for real-time data is a perfect launch pad for flash: it offers immediate insight into customer behaviour, analysis of business performance, and a clearer view of how the business fits into the rest of the market.
Beyond performance, other reasons for choosing flash include the shift towards virtualisation, which is part of a bigger trend around flexible application deployment. Most organisations know the benefits of this type of deployment model, but its storage demands are pretty gruelling.
Flash is positioned as the number one choice for virtualisation in terms of performance, and that positioning has resonated with a lot of potential customers, many of whom are now actually planning for all-flash infrastructures.
This represents a change in direction for flash – away from niche, small-scale single application deployment and towards mainstream multi-application shared storage.
The technology is changing too. Developers are anticipating pinch points and working on even faster and less power-hungry solutions.
Don’t underestimate the fear of being left behind either. Organisations are constantly reminded to future-proof their IT systems, and storage is no exception.
Flash is touted as tomorrow’s storage, so where does that leave traditional technology? No business wants its storage to be obsolete.
So it’s clear why flash is so popular. But there are some issues too, and this is why flash isn’t always the right choice.
That’s because every data centre is different. Even data centres in similar organisations working in the same markets have to be built to support specific applications, user access and response time requirements.
No one storage vendor can develop a single product that is best for every application workload. ‘Bespoke’ is an easy word to throw around but, in this case, it’s true.
A suit tailor-made in Savile Row will be a better fit than one off the rack, and a storage solution will work better if it's designed specifically for an organisation's needs.
Of course, one of the problems most organisations face is that they don't really know what their application workloads' performance needs are.
Flash vendors would have IT pros believe the technology relieves all storage performance problems, but determining which applications actually justify flash, and exactly how much of it to deploy, are fundamental questions.
If flash is not provisioned correctly and tested against real-life applications, it can end up costing up to ten times as much per GB as traditional spinning media.
Then there are flash vendors’ claims on speed, which can be very misleading. Results vary enormously depending on the applications and their workload I/O characteristics.
To make flash affordable, deduplication and compression are vital. But enabling such features can have a dramatic impact on performance. And it makes workload modelling even more complicated.
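To see why, it helps to do the arithmetic. The sketch below uses invented prices and an assumed 5:1 combined data-reduction ratio, purely to illustrate how reduction narrows the per-GB premium; none of the figures are quoted from a vendor:

```python
# Illustrative back-of-the-envelope economics. All prices and the
# data-reduction ratio are assumed figures for the sake of example.
flash_price_per_gb = 1.00   # assumed raw flash price ($/GB)
hdd_price_per_gb   = 0.10   # assumed spinning-disk price ($/GB)

reduction_ratio = 5.0       # assumed combined dedupe + compression (5:1)
effective_flash = flash_price_per_gb / reduction_ratio

print(f"Raw premium:       {flash_price_per_gb / hdd_price_per_gb:.0f}x")
print(f"Effective premium: {effective_flash / hdd_price_per_gb:.0f}x")
# Raw premium:       10x
# Effective premium: 2x
```

Halve the reduction ratio and the premium doubles, which is exactly why the dedupe and compression behaviour of real workloads matters so much.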
Accurate workload modelling for flash needs to emulate the user's workload, control both the duplicability and the compressibility of the data content, and generate millions of IOPS under a variety of loading assumptions.
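As a rough illustration of two of those knobs, a synthetic load generator can control compressibility by mixing constant and random bytes within each block, and duplicability by re-issuing blocks from a fixed pool. A minimal sketch, where every name and ratio is hypothetical rather than taken from any particular tool:

```python
import os
import random

BLOCK_SIZE = 4096  # 4 KiB blocks, a common I/O size

def make_block(compress_pct: int) -> bytes:
    """Build a block that is roughly compress_pct% compressible:
    that fraction is constant (highly compressible) bytes, the
    rest is incompressible random data."""
    compressible = (BLOCK_SIZE * compress_pct) // 100
    return b"\x00" * compressible + os.urandom(BLOCK_SIZE - compressible)

def block_stream(n_blocks: int, compress_pct: int, dedupe_pct: int):
    """Yield n_blocks blocks; roughly dedupe_pct% are repeats of
    earlier blocks, so downstream dedupe can reclaim that share."""
    pool = []
    for _ in range(n_blocks):
        if pool and random.randrange(100) < dedupe_pct:
            yield random.choice(pool)        # duplicate block
        else:
            block = make_block(compress_pct)
            pool.append(block)               # unique block
            yield block

# Example: 1,000 blocks, ~50% compressible, ~30% duplicates.
data = b"".join(block_stream(1000, compress_pct=50, dedupe_pct=30))
```

Unless the test data's reduction behaviour matches production data, any performance figure measured with dedupe and compression enabled will be misleading.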
Before deploying flash storage, or any storage system for that matter, IT architects need to be sure of storage performance ceilings, and to understand under which scenarios they will be reached. Without that information, it’s impossible to evaluate which technology best meets their needs.
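One simple way to reason about where a ceiling sits is Little's Law: sustained IOPS are bounded by the number of outstanding I/Os divided by the average service time. The sketch below uses invented latency figures standing in for the measurements an architect would take at each queue depth:

```python
# Little's Law for storage: IOPS = outstanding I/Os / avg latency.
# The latency values below are invented for illustration; in practice
# they come from measuring the array at each queue depth.
latency_at_depth = {1: 0.0002, 8: 0.0003, 32: 0.0008, 64: 0.0020}  # seconds

for depth, latency in latency_at_depth.items():
    iops = depth / latency
    print(f"queue depth {depth:>3}: ~{iops:>9,.0f} IOPS at {latency*1000:.1f} ms")

# IOPS stop scaling once latency grows faster than queue depth;
# that knee is the performance ceiling for this workload.
```

In this made-up example the array peaks around queue depth 32 and then loses throughput as latency balloons, the kind of scenario an architect needs to uncover before committing to a platform.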
>See also: High on the agenda: the future of flash tiering in enterprise
Until now, this planning has relied on guesswork – both about current workload requirements and around future demands. But without truly understanding their workload requirements, how can IT architects possibly know whether flash is going to work for them?
Storage workload performance analytics collect intelligence on the unique characteristics of an organisation's application workloads. Once all the workload data from both historical and real-time production storage systems has been gathered, highly accurate workload models can be generated, enabling application and storage infrastructure managers to evaluate and stress-test storage product offerings using their specific workloads.
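As a rough illustration of what that first step can look like, the sketch below reduces a hypothetical I/O trace to just two model parameters, the read/write mix and the block-size distribution; real analytics products capture many more dimensions (latency, access patterns, burstiness):

```python
from collections import Counter

def summarise_trace(trace):
    """Reduce a production I/O trace to the handful of parameters a
    workload model needs. Each record here is a hypothetical
    (op, size_bytes) tuple; real collectors expose richer fields."""
    sizes = Counter()
    reads = 0
    for op, size in trace:
        sizes[size] += 1
        reads += (op == "read")
    total = sum(sizes.values())
    return {
        "read_pct": 100 * reads // total,
        "block_size_mix": {s: round(c / total, 2) for s, c in sizes.most_common()},
    }

# Example trace: mostly 4 KiB reads with some 64 KiB writes.
trace = [("read", 4096)] * 70 + [("write", 4096)] * 10 + [("write", 65536)] * 20
print(summarise_trace(trace))
# {'read_pct': 70, 'block_size_mix': {4096: 0.8, 65536: 0.2}}
```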
This new generation of storage performance validation tools can be so accurate that their simulations prove to be virtually identical to actual production workloads.
And really that’s how organisations will know if flash is right for their data centre. Once they’ve checked whether it meets each of their application workload needs, weighed up the pros and cons, and run it against their budget, they’ll know whether the hype around flash is true for them. In many cases, it very well could be.
Sourced from Len Rosenthal, Load DynamiX