What have been HGST’s biggest growth areas in the past few years?
Going back in time, the strength of HGST was enterprise-class hard disk drives. We're now in a period where we're working out how to deliver the enterprise storage that made us successful, but more in line with the architectural changes happening in the data centre. We want to develop value-added building blocks, aggregating the basic device into more software-based products that we can sell to cloud service providers as well as directly to enterprises.
We've made a number of acquisitions in the flash area over the last few years, bringing in technology and talent. They've allowed us to build up controller technology, as well as the lower-level software that allows flash to perform well with enterprise apps. In 2014 we acquired Skyera because we were interested in its ability to pack NAND flash with high performance and reliability into a rack-level unit.
> See also: Why the hard disk isn’t a write-off yet
The most recent product in this category is the 'Active Archive' system. Enterprise data centre systems are very much hardware-centric, and the cloud is also built on hardware, but increasing levels of value are coming from software. We set out to develop systems optimised for scale-out cloud data centre environments.
Some 60-80% of the value in a data centre rack comes from storage, so by developing storage systems we can provide value as a vertically integrated, captive source of the fundamental storage building blocks.
But we also have what we call a ‘vertical innovation’ approach. Because we have expertise in the storage devices and underlying technology we can extract more value from those devices by tightly integrating them with the hardware, and the hardware with the software.
There's still a huge amount of talk around software-defined storage, but a lot of it is dismissed as 'marketing hype'. What's your definition of a software-defined data centre?
There's a fundamental shift we have seen between the enterprise data centre and the cloud data centre. As you enter the realm where you need vast scalability and elasticity, managing compute capability and data accessibility independently of one another, you move into the world of the software-defined data centre.
Ultimately, even though we are creating elastic pools or lakes of storage, they need to be accessed by diverse concurrent applications, each with its own needs. So it's important to us not only to provide scalability, resiliency and access, but also to integrate seamlessly with the systems that need to access and process that data.
Higher-performance processing of data, through flash-based products, provides the ability to access data very quickly: not just flash as a medium, but flash connected in a way that speeds its access to the application, or to the CPU running the application. The trend is to move away from traditional connectivity towards PCIe-attached flash devices.
The limitation there is that servers only have so many PCIe connections, so the density of PCIe storage devices is limited. In that sense the focus is really on creating pools of storage that can be allocated as a high-speed, low-latency resource for multiple apps.
As you’re moving away from your traditional storage device offering into more of a systems footprint, how is that going to play out with your existing customers in enterprise?
Big changes are happening, and all the participants in the IT ecosystem are re-establishing themselves. Our customers have a choice to make: they can invest a dollar in hardware or a dollar in software, and they see more value creation and opportunity in software as a service. So if we can deliver a competitive, high-value system platform that allows them to adjust their business model and focus on creating more value, we can do this in a way that is very compatible and favourable with our existing customers.
Our customers want to focus more on the software and services that drive customer loyalty, and their desire to invest in hardware and system-level capabilities is diminishing. They're looking to us to provide that for them, so they can focus their energy and investment on the areas that do the most to drive their businesses and their customers' businesses.
HGST has recently been refocusing its product strategy towards large cloud service providers as well as its traditional enterprise customer base. How have you done this?
Cloud providers have typically developed their own whitebox infrastructure or written their own software, but they don't get rewarded for building infrastructure; they get rewarded for services that solve customer business problems.
We recently released the world's only helium-filled hard drives, which are high capacity, low in power consumption and reliable, so they're perfect for the high-density scale-out systems that cloud providers need. But what we found is that those devices were not really being leveraged to their full extent in the marketplace.
Companies building storage enclosures to house those devices had to design them on the assumption that any device could be put into the enclosure, some with high power and heat dissipation, others with lower power. As a consequence they were limited in the number of devices they could hold, and in density. So we created a custom enclosure that takes full advantage of the helium drives, providing the highest capacity per rack, and added the system software needed for cloud data centres.
There’s a common notion in the cloud industry that building a system from whitebox hardware and adding software on top is the cheapest way to build out a data centre, but in fact, because of the vertical integration approach we’ve taken, we’re able to extract more value from hardware and software, which is a fundamental shift in value provided to the cloud data centre.
> See also: Software-defined storage is driving data centre infrastructure innovation
Achieving those new thresholds of value enables us to capture and store data that previously could not be stored affordably, and to create a layer of storage that sits between tape and traditional disk-based storage, holding data much more affordably once it is past its create-and-modify stage.
Looking ahead, how do you see these new storage innovations helping accelerate the Internet of Things?
When we talk about the transition of the IT industry from the 'second' to the 'third' platform, as the analysts like to call it, this encompasses cloud computing, big data analytics and the Internet of Things. The key is the ability to aggregate all that data and make it available for analysis of longer-term trends and correlations that might appear disparate today but will be more meaningful in the future.
The pace of data creation continues to grow exponentially, doubling every two years. To compound that, the proportion of data deemed to be valuable is also increasing. What we find is that there is a growing gap between the amount of data that is valuable enough to store and the storage capacity available to store it. Nobody's budgets are going up significantly, so they can only afford to deploy so much storage. We are looking to fill this gap with a new category of storage system.
The 'data lakes' of unstructured, increasingly machine-generated data arising from the IoT need an affordable place to live. Without innovation around storage solutions and the creation of new categories of storage, it's going to be unaffordable and we lose that value. So it's a virtuous cycle, to an extent, if we can innovate around performance. The 'cloudification' of data is vital to aggregating it all together from different places, which is important to drawing meaning from the data, turning that data into knowledge and ultimately into wisdom.