Hyperconvergence, or how to optimally manage secondary data

What it means to hyperconverge

In the last five years, the number of mentions of and searches for hyperconvergence in the media and on search engines has skyrocketed, driven by rising adoption rates and controversy over its true meaning.

Most recently, interest reached a new peak with the £3.9 billion IPO of Nutanix, an early pioneer in hyperconvergence, a flotation that symbolised the momentum behind the concept.

Hyperconvergence is commonly defined as a type of infrastructure system with a software-centric architecture that tightly integrates compute, storage, networking and virtualisation resources and other technologies from scratch in a commodity hardware box supported by a single vendor.

A smartphone is a perfect example of hyperconvergence in the consumer market. It consolidates a phone, a music player, a camera, satellite navigation, a calculator and even a flashlight into a single device.

The need for the user to set up and administer what were previously individual devices is gone. New functions build on the underlying converged system and are, by definition, software-defined.

Hyperconvergence is, very simply, the consolidation of distinct silos of data centre infrastructure into a single operating environment. It eliminates the physical separation between compute, network and storage that restricts performance, efficiency and ease of use.


By extension, it also breaks down independent silos built to run specific application workloads within the data centre. The concept of hyper-integrated building blocks separates convergence from hyperconvergence.

Convergence simply co-locates hardware elements beneath a layered administration tool; hyperconvergence, in contrast, tightly locks compute, network and storage elements together into high-performing, efficient systems.

Today, secondary workloads typically account for between 60 and 80 percent of the total infrastructure within a modern data centre. Yet they are highly fractured, fragmented and inefficient, because of the sheer volume of data they represent and the time it takes to store and access it.

This translates into slow performance and outages. It’s a big, unaddressed area in most organisations, ripe for consolidation. Applying the principle of hyperconvergence enables these secondary workload silos to be combined into a single, intelligent, web-scale platform.

Now, CIOs can reap the benefits of global data reduction, copy data management, in-place search and analytics across a merged infrastructure. The efficiency gains are enormous, in terms of time, human cost and operations.
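
To make ‘global data reduction’ concrete, below is a minimal sketch, in Python, of content-addressed deduplication, the mechanism that typically underpins it: data from any workload is split into chunks, each chunk is identified by its hash, and a chunk already present in the shared repository is never stored a second time. The fixed 4 KiB chunks, SHA-256 hashing and file names are illustrative assumptions, not a description of any particular vendor’s implementation.

import hashlib
import os

CHUNK_SIZE = 4096  # illustrative fixed-size chunking; production systems often use variable-size chunks


class DedupStore:
    """Toy content-addressed store: every unique chunk is kept exactly once,
    and a file is just an ordered list of chunk hashes."""

    def __init__(self):
        self.chunks = {}  # chunk hash -> chunk bytes, shared globally across all workloads
        self.files = {}   # file name -> list of chunk hashes

    def put(self, name, data):
        refs = []
        for i in range(0, len(data), CHUNK_SIZE):
            chunk = data[i:i + CHUNK_SIZE]
            digest = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(digest, chunk)  # an already-seen chunk costs no extra space
            refs.append(digest)
        self.files[name] = refs

    def get(self, name):
        return b"".join(self.chunks[d] for d in self.files[name])


store = DedupStore()
payload = os.urandom(40_960)            # stand-in for a 10-chunk database dump
store.put("backup/db.dump", payload)    # the backup silo's copy
store.put("testdev/db.dump", payload)   # the same data reused for test/dev
assert store.get("testdev/db.dump") == payload
print(len(store.chunks))                # 10 unique chunks stored, not 20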

Hyperconvergence is quickly moving beyond its original use for primary data centre workloads and into secondary data management, where it will begin to change areas such as data protection and big data workloads.

Achieve simplicity and efficiency

It’s these secondary workloads that this article concentrates on: applying hyperconvergence principles to secondary storage is a crucial step towards achieving simplicity in data management.

A single repository and UI for managing secondary storage workloads gives administrators a complete picture of their data assets. This helps uncover new areas for efficiency: a smaller storage footprint and lower infrastructure costs, deduplication of redundant data, less management overhead and more efficient re-use.

Think about it: if a typical enterprise copies its data ten to twelve times across individual storage appliances for secondary storage use cases, consider the efficiency gains if a single copy could span all of those use cases.
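
As a back-of-the-envelope illustration, in Python (the 50 TB data set and the twelve copies are assumed figures, chosen only to make the arithmetic easy):

dataset_tb = 50    # hypothetical size of one enterprise data set
copies = 12        # one full copy per silo: backups, test/dev, analytics, file services...

siloed_footprint = dataset_tb * copies   # every silo stores its own full copy
converged_footprint = dataset_tb         # one golden copy; each use case works from
                                         # space-efficient snapshots/clones of it

print(siloed_footprint)      # 600 TB spread across the silos
print(converged_footprint)   # 50 TB (clone metadata and deltas ignored here)
print(f"saved: {1 - converged_footprint / siloed_footprint:.0%}")  # 92%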


Besides reducing the physical, logistical and energy demands on an organisation, hyperconverged secondary storage systems simplify and reduce the growing burden placed on storage administrators.

Each of the individual secondary data silos in an enterprise demands administrative expertise to manage it. With hyperconvergence, dedicated storage professionals are freed up to focus on providing new value to their IT departments.

Many IT departments might not even think to look at long-overlooked secondary data categories such as backups, test/dev, analytics and file services, but the cost savings are hiding in plain sight, ready to be hyperconverged.

The primary achievement of implementing hyperconvergence will be eliminating the need to administer independent infrastructure silos across all secondary workloads.

Misconceptions around hyperconvergence

As straightforward and intuitive as hyperconvergence may seem, a few misconceptions still persist. The first is the idea that hyperconvergence requires a hypervisor.

In reality, there’s no architectural reason preventing hyperconverged infrastructures from running both virtual and physical workloads simultaneously.

Not every hyperconvergence vendor chooses to support multiple modes today, but this is a design choice, not an architectural constraint. Data centre administrators clearly gain greater flexibility if they can choose to run alternative hypervisors and/or operating systems on the underlying infrastructure.

They can select whether hyperconverged nodes run a conventional operating system (OS), a hypervisor or an OS with containers. Hyperconvergence and virtualisation work together to simplify the job of managing and allocating IT infrastructure, but they are distinct concepts.


A second misconception centres on the belief that hyperconvergence only applies to business-critical workloads.

It’s true that hyperconvergence applies to primary storage and servers, but its value also extends into the arena of non-mission-critical secondary workloads, such as backup, file shares, app development, object stores and analytics.

What became apparent to me several years ago was the potential of hyperconvergence for whipping secondary storage workloads into shape.

Consolidating large volumes of data, eliminating independent infrastructure silos and unlocking efficiencies is only the beginning. Once the data is consolidated, applying big data analytics and DevOps represents an even greater opportunity to generate business value and revenue from your massive data assets.
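
As a final sketch, here is what in-place search and analytics can look like once everything sits on one platform: queries run directly against the platform’s own catalogue instead of against copies exported into a separate analytics silo. The catalogue entries below (name, workload, size) are purely hypothetical.

# Toy catalogue of everything on the consolidated platform; in practice the
# platform maintains this index itself, so no data needs to be exported to query it.
catalogue = [
    {"name": "db.dump",     "workload": "backup",    "size_gb": 800},
    {"name": "db.dump",     "workload": "test/dev",  "size_gb": 800},
    {"name": "clickstream", "workload": "analytics", "size_gb": 2400},
    {"name": "home-shares", "workload": "files",     "size_gb": 1200},
]

# One search spans every secondary workload at once, with no per-silo export step.
large_objects = [entry for entry in catalogue if entry["size_gb"] > 1000]

# Simple in-place analytics: capacity consumed per workload.
capacity_by_workload = {}
for entry in catalogue:
    capacity_by_workload[entry["workload"]] = (
        capacity_by_workload.get(entry["workload"], 0) + entry["size_gb"]
    )

print(large_objects)
print(capacity_by_workload)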

 

Sourced by Mohit Aron, founder and CEO, Cohesity
