The 3 pillars of big data analytics potential

The explosion of activity in the big data analytics sector is both undeniable and understandable.

Done well, big data analytics can help companies surface insights from their data that translate into real business value.

The Royal Bank of Scotland, for example, has used big data analytics to underpin a strategy it calls “personology.”

This strategy is about delivering a more personable and personalised customer experience, using data to better understand, anticipate and service customer needs and queries.

This data-driven approach towards user experience has boosted customer loyalty and helped to improve activity and visibility across loans and insurance.

Supporting big data success

However, simply investing money in technology is not enough to deliver big data success. Many big data projects have failed, usually for the same few reasons.

In their rush to jump on the big data bandwagon, many companies ignore the three key pillars that support successful big data analytics in business:

Desirability: defining the problem the business is trying to solve.

Feasibility: identifying the tools, skills and resources needed to address the problem.

Viability: quantifying the business value of solving that problem for the enterprise.

For many organisations, ensuring these pillars are not overlooked requires a clear methodology and technical approach.

A design thinking methodology, combined with open source tools and techniques such as AI and automation, can enable organisations to mine a range of big data assets and surface the valuable, actionable data within both structured and unstructured data sets.

At its core, big data requires at least three things to work: gathering data from disparate sources, storing it, and extracting insights from it.

The problems of gathering and storing data have largely been solved (though both still present challenges).

Extracting useful signals from the noise in almost real-time, however, remains difficult.

Finding useful information in data stores is difficult for many reasons, but the biggest problem is more fundamental and more common than most organisations realise: companies have not scoped out at the start what they intend to do with the data they are gathering.

A design thinking methodology can help in this regard.

The map – design thinking

Before starting on a big data journey, we need to know the minimum viable problem that needs to be addressed.

For this, design thinking provides a critical method for getting to the root of a known problem, identifying an as-yet-unrecognised problem, or both.

It does this while staying as close to business reality as possible.

Essentially, it ensures that businesses focus on the right problems while developing viable solutions.

Discovering and defining problems is the most important part of any big data journey.

The reason is simple: without a well-defined problem, there is nothing to focus the data on. Design thinking helps here by directing businesses towards customer pain points and surfacing issues that aren’t always obvious.

Once the pain points that need addressing are identified, a company can define what data it needs to gather.

By focusing data gathering on a particular issue, companies can discover new, actionable insights and execute on them.

Design thinking requires a large culture shift for many enterprises, but offers significant benefits in return. It encourages teams to address challenges as a whole, rather than fixing individual problems in isolation.

The tools – data lakes and open source

Data lakes maintain data from various sources (internal, external and public), whether structured, semi-structured or unstructured, at the lowest granularity and in a cost-effective manner until it is needed.

Anyone with data manipulation skills can access and use data within a lake as they need it, regardless of origin.

All data sources are accessed equally within a lake structure.

The structural flexibility of data lakes makes the incremental cost of adding new use cases marginal.
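As a rough illustration of that schema-on-read flexibility, the sketch below lands structured, semi-structured and unstructured records in a local directory that stands in for a data lake's raw zone. The directory layout, source names and sample payloads are all hypothetical assumptions, not part of any specific product.

```python
# A minimal sketch of a data lake "landing zone", assuming a local directory
# stands in for object storage (e.g. S3 or ADLS). Sources and records are
# illustrative only.
import json
from datetime import date
from pathlib import Path

LAKE_ROOT = Path("data-lake/raw")  # assumed raw landing area

def land(source: str, payload: bytes, suffix: str) -> Path:
    """Store a payload exactly as received, partitioned by source and date."""
    partition = LAKE_ROOT / source / date.today().isoformat()
    partition.mkdir(parents=True, exist_ok=True)
    target = partition / f"record-{len(list(partition.iterdir()))}.{suffix}"
    target.write_bytes(payload)
    return target

# Structured, semi-structured and unstructured data all land unchanged;
# structure is only imposed later, when a use case reads the data back.
land("crm", b"customer_id,segment\n42,retail\n", "csv")
land("web", json.dumps({"event": "login", "user": 42}).encode(), "json")
land("support", "Customer called about a delayed loan decision.".encode(), "txt")
```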

However, data lakes are not a panacea for big data issues. Additional tools are needed to channel the data they hold into something useful. That’s where open source tools come in.

Open source big data tools offer a variety of capabilities, ranging from insights and forecasting to predictive analytics and more, at an affordable cost.

These tools can also help to codify and automate operations — lessening the need for human intervention and processing at the most menial level. This frees organisations to identify better insights and make better use of the time invested in data analysis.
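To make the forecasting point concrete, here is a minimal sketch using scikit-learn, one widely used open source library. The monthly figures are synthetic and the simple linear trend model is an assumption chosen for brevity, not a recommendation from the article.

```python
# A minimal sketch of forecasting with an open source library (scikit-learn),
# assuming a small synthetic series of monthly transaction volumes.
import numpy as np
from sklearn.linear_model import LinearRegression

months = np.arange(1, 13).reshape(-1, 1)  # 12 months of history
volumes = 1000 + 50 * months.ravel() + np.random.default_rng(0).normal(0, 20, 12)

model = LinearRegression().fit(months, volumes)   # learn the trend
next_quarter = np.arange(13, 16).reshape(-1, 1)   # months 13 to 15
forecast = model.predict(next_quarter)

for month, value in zip(next_quarter.ravel(), forecast):
    print(f"month {month}: forecast volume ~{value:.0f}")
```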

Solving even small problems can have a substantial bottom-line impact for a business. In one case, a 1% efficiency improvement delivered a $200 million return on investment for a large business.

The muscle – automation

Automation of data management can be transformational in the enterprise. Eliminating low-level manual processes frees up people, amplifying human potential to deliver more value and creativity further up the value chain.

Automation helps companies analyse data — and spot anomalies with greater accuracy and at a faster rate — in real time. Combined with machine learning, automation can grow and adapt, refining itself to deliver further efficiencies and insights.
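As a sketch of what automated anomaly spotting can look like, the example below fits an isolation forest (an open source scikit-learn estimator) on historical transaction amounts and flags unusual values in an incoming batch. The data, the contamination rate and the routing decision are assumptions for illustration.

```python
# A minimal sketch of automated anomaly detection on transaction amounts,
# assuming batches arrive for scoring; figures are synthetic.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
history = rng.normal(loc=100, scale=15, size=(500, 1))   # typical amounts
detector = IsolationForest(contamination=0.01, random_state=0).fit(history)

incoming_batch = np.array([[95.0], [102.5], [980.0], [110.2]])
flags = detector.predict(incoming_batch)                  # -1 marks an anomaly

for amount, flag in zip(incoming_batch.ravel(), flags):
    status = "ANOMALY - route for review" if flag == -1 else "ok"
    print(f"amount {amount:8.2f}: {status}")
```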

However, automation doesn’t happen by itself.

Data sources need to drive automation. That invariably means breaking data out of existing silos so it can be fed into an automation engine, providing the engine with a complete view of the entire relevant data pile, rather than just a segment or snapshot.
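A small sketch of what that de-siloing can mean in practice: two hypothetical silos (loans and support) joined into one complete customer view before anything is fed downstream. The table and column names are illustrative assumptions, not drawn from any particular system.

```python
# A minimal sketch of combining two hypothetical silos into one view for a
# downstream automation engine; data and column names are illustrative.
import pandas as pd

loans = pd.DataFrame({
    "customer_id": [1, 2, 3],
    "loan_balance": [12000, 0, 5400],
})
support = pd.DataFrame({
    "customer_id": [2, 3],
    "open_tickets": [1, 4],
})

# An outer join keeps every customer, so the engine sees the complete
# picture rather than whichever silo happened to be queried.
complete_view = loans.merge(support, on="customer_id", how="outer")
complete_view["open_tickets"] = complete_view["open_tickets"].fillna(0).astype(int)
print(complete_view)
```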

Employees, too, will need access to data in the round so they can recognise contextual bias in data capture, among other things that require a broader view.

Living up to the big data dreams

The effort is worth it.

Automation, data lakes, open source tools and design thinking all play critical parts in any big data journey.

Each component represents a fundamental part of the process of extracting maximum value from data assets.

They enable organisations to identify actionable insights in real time and empower employees to build meaningful responses to business-critical issues and opportunities.

And that’s the big data dream: finding those hard-to-spot problems, solving them and automating them to free up time for bigger things.

Sourced by Abdul Razack, Head of Platforms at Infosys
