While agility has become the watchword of modern business, achieving it isn’t straightforward. Many large enterprises remain entangled in cumbersome, centralised management hierarchies, with decision-making power still held by relatively few.
This is a hangover from a bygone era, when decisions were made at the top and filtered down gradually via older communication methods. Today, such an approach seems grindingly slow. It’s this lack of agility that so often impedes innovation and responsiveness, letting competitors with more flexible ways of working streak ahead.
Although digitalisation has made access to data much easier, which supports devolved decision-making, many organisations have resisted empowering teams for fear of losing control. The problem is made harder by the vast amounts of customer and personal information, collected over many years, now residing in multiple data repositories. Simply maintaining these mountains of data securely is an expensive headache, let alone releasing their pent-up value within a workable governance structure. Nor should we forget the gamut of external compliance, privacy and security regulations that must also be followed when manipulating and utilising sensitive data.
Freeing up access to data
Recognition of the burgeoning requirement for a different, and more effective, way of managing and extracting value from data stores has led to the concept of ‘data mesh.’ It is a distributed, domain-driven approach, where ownership and responsibility for data are handed to the teams with a vested interest in its accuracy. The aim is to unlock the value of data assets by turning them into well-managed, high-quality data products, enabling easier and more comprehensive use by data consumers. This frees up access to data by breaking down data silos and promoting sharing and collaboration.
In theory, it sounds ideal, with each team responsible for the stewardship and accuracy of its own data. But it relies on the capability to shift data management away from a centralised IT function and put control into the hands of product owners; then to empower those teams to analyse, dissect and utilise data without needing endless technical support and intervention, while at the same time ensuring that internal governance standards and external regulations are always followed, without fail.
And there’s the rub. Compliance can’t be a nice-to-have when responsibility is decentralised, or circumvented if the pressure is on to deliver a new project. It must be compulsory.
Data mesh needs guardrails
Realistically, expecting employees to remember to follow data quality and compliance guidelines is neither fair nor enforceable. Adherence must be implemented without frustrating users and become an integral part of the project delivery process. Unlikely as this sounds, a computational governance platform can impose the necessary standards as ‘guardrails’ while also accelerating the time to market of products.
Sitting above an organisation’s existing range of data enablement and management tools, a computational governance platform ensures every project follows pre-determined policies for quality, compliance, security and architecture. Highly customisable standards can be set at global or local levels, as required. Crucially, projects can only move into full production if all the correct criteria are met. There isn’t any option to override or ignore the rules.
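The guardrail idea can be made concrete with a small sketch. The snippet below is purely illustrative: the data product fields, policy names and `deployment_gate` function are assumptions for demonstration, not the API of any real governance platform. The point it shows is the one made above: every policy, whether global or domain-local, must pass before a data product is allowed into production, with no override.

```python
from dataclasses import dataclass

# Hypothetical sketch of a computational governance "guardrail".
# All names and fields here are illustrative, not any platform's real schema.

@dataclass
class DataProduct:
    name: str
    owner: str
    schema_documented: bool
    pii_encrypted: bool
    retention_days: int

# Each policy is a (description, predicate) pair; global and
# domain-local policies can be combined into one list.
POLICIES = [
    ("owner assigned", lambda p: bool(p.owner)),
    ("schema documented", lambda p: p.schema_documented),
    ("PII encrypted at rest", lambda p: p.pii_encrypted),
    ("retention within limit", lambda p: p.retention_days <= 365),
]

def deployment_gate(product: DataProduct) -> tuple[bool, list[str]]:
    """Return (approved, violations); deployment proceeds only if approved."""
    violations = [desc for desc, check in POLICIES if not check(product)]
    return (not violations, violations)

product = DataProduct("customer-360", owner="payments-team",
                      schema_documented=True, pii_encrypted=False,
                      retention_days=400)
approved, violations = deployment_gate(product)
print(approved)    # False: the gate blocks production
print(violations)  # the two failed policies are listed for the team to fix
```

In a real platform the predicates would inspect repositories, schemas and infrastructure rather than in-memory flags, but the gating principle is the same: the checks are executed by the platform, not remembered by people.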
Computational governance can be empowering
While this might seem restrictive, a standardised way of working brings many benefits. To streamline processes, intelligent automated templates help data practitioners quickly initiate new projects and search for relevant data. The platform can oversee the deployment of data products, certifying their compliance and taking care of resource provisioning before they enter production, freeing teams from the burden of infrastructure technicalities, whether in the cloud or on-premise. These automated templates simplify life for users, dramatically reducing delivery schedules and speeding up time to market.
Users with no technical background can easily discover available data, inspect exposed data contracts, request access to data offered as output ports and even provide feedback to data product owners by publicly asking questions and requesting improvements. This means that anyone with the right user permissions to retrieve business-relevant information can start delivering value straightaway. Data that has already been productised can be reused by others. Rather than feeling limited by the system, users are empowered to discover insights and opportunities within the data, instead of wasting time on administration and re-verification tasks.
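To illustrate what a consumer-facing data contract on an output port might look like, here is a minimal sketch. The structure and field names are assumptions invented for this example; real data contracts are typically richer, covering quality guarantees, SLAs and access terms.

```python
from dataclasses import dataclass

# Illustrative sketch of a data contract attached to a data product's
# output port. Field names and structure are assumptions, not a standard.

@dataclass
class Column:
    name: str
    dtype: str
    description: str

@dataclass
class DataContract:
    product: str
    output_port: str
    refresh: str
    columns: list[Column]

    def summary(self) -> str:
        """Human-readable view a non-technical consumer could inspect."""
        cols = ", ".join(f"{c.name}:{c.dtype}" for c in self.columns)
        return f"{self.product}/{self.output_port} ({self.refresh}): {cols}"

contract = DataContract(
    product="customer-360",
    output_port="monthly_spend",
    refresh="daily",
    columns=[
        Column("customer_id", "string", "Pseudonymised customer key"),
        Column("month", "date", "First day of the month"),
        Column("total_spend", "decimal", "Sum of card transactions"),
    ],
)
print(contract.summary())
```

Because the contract is explicit and published alongside the product, a consumer can judge whether the data fits their need, and request access or improvements, without ever opening the underlying pipelines.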
Bringing certainty to decision-making
By overseeing all data tools and technologies rather than replacing them, computational governance enables a change in data management thinking and investment. It provides the guardrails that are vital when implementing a data mesh. In this way, organisations can take advantage of the potential value in their data silos without the risk of creating an even bigger data mess.
Technology-agnostic platforms also ensure future-proofing so customers can bring in new tools, and structured or unstructured data, as their organisation continues to evolve. The result will always be consistent data quality standards on every project, bringing certainty and reliability to decision-making at all levels, and informing strategic planning for the future.
Andrea Novara is the engineering lead and banking & payments business unit leader at Agile Lab.
Read more
Data mesh: the next big data architectural shift – Data is at the heart of everything that we do, and that trend isn’t slowing down
Data storage problems and how to fix them – Digitising data storage can be a daunting task and some of the biggest barriers businesses face are with infrastructure, costs, security, compliance and people, says Kubair Shirazee, founder of AgiliTea
Why data isn’t the answer to everything – Splunk’s James Hodge explains the problem with using data (and AI) in helping you make key business decisions