One of the delegates at the recent Information Age roundtable debate had a surprising revelation to share: “If you look at some of our internal reports, 85% of our revenues comes from agriculture.” What made this information so surprising is that the company he represents is a global IT consultancy.
The anecdote illustrates not some innovative, money-spinning marriage of farming and technology, but the problems organisations can encounter with bad data. As it transpires, agriculture simply comes first in the company’s list of industry sectors; when consultants file their reports, it is the easiest box to check, skewing the figures and destroying any hope of a sensible picture of revenue generation.
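The skew is easy to reproduce in miniature. The sketch below (Python, with invented figures and an invented sector list) shows how a handful of defaulted entries can make one sector appear to dominate revenue:

from collections import Counter

# Sketch of how a defaulted first option skews aggregates. The sector
# list and revenue figures are invented for illustration.
SECTORS = ["Agriculture", "Banking", "Manufacturing", "Retail"]

# Consultants filing reports often leave the sector at its default,
# the first entry in the list, rather than picking the real one.
reports = [
    {"revenue": 120, "sector": SECTORS[0]},  # real sector: Banking
    {"revenue": 80,  "sector": SECTORS[0]},  # real sector: Retail
    {"revenue": 40,  "sector": "Banking"},   # correctly filed
    {"revenue": 10,  "sector": SECTORS[0]},  # genuinely agriculture
]

revenue_by_sector = Counter()
for report in reports:
    revenue_by_sector[report["sector"]] += report["revenue"]

total = sum(revenue_by_sector.values())
for sector, revenue in revenue_by_sector.items():
    print(f"{sector}: {revenue / total:.0%}")  # Agriculture: 84%, misleading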
Of course, it is not just indolent form fillers that cause problems. Another delegate described how years of careful consideration about how his organisation defined records were undone by a single acquisition. In this case, his company had agreed to create a specific link between a customer and a location; as a result of the acquisition, though, it suddenly had a whole new set of itinerant customers, and no clear, consistent way of defining them. Being able to identify and bill customers accurately is “pretty fundamental to our business”, the delegate acknowledged. “But it turned out to be hard.”
The proliferation of databases and business applications throughout the organisation has created a glut of information. Often the applications share components of information but use different data models; often the same data is input in contradictory ways.
One of the more elegant approaches to tackling this distributed data problem is master data management (MDM). In this model, the multiple information silos, each with its own data model, are fed from a central hub where the master reference data is kept, ensuring that any changes are harmonised across the various databases.
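A minimal sketch of that hub-and-spoke idea, written here in Python, might look like the following; the class and silo names (MasterDataHub, Silo, “crm”, “billing”) are hypothetical, purely for illustration:

class MasterDataHub:
    """Holds the master reference data and keeps downstream silos in sync."""

    def __init__(self):
        self.master = {}  # master reference data, keyed by entity ID
        self.silos = []   # downstream systems fed from the hub

    def register_silo(self, silo):
        self.silos.append(silo)

    def update(self, entity_id, attributes):
        # Change the master record once, then propagate it so every
        # silo carries the same, harmonised values.
        record = self.master.setdefault(entity_id, {})
        record.update(attributes)
        for silo in self.silos:
            silo.apply(entity_id, record)

class Silo:
    """A downstream application holding its own local copy of the data."""

    def __init__(self, name):
        self.name = name
        self.local = {}

    def apply(self, entity_id, record):
        self.local[entity_id] = dict(record)  # overwrite with master values

hub = MasterDataHub()
hub.register_silo(Silo("crm"))
hub.register_silo(Silo("billing"))
hub.update("cust-042", {"name": "Acme Ltd", "sector": "Manufacturing"})

The point of the design is that a change is made once, at the hub, and every downstream copy is overwritten with the harmonised record.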
Such an approach is seen as increasingly necessary as business leaders introduce service-oriented architecture, where automating a set of business processes can entail integrating components from numerous applications, each potentially with a different data model. Having a master set of data becomes an essential piece of the infrastructure.
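In practice, each integrated application names and shapes the same fields differently, so the master set must be translated into each local data model. A hypothetical sketch (the applications and field mappings are invented):

# Translating one master record into the differing data models of two
# integrated applications. Applications and field names are invented.
MASTER_RECORD = {"customer_id": "042", "name": "Acme Ltd", "country": "GB"}

# Per-application mapping from master field to local field name.
FIELD_MAPS = {
    "ordering_app": {"customer_id": "custNo", "name": "customerName"},
    "shipping_app": {"customer_id": "accountRef", "country": "destCountry"},
}

def to_local_model(master, app):
    """Project the master record into one application's data model."""
    return {local: master[field] for field, local in FIELD_MAPS[app].items()}

for app in FIELD_MAPS:
    print(app, to_local_model(MASTER_RECORD, app))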
But how ready are today’s organisations for master data management? In some, as in the case of the aforementioned IT consultancy, senior executives have not only bought into the concept but have already put plans into action. In this case, the company interviewed several interest groups to define user requirements, before building logical data models to support them.
That process of modelling data requirements can be best understood by considering something such as employee attributes, the delegate explained. An HR system may store something in the region of 200 attributes about a single employee, from name and date of birth, to qualifications and salary scale. “Perhaps only 20 to 30 of those attributes are ‘interesting’ to another system. So we asked business people to identify the things that were interesting to them, and only used those attributes in building our data models.”
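That winnowing step is easy to picture in code. The sketch below, with invented attribute names and a heavily abridged record, projects only the ‘interesting’ subset of an HR record into the master model:

# An HR record may carry some 200 attributes, but the master data model
# only needs the handful other systems care about. Attribute names and
# values are invented for illustration.
hr_record = {
    "name": "J. Smith",
    "date_of_birth": "1970-01-01",
    "qualifications": ["BSc"],
    "salary_scale": "G7",
    "desk_number": "4-112",   # two of the ~170 attributes no other
    "parking_permit": True,   # system finds 'interesting'
}

# Attributes business users identified as interesting to other systems.
INTERESTING = {"name", "date_of_birth", "qualifications", "salary_scale"}

master_view = {k: v for k, v in hr_record.items() if k in INTERESTING}
print(master_view)  # only the four attributes the master model needs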
But for other organisations, getting agreement about data models can be far more problematic. For organisations that consist of several semi-autonomous units, there may be no incentive to work together to build common data models, opined a senior information officer working in the public sector.
Indeed, central control of data can also have its downside, reported the UK IT director for a multinational luxury goods producer. He described how the sale of one unit resulted in tortuous discussions over who owned the data.
“We had gone through cost-reduction programmes that saw us centralise a lot of the data.” Whoever decided to sell the business unit certainly “never came to me to discuss the cost of separating the infrastructure as part of setting the price”, he added.
The cost of managing data remains a hurdle, according to one delegate: “The theory of master data is fine. But the reality is that unless you can commit to not changing, this is going to have to be an ongoing project, and getting the funding for that is not always straightforward.”
But despite the difficulties of imposing master data management strategies, the delegates were unanimous in recognising the need to control data.
As the information architect at a large multinational bank explained: “We’ve probably got a few copies of every bit of software ever made somewhere in the business. That means data consistency is a priority for us, and we’ve got several large programmes looking at it.”