Experts have hailed the arrival of the “age of artificial intelligence” (AI) for years, if not decades. But what was sold to us as AI was often more hype than reality.
Rather than smart machines learning from the information fed to them, these systems were explicitly programmed to respond to specific events in predetermined ways.
Think of it as the machine equivalent of rote memorisation of answers by humans, without a grasp of the underlying logic.
Over the past year, the AI landscape has begun to shift dramatically, with meaningful strides being made across its three key underlying elements: massive computing capability, algorithms, and access to data.
In fact, 2016 may well go down in history as the year that AI finally came of age.
Companies such as Google, Intel, Microsoft, Salesforce, and Samsung have all made substantial acquisitions in the space, and the buzz surrounding AI has never been greater.
But amid all of the talk about algorithms and easy access to large-scale computing, one element seems to have received scant attention: the critical role that large data sets play in making AI valuable to the organisation.
In fact, one could argue that data is the most critical part of the AI equation, because without access to the information that allows the AI engine to learn, grow, process transactions, and handle exceptions, businesses are left with nothing more than a lot of impressive technology that still requires substantial human intervention.
Trying to extract value from AI without access to data is like trying to make ice without water.
Think for a moment about the massive amount of data flowing through corporate networks, governments, and other institutions – everything from invoices to credit card transactions to insurance claims to stock trades to airline reservations.
The last ten years have seen an intense focus on the ability of organisations to extract value from specific data points, whether through pooling, sharing or analysis, to make better, more informed decisions.
In this respect, the value of “big data” as a source of business insight is well understood.
What has been less well understood is the value that data holds as unrefined grist for the AI mill, and the critical role it can play in helping machines understand correlations and, ultimately, take actions based on those correlations.
The more aggregate data that can be fed to the AI engine, the smarter the AI engine will become.
The smarter the AI engine becomes, the less human intervention is required. And the less human intervention required, the closer society gets to realising the true promise of AI.
In this new era, organisations that have access to large pools of data – both large, global corporations and the business process management (BPM) service providers with whom they’ve partnered – are holding an enormously powerful asset.
In fact, BPM companies may be at an even greater advantage, as they have access not only to individual company data but also to broader, aggregate data across numerous organisations and industries.
Importantly, when discussing the AI value this data holds, the size or nature of any specific transaction does not matter. For the purposes of AI, factors such as the amount of a transaction or the identity of the customer are largely irrelevant.
The real value lies in the aggregation of the data to test, refine, and re-test the AI engine, allowing it to perform increasingly sophisticated feats of supervised learning and unsupervised learning.
At the risk of oversimplification: in supervised learning, the machine learns by example, referencing “labelled” data and the traits that consistently define it.
For example, AI may learn what a home is by being taught to recognise the common traits all homes have (a door, windows, a roof, etc.).
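To make that concrete, here is a minimal supervised-learning sketch in Python. The choice of scikit-learn, the feature names, and the toy data are all illustrative assumptions, not anything prescribed here; the point is simply that the model is shown labelled examples and learns the traits that separate them.

```python
# A toy supervised-learning sketch of the "what is a home?" example.
# The features and data below are invented for illustration only.
from sklearn.tree import DecisionTreeClassifier

# Each row: [has_door, window_count, has_roof] -- the "labelled" traits.
examples = [
    [1, 4, 1],  # a home
    [1, 8, 1],  # a home
    [0, 0, 0],  # not a home (e.g. an open field)
    [1, 0, 0],  # not a home (e.g. a fence with a gate)
]
labels = ["home", "not home"][0:1] * 2 + ["not home"] * 2  # ["home", "home", "not home", "not home"]

model = DecisionTreeClassifier().fit(examples, labels)
print(model.predict([[1, 6, 1]]))  # -> ['home']
```

Because the labels are supplied up front, the machine never has to discover what a “home” is; it only has to find which traits reliably predict the label it was given.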
With unsupervised learning, the machine receives no labels at all; it learns by recognising the differences between data points and sorting them into categories of its own making.
For example, AI may learn what makes one home a Cape Cod style, and not a colonial, ranch, or Victorian, by recognising the traits Capes share that the other house styles do not.
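A matching unsupervised sketch, again with invented features and toy data: a clustering algorithm such as k-means is never told the style names, yet it groups the houses by the traits they share.

```python
# A toy unsupervised-learning sketch: grouping houses by style-defining
# traits, with no labels supplied. Features and data are invented.
from sklearn.cluster import KMeans

# Each row: [storeys, roof_pitch_degrees, symmetric_facade (0/1)]
houses = [
    [1.5, 50, 1],  # steep roof, dormers -> Cape Cod-like
    [1.5, 48, 1],
    [2.0, 30, 1],  # two full storeys, symmetric -> colonial-like
    [2.0, 32, 1],
    [1.0, 15, 0],  # single storey, low roof -> ranch-like
    [1.0, 12, 0],
]

# The algorithm discovers the groupings from the differences alone.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(houses)
print(clusters)  # e.g. [0 0 1 1 2 2] -- three style groups, unnamed
```

In both cases, the quality of the result rests squarely on the volume and variety of the data fed in, which is precisely the asset this article argues organisations should value.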
As AI expands its reach, delivers greater value, and continues to impact more and more areas of our life, the demand for vast amounts of aggregate “learning” data – and that data’s value – will only increase.
There are vast amounts of enterprise data in various organisational silos as well as public domain data sources; making connections between these data sets enables a holistic view of a complex problem, from which new AI-driven insights can be identified.
Organisations that think strategically now about how to position themselves as providers of data, AI’s raw material, will be well placed to benefit in this new era.
Sourced by Sanjay Srivastava, senior vice president and chief digital officer at Genpact