Meaningful artificial intelligence (AI) deployments are just beginning to take place, according to Gartner, Inc. Gartner’s 2018 CIO Agenda Survey shows that 4% of CIOs have implemented AI, while a further 46% have developed plans to do so.
“Despite huge levels of interest in AI technologies, current implementations remain at quite low levels,” said Whit Andrews, research vice president and distinguished analyst at Gartner. “However, there is potential for strong growth as CIOs begin piloting AI programmes through a combination of buy, build and outsource efforts.”
As with most emerging or unfamiliar technologies, early adopters are facing many obstacles to the progress of AI in their organisations. Gartner analysts have identified the following four lessons that have emerged from these early AI projects.
1. Aim low at first
“Don’t fall into the trap of primarily seeking hard outcomes, such as direct financial gains, with AI projects,” said Andrews. “In general, it’s best to start AI projects with a small scope and aim for ‘soft’ outcomes, such as process improvements, customer satisfaction or financial benchmarking.”
Expect AI projects to produce, at best, lessons that will help with subsequent, larger experiments, pilots and implementations. In some organisations, a financial target will be a requirement to start the project.
“In this situation, set the target as low as possible,” said Andrews. “Think of targets in the thousands or tens of thousands of dollars, understand what you’re trying to accomplish on a small scale, and only then pursue more-dramatic benefits.”
2. Focus on augmenting people, not replacing them
Historically, big technological advances have often been associated with reductions in staff headcount. Cutting labour costs is attractive to business executives, but it is likely to create resistance from those whose jobs appear to be at risk.
Organisations that pursue AI chiefly as a cost-cutting exercise can miss out on real opportunities to use the technology effectively. “We advise our clients that the most transformational benefits of AI in the near term will arise from using it to enable employees to pursue higher-value activities,” added Andrews.
Gartner predicts that by 2020, 20% of organisations will dedicate workers to monitoring and guiding neural networks.
“Leave behind notions of vast teams of infinitely duplicable ‘smart agents’ able to execute tasks just like humans,” said Andrews. “It will be far more productive to engage with workers on the front line. Get them excited and engaged with the idea that AI-powered decision support can enhance and elevate the work they do every day.”
3. Plan for knowledge transfer
Conversations with Gartner clients reveal that most organisations are not well prepared to implement AI. Specifically, they lack internal data science skills and plan to rely heavily on external providers to fill the gap. In the CIO survey, 53% of organisations rated their own ability to mine and exploit data as “limited”, the lowest level on the scale.
Gartner predicts that through 2022, 85% of AI projects will deliver erroneous outcomes due to bias in data, algorithms or the teams responsible for managing them.
“Data is the fuel for AI, so organisations need to prepare now to store and manage even larger amounts of data for AI initiatives,” said Jim Hare, research vice president at Gartner.
“Relying mostly on external suppliers for these skills is not an ideal long-term solution. Therefore, ensure that early AI projects help transfer knowledge from external experts to your employees, and build up your organisation’s in-house capabilities before moving on to large-scale projects.”
4. Choose transparent AI solutions
AI projects will often involve software or systems from external service providers. It’s important that some insight into how decisions are reached is built into any service agreement.
“Whether an AI system produces the right answer is not the only concern,” said Andrews. “Executives need to understand why it is effective, and offer insights into its reasoning when it’s not.”
Although it may not always be possible to explain all the details of an advanced analytical model, such as a deep neural network, it’s important to at least offer some kind of visualisation of the potential choices. In fact, in situations where decisions are subject to regulation and auditing, it may be a legal requirement to provide this kind of transparency.
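To make this kind of transparency more concrete, the short Python sketch below shows one widely used approach: reporting which input features most influence a model’s predictions. It is an illustration only and not part of the Gartner research; the dataset, the scikit-learn library and the random forest model are assumptions chosen for demonstration, standing in for whatever system a provider actually supplies.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Load a small public dataset purely for illustration.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A simple, inspectable model stands in for whatever the external provider supplies.
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Permutation importance: how much does accuracy drop when each feature is shuffled?
# Larger drops mean the feature matters more to the model's decisions.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Report the five most influential features: a simple, auditable summary that could
# accompany the model's outputs in a service agreement or audit trail.
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[idx]:<25} importance = {result.importances_mean[idx]:.3f}")
```

Even this coarse level of insight gives executives and auditors a starting point for the “why” behind a model’s answers, and it can be written into a service agreement as a minimum reporting requirement.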