Transparency is a hallmark of modern business. Consumers and decision makers expect complete transparency in almost every facet of technology. The court of public opinion has been brutal to companies such as Facebook that use controversial data-mining practices, yet conspicuously silent on companies that act on insights from AI platforms they can’t explain. However, public awareness of explainable AI is beginning to pick up speed.
Just recently, the EU announced that for AI to be ethical, it must be accountable, explainable and unbiased. Outside of these regulations, however, business owners have needed little encouragement to focus on developing explainable AI solutions. Being able to explain the reasoning of an AI platform offers enterprises an opportunity to develop more advanced and targeted services.
AI has become one of the most potent battlegrounds on which digital enterprises fight it out and develop new services. PwC reports that business leaders believe AI will be fundamental in the future, with 72% of survey respondents describing AI as a “business advantage.” It is no surprise, then, that decision makers see building explainable AI solutions as a competitive advantage.
Organisations from Nvidia to Equifax are starting to move away from the opaque and unaccountable decisions of black box AI to understand how an AI platform has come to a given conclusion. Developing insights is no longer enough; companies want to know precisely how the insight was created as well.
The end of black box AI
Black box AI is the term given to AI platforms that use machine learning to make decisions but can’t show the rationale behind them. Black box AI has delivered many successful services in its current format, but it is far from ideal. It isn’t uncommon for companies to be left scratching their heads wondering why an AI platform has made a particular decision. Explainability is the central challenge of black box AI.
On the topic of explainable AI for the medical domain, Andreas Holzinger states: “The problem of explainability is as old as AI and maybe the result of AI itself”. Critically, “while AI approaches demonstrate impressive practical success in many different application domains, their effectiveness is still limited by their inability to ‘explain’ their decision in an understandable way.”
In most cases, “even if we understand the underlying mathematical theories it is complicated and often impossible to get insight into the internal working of the models and to explain how and why a result was achieved.” While this isn’t a dealbreaker for lower-priority decisions, it is problematic for high-stakes uses of AI where choices are a matter of life and death.
Instances where AI is used to diagnose medical conditions or direct autonomous vehicles require complete transparency. A lack of visibility undermines the integrity of insights to the point where black box AI simply isn’t fit for high-stakes use cases, where bad decisions can have a devastating impact.
For example, running an autonomous vehicle with black box AI would essentially be taking a leap of faith with the passenger’s life and hoping that the AI had taken all the necessary variables into account. The user would have no way of scrutinising the variables used by the AI.
If AI is to reach the next level of adoption, enterprises and consumers need to be able to place their trust in it. You can’t tell if an AI platform’s logic is flawed if you can’t view the data that it used to make a decision. It wouldn’t be ethical for a company to place a machine learning platform behind the wheel without understanding its driving rationale.
The next generation of explainable AI platforms
The practical and ethical concerns around black box AI have driven many enterprises to emphasise transparency and move away from machine learning models with poor visibility. Few are more conscious of transparency than the team at Nvidia, which is developing an autonomous driving platform called PilotNet.
PilotNet uses a machine learning algorithm to learn how to drive. Nvidia has developed a technique to make the AI’s decisions more transparent: a visualisation map that highlights the visual elements the network is paying attention to as it drives.
Urs Muller, Nvidia’s Chief Architect for Self-Driving Cars, reflects: “The method we developed for peering into the network gives us the information we need to improve the system. It also gives us more confidence. I can’t explain everything I need the car to do, but I can show it, and now it can show me what it learned.”
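To give a flavour of how this kind of visualisation can work in general, the sketch below computes a simple gradient-based saliency map: which input pixels most influence a model’s steering output. This is a minimal illustration, not Nvidia’s actual PilotNet method; the tiny network, dummy camera frame and tensor sizes are all invented for the example.

```python
import torch
import torch.nn as nn

# Tiny stand-in for a steering network: a few conv layers -> one steering angle.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 1),
)
model.eval()

# Dummy camera frame; requires_grad lets us ask "which pixels mattered?"
frame = torch.rand(1, 3, 200, 320, requires_grad=True)

steering = model(frame)   # predicted steering angle
steering.backward()       # gradient of the output with respect to every pixel

# Large gradient magnitudes mark the pixels that most influenced the decision;
# reducing over colour channels gives a heat map to overlay on the camera frame.
saliency = frame.grad.abs().max(dim=1).values.squeeze()
print(saliency.shape)     # torch.Size([200, 320])
```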
Nvidia isn’t alone in focusing on developing explainable AI. A similar trend has emerged across the finance industry where consumer expectations and international regulations demand complete visibility. FICO has recognised that implementing “machine learning models into the broad lending market would likely usher in systemic risk, market confusion, and lack of transparency for consumers.”
To eliminate the problem, FICO recommends that financial institutions “focus on ultra-effective ways to augment human domain intelligence with machine intelligence to enable the rapid construction of highly predictive and explainable credit risk scoring models.”
Likewise, Matthieu Garner, SVP of Data and Analytics at Equifax, reports a similar strategy: “When you are looking at credit, there is a great need for you to explain your decision, and it took us a long time to include these complex models in credit. The adoption of explainable AI in credit has increased so much – that is, we can give reasons based on outcomes of the models.”
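As a rough illustration of what “giving reasons based on outcomes of the models” can look like, the sketch below scores an applicant with a simple logistic model and reports the features that pulled the score down. It is a minimal, hypothetical scorecard, not FICO’s or Equifax’s actual methodology; the feature names, coefficients and applicant values are invented.

```python
import numpy as np

# Hypothetical scorecard: feature names and trained coefficients are made up.
features = ["utilisation", "missed_payments", "account_age_years", "recent_enquiries"]
weights = np.array([-2.1, -1.4, 0.6, -0.8])
bias = 1.0

applicant = np.array([0.9, 2.0, 1.5, 4.0])   # hypothetical applicant data

contributions = weights * applicant           # per-feature contribution to the score
log_odds = bias + contributions.sum()
prob_good = 1.0 / (1.0 + np.exp(-log_odds))   # logistic model: probability of repayment

# Reason codes: the features that pushed the score down the most.
reasons = sorted(zip(features, contributions), key=lambda fc: fc[1])[:2]
print(f"probability of repayment: {prob_good:.2f}")
for name, contribution in reasons:
    print(f"adverse reason: {name} (contribution {contribution:+.2f})")
```

Because every feature’s contribution to the score is additive and visible, a lender can tell an applicant exactly which factors drove the decision rather than pointing to an opaque model output.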
Explainable AI: a transparent future
It is becoming increasingly clear that the ends don’t justify the means for black box AI. In the past, insights were enough to testify to the viability of an AI platform, but in more important use cases, AI platforms must be able to show the logic behind a decision.
While explainable AI is in its early stages, it will be essential for gaining consumers’ trust in more advanced applications. Until then, enterprises will have to fill in the gaps and guess at the logic an AI platform has used to make a decision.