Explainable AI: The margins of accountability

You’re pacing along a hospital corridor, holding your child’s hand. She is lying sedated on a gurney that’s bumping towards the operating theater. It squeaks to a halt and a hurried member of hospital staff thrusts a form at you to sign. It describes the urgent surgical procedure your child is about to undergo—and it requires your signature if the operation is to go ahead. But here’s the rub—at the top of the form, in large, bold letters, it says “DIAGNOSIS AND SURGICAL PLAN COPYRIGHT ACME ARTIFICIAL INTELLIGENCE COMPANY.” At this specific moment, do you think you are owed a reasonable, plain-English explanation of all the inscrutable decisions that an AI has lately been making on your daughter’s behalf? In short, do we need explainable AI?

There are many other examples where one or more of the actors may consider themselves entitled to an explanation of the reasoning processes behind the decisions of an AI. What about the use of AI to prioritize targets in the modern battlespace? Or when an AI becomes involved in the criminal justice system? These scenarios are not the stuff of science fiction; they are business-as-usual today and are in the vanguard of the emerging explainable AI (XAI) movement.

The drive towards explainable AI has been gathering momentum for some time. At its core, it’s all about trust. How much can anyone trust a recommendation from an AI? What if that recommendation involves a high-stakes choice? These are, at least partially, social issues, and they are sharpened by organizations eagerly awaiting the staggering $16 trillion tidal wave of AI solutions said to be on its way.

Trust in me

In PwC’s 2017 Global CEO Survey, 67% of business leaders believed that AI and automation would negatively affect stakeholder trust levels in their industry over the next five years.

People are happy to trust an AI when they don’t have much skin in the game—say, when asking for a movie recommendation. But with high-impact decisions, such as a medical diagnosis, they are far more discerning. Crucially, there is a tension between getting the best decisions and getting the best explanations. The cruel irony is that the most successful algorithms (deep neural networks and the like) are the worst at explaining themselves. Some experts go as far as to say that this very quality is what gives them the ability to achieve the best results, making explainable AI a challenge, to put it mildly.

Tension and balance

Although in reality there are many more intermediate technologies than are shown, diagrams such as the one below are often used to give an intuition about the tension and balance between predictive power and explainability.


[Diagram: predictive power versus explainability for common machine-learning techniques. Data source: DARPA]

We are entering a new age of technology with modern machine learning at its core—one made possible only because massive compute resources have become relatively inexpensive and widely available. Sadly, we can’t have our cake and eat it: the highest-performing machine learning models are largely opaque, non-intuitive, and difficult for people to understand—even experts, let alone lay people. This is partly because such systems have no explicit knowledge representation; what they “know” is distributed throughout their myriad interconnections, in configurations that defy our current human ability to see the patterns in them.

Not to be too pessimistic: depending on the technology, there are still some interesting research approaches:

  • The RETAIN model, developed at the Georgia Institute of Technology, was introduced to help medical professionals understand the reasoning behind a model’s predictions of patient heart failure risk.
  • Local Interpretable Model-Agnostic Explanations (LIME), in which the inputs to a model are perturbed and its outputs observed, yielding insights into which inputs are the strongest predictors (see the sketch after this list).
  • Layer-wise Relevance Propagation (LRP), which applies to image recognition tasks: the contribution of each individual pixel to the model’s prediction can be assessed for various image classifiers. Like the other techniques, though, its output still requires a human expert to interpret.
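
To make the LIME idea concrete, below is a minimal sketch of the perturb-and-observe approach in Python: generate perturbed copies of one input, query the black-box model, and fit a small, proximity-weighted linear surrogate whose coefficients indicate which features most influence the prediction locally. This is an illustration of the underlying idea rather than the official `lime` library; the `predict_fn`, `x_patient`, kernel width and other details are assumptions made for the example.

```python
import numpy as np
from sklearn.linear_model import Ridge

def explain_locally(predict_fn, x, n_samples=1000, scale=0.1, seed=0):
    """Fit a weighted linear surrogate around a single instance x (1-D array)."""
    rng = np.random.default_rng(seed)
    # 1. Perturb the instance with small Gaussian noise.
    samples = x + rng.normal(0.0, scale, size=(n_samples, x.shape[0]))
    # 2. Observe the black-box model's outputs for the perturbed inputs.
    preds = predict_fn(samples)
    # 3. Weight samples by proximity to x so the surrogate stays local.
    distances = np.linalg.norm(samples - x, axis=1)
    weights = np.exp(-(distances ** 2) / (2 * scale ** 2))
    # 4. Fit a simple, interpretable model; its coefficients approximate
    #    how strongly each feature drives the prediction near x.
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(samples, preds, sample_weight=weights)
    return surrogate.coef_

# Hypothetical usage with any fitted classifier exposing predict_proba:
# weights = explain_locally(lambda X: model.predict_proba(X)[:, 1], x_patient)
# top_features = np.argsort(-np.abs(weights))  # strongest local influences first
```

The proximity weighting is what keeps the explanation local: perturbations far from the instance barely influence the fit, so the coefficients describe the model’s behaviour around that one decision rather than globally.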

Taken together, these techniques and other research initiatives aim to shift the diagram: each technology seeks improvements in both predictive power and explainability, moving explainable AI closer to reality.


[Diagram: the same techniques, each moving towards greater predictive power and greater explainability. Data source: DARPA]

Smoke and mirrors?

Some organizations, rather unhelpfully in our opinion, sense a bandwagon effect and are making some bullish claims.

Any organization claiming to have pioneered explainable AI for over 25 years has to be finessing things a little. What is certain is that whatever was being explained 25 years ago bears no relation to what we are trying to explain today.

Explanations, explanations and explainable AI

DARPA envisions “… the development of new or modified machine-learning techniques that will produce more explainable models. These models will be combined with state-of-the-art human-computer interface techniques capable of translating models into understandable and useful explanation dialogues for the end user.” It probably makes sense to think of things this way, but it still sounds some distance from a single, concise, information-rich explanation.

The key point is that neural networks are fallible, and we don’t really know why they make the choices they do. As algorithms become more complex, the task of producing an interpretable version becomes more difficult. Hence DARPA’s belief that explanation dialogues are a necessary pit stop on the route to understanding.

When a contrarian might have a point

Peter Norvig, director of research at Google, offered what is to some a surprising commentary on XAI when he cast doubt on its value. At a University of New South Wales event in 2017, he remarked: “You can ask a human, but, you know, what cognitive psychologists have discovered is that when you ask a human you’re not really getting at the decision process. They make a decision first, and then you ask, and then they generate an explanation that may or may not be the true explanation.”

“Besides,” said Norvig, “explanations alone aren’t enough; we need other ways of monitoring the decision-making process.”

We live in interesting times!

The finger and the moon

Some interesting imagery comes from the Shurangama Sutra, a core text for Buddhist training.

“Imagine someone is trying to show you the moon by pointing at it.”

If someone points to the moon, don’t just look at the finger, because you’ll miss the moon, and you might even think that the finger is the moon.

Maybe some bright spark will have a similar revelation when it comes to interpreting the cryptic hints offered by modern AI systems. Maybe one day, we will learn to look directly at the moon itself and become awestruck by its beauty.

Information about the author

Yaroslav Kuflinski is an AI/ML Observer at Iflexion. He has deep experience in IT and keeps up to date with the latest AI/ML research. Yaroslav focuses on AI and ML as tools for solving complex business problems and optimizing operations.
