What is explainable AI?
Explainable AI is the idea that an AI algorithm should be able to explain how it reached a conclusion in a way humans can understand. There’s a well-known “black box problem”: an AI algorithm can arrive at a result without being able to give details about the factors that produced it. If that’s the case, how can people confidently trust what the algorithm says?
Getting to explainable AI is an admirable and necessary goal, but reaching it is not straightforward. Here are four things the tech industry needs to recognize before explainable AI becomes a reality.
1. Explanations may not be the best way to build trust
It’s understandable why people assert that if AI can explain itself, the general public, as well as the businesses that use AI to make decisions, will find it more trustworthy. But some argue that working towards explainable AI is not the best way to create that trust. Rather, testing and analysis could provide the answers people are looking for.
That’s because the methods AI uses to draw conclusions may be so complex that humans could not grasp them even with an explanation in hand. Instead, trust could come from letting AI users test and analyze the decisions a system makes and confirm they are consistent enough to demonstrate that it works as expected.
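As a rough illustration of that testing-based approach, here is a minimal Python sketch, not tied to any particular product, that checks whether a trained classifier’s decisions stay consistent under small input perturbations. The model, data, noise scale, and threshold are all hypothetical stand-ins.

```python
# A minimal sketch of consistency testing: instead of asking the model to
# explain itself, check that small, irrelevant perturbations of an input
# do not flip its decision. Model and data are hypothetical stand-ins.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=8, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

def decision_is_stable(model, x, noise_scale=0.01, trials=100):
    """Return the fraction of small random perturbations that leave
    the predicted class unchanged (1.0 = fully consistent)."""
    base = model.predict(x.reshape(1, -1))[0]
    rng = np.random.default_rng(0)
    perturbed = x + rng.normal(0, noise_scale, size=(trials, x.size))
    return float(np.mean(model.predict(perturbed) == base))

# Flag any test case whose decision flips under tiny input changes.
for i, x in enumerate(X[:5]):
    print(f"case {i}: stability={decision_is_stable(model, x):.2f}")
```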
Explainable AI will still be useful if it happens, but people should not see it as the be-all and end-all that unlocks the mysteries of artificial intelligence.
2. Explainable AI directly relates to application design
Another crucial thing to realize about explainable AI is that, contrary to what many people believe, the AI algorithm itself is not the right starting point for getting answers. Instead, making progress with explainable AI means starting at the application level.
For example, an AI app used to facilitate a loan approval process would ideally let a person go back through each step the tool took and the path those steps created. A reviewer could then drill down and see which characteristics of a person’s application triggered an approval or denial.
Taking that approach does not necessarily produce fully explainable AI for a given application, but keeping explainability in mind is an excellent strategy for anyone building applications that use AI.
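To make the idea concrete, here is a hypothetical sketch of what application-level explainability might look like for a loan tool: every rule that fires is recorded in an audit trail, so a reviewer can trace which characteristics drove the outcome. The rules, thresholds, and field names are invented purely for illustration.

```python
# A hypothetical sketch of designing for explainability at the application
# level: each rule the loan tool applies is recorded, so a reviewer can
# later trace which characteristics of an application drove the outcome.
from dataclasses import dataclass, field

@dataclass
class LoanDecision:
    approved: bool = True
    trail: list = field(default_factory=list)  # ordered record of each step

def evaluate_application(app: dict) -> LoanDecision:
    decision = LoanDecision()
    checks = [
        ("credit_score >= 650", app["credit_score"] >= 650),
        ("debt_to_income <= 0.40", app["debt_to_income"] <= 0.40),
        ("years_employed >= 2", app["years_employed"] >= 2),
    ]
    for rule, passed in checks:
        decision.trail.append((rule, "passed" if passed else "failed"))
        if not passed:
            decision.approved = False
    return decision

result = evaluate_application(
    {"credit_score": 700, "debt_to_income": 0.45, "years_employed": 3}
)
print("approved:", result.approved)
for rule, outcome in result.trail:
    print(f"  {rule}: {outcome}")
```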
3. Reinforcement learning is more complex than people think
AI algorithms learn in several ways. There’s supervised learning, where the AI learns from pre-labeled data that shows it how examples should be categorized. Alternatively, unsupervised learning happens when the algorithm scrutinizes unlabeled input data to find patterns within it. Finally, there is a much-misunderstood third learning style called reinforcement learning, which some people refer to as “self-learning.”
That kind of algorithm learns which of its actions are preferable or undesirable based on the feedback it receives. It pays attention to the responses its actions cause, then improves through trial and error.
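For readers who want to see that feedback loop in code, here is a minimal, purely illustrative sketch of tabular Q-learning, one common form of reinforcement learning, on a made-up five-cell corridor where the only reward comes from reaching the rightmost cell; the environment and constants are invented for this example.

```python
# A minimal sketch of the trial-and-error loop described above: a tabular
# Q-learning agent on a toy 5-cell corridor, rewarded only for reaching
# the rightmost cell. Environment and constants are made up.
import random

N_STATES, ACTIONS = 5, [-1, +1]          # move left or right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.2    # learning rate, discount, exploration

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # Explore occasionally, otherwise exploit the current estimates.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Update the estimate using the feedback the environment provides.
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state

# The learned policy: the preferred move in each state.
print({s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES)})
```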
Unfortunately, reinforcement learning is not so straightforward, because success in any complex task comes from the sum of many actions rather than any single one. People must also account for the conditions that make reinforcement learning effective. For example, the system must deliver a concrete reward signal, yet different scenarios have different desirable actions.
Moreover, any system that uses reinforcement learning appropriately must account for every relevant variable in the environment. Given the undeniable complexity of many real-world problems, it’s easy to see how that could be an exceptionally time-consuming step.
When people strive towards the goal of explainable AI, they need to remember that AI algorithms learn in more than one way, and that reinforcement learning is particularly tricky because of all the variables at play. Sorting out every detail of why a reinforcement learning algorithm reached a conclusion could be a prohibitively lengthy process.
4. All-encompassing conversations about explainable AI should include an ethics component
When people talk about AI, ethical dilemmas often come into play. That’s especially likely to happen when using artificial intelligence for matters that could result in injury or death, such as to bolster a country’s defense forces. Even though AI machines may eventually be smarter than the people who created them, humans still have a responsibility to engineer AI-based machines that behave as ethically as possible.
Some people even argue that reaching the milestone of explainable AI would mean holding machines to a higher standard than humans. After all, when people ask us to fully explain our decisions and how we arrived at them, we often can’t. Fleeting ideas and primal urges are just two of the things that can lead people to act before they consciously realize their reasons.
But what makes a machine behave ethically versus unethically? Programming almost certainly comes into play, but it likely doesn’t tell the whole story. So, although making AI explain itself is crucial, there should also be a laser-sharp focus on making AI ethical, even before explainability arrives.
A long road ahead
This list highlights some of the reasons why many people love discussing explainable AI yet often fall short when it comes to ideas for actually making it happen.
It’s nonetheless a valuable goal, and it will remain a prominent quest as people continue to develop AI and achieve new things with it.