How can businesses stop AI from going bad?

The initial challenge of implementing artificial intelligence (AI) in business functions is comparable to that of any other form of task delegation.

Relinquishing complete control over a process and its minutiae is intimidating, but necessary to grow a business, and it is something most managers in any organisation are well-practised in. Of course, delegating to AI rather than to hired staff brings all of the same time-saving, efficiency-driving benefits, but it also carries certain risks.

Popular films have already set the stage for reservations about AI. But unlike sci-fi antiheroes such as The Terminator or the machines in The Matrix, where the technology targets humanity as a common enemy, real-life examples of AI gone wrong raise concerns about entrenching existing discrimination. Microsoft’s Tay chatbot infamously displayed discriminatory attitudes after only a short period of exposure. More recently, Apple, the consumer tech company that prides itself on “simplicity, transparency and privacy”, demonstrated that progress is still needed when the Apple Card offered significantly higher borrowing limits to men than to women from identical financial backgrounds, and Apple had no immediate explanation.

It’s no wonder, then, that consumers need convincing. In a recent Pega survey, a majority of customers (70%) confirmed they still prefer human customer service, which points to a significant lack of trust in AI. Brands must overcome this trust deficit in order to reap the myriad benefits of the technology.

So, how can CIOs avoid replicating the mistakes of the past and stop AI from going bad in the future?

Starting from the very beginning of the process, CIOs can help AI be “good” by ensuring that the data used to build the algorithms is itself ethical and unbiased. Gathering and using data from ethical sources significantly reduces the risk of harbouring toxic datasets that may infect systems with problematic biases further down the line.

This is especially crucial for highly regulated industries, which will need to identify biases already present and remedy them accordingly. Using insurance as an example, CIOs should take care not to rely on data that over-represents one particular demographic or gender, which might skew averages and inform non-representative policies.
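
To make that concrete, a simple representation check can flag skew before any model is trained. The sketch below is a minimal Python illustration; the column name, sample and baseline shares are assumptions, not real figures.

```python
import pandas as pd

# Hypothetical policyholder sample; values are illustrative only.
df = pd.DataFrame({"gender": ["male"] * 70 + ["female"] * 30})

# Expected population shares (assumed baselines, not real statistics).
expected = {"female": 0.50, "male": 0.50}

# Compare each group's share of the training data against its baseline.
observed = df["gender"].value_counts(normalize=True)
for group, baseline in expected.items():
    share = observed.get(group, 0.0)
    if abs(share - baseline) > 0.10:  # flag deviations beyond 10 points
        print(f"Warning: {group} makes up {share:.0%} of the data "
              f"(expected ~{baseline:.0%}); averages may be skewed.")
```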

Collecting a rich sample of ethical, GDPR-compliant, representative data from consenting customers not only improves the accuracy of the AI it powers, it also reduces the work needed to “clean” it. CIOs should screen the data they use for inconsistencies and errors before feeding it into a functioning AI system, but when the data is obtained from a reputable and reliable source in the first place, this process is a far less daunting task.
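
In practice, that screening step can start with checks for duplicates, missing values and implausible entries. A minimal sketch, with hypothetical column names and deliberately planted errors:

```python
import pandas as pd

# Illustrative customer records; the column names are assumptions.
df = pd.DataFrame({
    "customer_id": [1, 2, 2, 3, 4],
    "age": [34, 29, 29, 151, None],  # 151 and None are planted errors
})

# Screen for the usual suspects before any training takes place.
print("duplicate rows:", df.duplicated().sum())
print("missing values:", int(df["age"].isna().sum()))
print("implausible ages:", int(((df["age"] < 18) | (df["age"] > 110)).sum()))

# Drop exact duplicates and rows missing critical fields.
clean = df.drop_duplicates().dropna(subset=["age"])
```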

Once businesses can be confident that their data is reliable, further ethical guidelines can be put in place at each stage of the decisioning process. Rather than relying on an opaque “black box” of inputs and outputs with no real insight into how outcomes are arrived at, CIOs should leave nothing to chance. Greater controls that map AI decisioning provide the transparency needed to win customer trust.

Customers build trust around reason, which is why businesses will need to provide customers with an end-to-end explanation of how their AI arrived at its decision. Once customers have proof that the AI can produce ethical decisions, they can rest assured that the likelihood of the AI going bad is much lower, in turn helping to build trust in the company.
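
One lightweight way to provide such an explanation is to pair each decision with per-feature contributions. The sketch below assumes a simple scikit-learn logistic regression with illustrative feature names and toy figures; production systems would typically use a dedicated explainability tool, but the principle is the same.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy credit decisions; features and values are illustrative assumptions.
feature_names = ["income_k", "existing_debt_k", "years_as_customer"]
X = np.array([[40, 5, 2], [90, 1, 10], [25, 9, 1], [60, 2, 5]])
y = np.array([0, 1, 0, 1])  # 1 = credit approved

model = LogisticRegression().fit(X, y)

def explain(applicant):
    """Return each feature's contribution to the decision score (log-odds)."""
    contributions = model.coef_[0] * applicant
    return sorted(zip(feature_names, contributions),
                  key=lambda pair: abs(pair[1]), reverse=True)

# An end-to-end trace for a single hypothetical applicant.
for name, weight in explain(np.array([55, 3, 4])):
    print(f"{name}: {weight:+.2f}")
```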

Moreover, having the right technology in place to access a detailed record of each decisioning process makes errors far easier for the CIO to capture and remedy. In instances where the AI might not be performing as intended, businesses are far better placed to restore their reputation if they can identify the offending issue, confirm that they have addressed it, and demonstrate a commitment to improving their AI governance moving forward.
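
Such a record need not be elaborate. A minimal sketch of a decision audit log, with illustrative field names, might look like this:

```python
import json
import time
import uuid

# Append-only audit trail; the field names are illustrative assumptions.
def log_decision(inputs, outcome, model_version, path="decision_log.jsonl"):
    record = {
        "decision_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "model_version": model_version,
        "inputs": inputs,
        "outcome": outcome,
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record["decision_id"]

# Usage: every automated decision leaves a reviewable trail.
log_decision({"income_k": 55, "existing_debt_k": 3}, "approved", "credit-v1.2")
```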

On the flip side, AI also helps to remove the human error that can result in poor customer service. When human customer service agents fail to resolve a customer issue, AI can pick up the slack; it can also limit unethical decisions, such as approving high-interest loans for low-income families.
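
One hypothetical way to enforce such a limit is a policy guardrail layered on top of the model’s output. The thresholds below are illustrative assumptions, not regulatory figures.

```python
# Override model approvals that breach a simple ethical policy rule.
def guardrail(decision, applicant):
    if (decision == "approve"
            and applicant["annual_income"] < 20_000
            and applicant["interest_rate"] > 0.25):
        return "refer_to_human", "high-rate loan for low-income applicant"
    return decision, None

decision, reason = guardrail(
    "approve", {"annual_income": 15_000, "interest_rate": 0.30})
print(decision, reason)  # refer_to_human, high-rate loan for low-income applicant
```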

If businesses can demonstrate that their AI is governed responsibly and far less likely to go bad, the technology can become the basis of a more sustainable, mutually beneficial customer relationship. Not only does AI improve efficiency and reduce human error; with ethical guidelines and responsible use of data, it can also improve the success rate of sales and increase profits by ensuring relevance to specific customers.

Written by Lee Whittington, 1:1 customer engagement specialist at Pegasystems.
