Improving understanding of machine learning for end-users

Machine learning use cases are rising as more and more sectors, from healthcare to insurance, leverage the technology to speed up operations. Research from Algorithmia found that 76% of enterprises are prioritising AI and machine learning in their 2021 IT budgets. But despite the technology being more pervasive than ever, within enterprises as well as in everyday life, end-users can still feel left in the dark as to how machine learning works, and its value can be unclear to business leaders.

“When we first saw the introduction of ML into the enterprise, I used to think it was a badge of honour to have this black box that we couldn’t explain,” said Ed Bishop, CTO of Tessian.

“But I think we’re way past that hype cycle now, and for me, it will always be harder for any piece of software that needs to interact with the end-user, but can’t explain its predictions, to be successful. Machine learning should feel like a partner that works with you, not against you.”

This article explores how vendors can improve end-users' understanding of machine learning, ensuring that staff are kept in the loop and that the technology's value is properly understood.


Explaining the process

Firstly, machine learning processes need to be explainable. With the vast majority of models trained by human employees, it's vital that users know what information they need to provide for the tool to achieve its goal, so that any anomaly alerts are as accurate as possible.

Samantha Humphries, senior security specialist at Exabeam, said: “In the words of Einstein: ‘If you can’t explain it simply, you don’t understand it well enough’. And it’s true – vendors are often good at explaining the benefits of machine learning tangibly – and there are many – but not the process behind it, and hence it’s often seen as a buzzword.

“Machine learning can seem scary from the outset, because ‘how does it know?’ It knows because it’s been trained, and it’s been trained by humans.

“Under the hood, it sounds like a complicated process. But for the most part, it’s really not. It starts with a human feeding the machine a set of specific information in order to train it.

“The machine then groups information accordingly and anything outside of that grouping is flagged back to the human for review. That’s machine learning made easy.”
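To make that concrete, below is a minimal sketch of the train-group-flag loop Humphries describes, using scikit-learn's IsolationForest as one illustrative anomaly detector; the activity features and figures are hypothetical, not drawn from any vendor's product.

```python
# A minimal sketch of the train-group-flag loop described above.
import numpy as np
from sklearn.ensemble import IsolationForest

# 1. A human feeds the machine a set of specific information to train it,
#    e.g. features of normal user activity (logins per day, bytes sent).
#    These values are hypothetical.
normal_activity = np.array([
    [5, 1_200], [6, 1_100], [4, 1_300], [5, 1_250], [7, 1_150],
])

model = IsolationForest(contamination=0.1, random_state=0)
model.fit(normal_activity)

# 2. The machine groups new information accordingly; anything outside
#    that grouping is flagged back to the human for review.
new_activity = np.array([[5, 1_200], [60, 95_000]])
for sample, label in zip(new_activity, model.predict(new_activity)):
    if label == -1:  # -1 means the model considers this sample an outlier
        print(f"Flagged for human review: {sample}")
```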

Mark K. Smith, CEO of ContactEngine, added: “Those of us operating in an AI world need to explain ourselves – to make it clear that all of us already experience AI and its subset of machine learning every day.

“From the search engine you just used, to the conversations a brand just started with you – that is very often AI-enabled. But just like scientists owe us an explanation about how a vaccine works and how it will not include a tracking device – so do people like me and mine need to visualise and explain our AI.

“It is a work in progress, but it must be done to ensure the trust is not broken with end-users.”


Clarity and transparency

Ensuring end-user trust in machine learning requires not only clarity, but also transparency. When the algorithm flags an anomaly, both how the unusual activity was detected and why exactly it's unusual should be made clear to the user.

At Tessian, for example, machine learning is used to detect mistakes that employees may have inadvertently made when sending emails, before the message is sent. While finding such errors and informing users is important for security and for minimising communication breakdown, it's just as critical that alerts are clearly justified in non-technical language.

Bishop continued: “We look to prevent human error on email, such as sending an email to the wrong person. Machine learning may make that prediction, but end-users don’t need to know that.

“What I want as an end-user is a great user experience, and I also want to feel empowered and part of that process, rather than feeling like a machine is telling me what to do or getting in the way.

“In our case, we don’t just want to be told that a URL looks unusual, but I want to understand why it’s unusual. An example of this can be that no one in the organisation has visited that URL.”
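As an illustration of Bishop's URL example (and not Tessian's actual implementation), the hypothetical `explain_url_flag` function below pairs a flag with a plain-English reason the end-user can act on:

```python
# An illustrative sketch of pairing a flag with a human-readable reason.
# `visit_log` and the alert wording are hypothetical.
from urllib.parse import urlparse

# Domains that staff in the organisation have previously visited.
visit_log = {"docs.example.com", "intranet.example.com"}

def explain_url_flag(url: str) -> str | None:
    """Return a non-technical justification if the URL looks unusual."""
    domain = urlparse(url).netloc
    if domain not in visit_log:
        # Explain *why* the URL is unusual, not just *that* it is unusual.
        return (f"This link points to {domain}, a site no one in your "
                "organisation has visited before.")
    return None

reason = explain_url_flag("https://unfamiliar-site.net/login")
if reason:
    print(f"Warning: {reason}")
```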


In addition, given the current and growing pervasiveness of the technology, trust comes from end-users being able to distinguish an application or service that is sophisticated by design from one that's powered by machine learning.

“Largely, AI and ML today are powered by examples – that is, past interactions are captured and then used to assist in future interactions,” said Greg Benson, chief scientist at SnapLogic and professor of computer science at the University of San Francisco.

“When a computer system is providing assistance for a task using AI/ML, it is important for the end-user to understand the provenance of the assistance. For example, Amazon makes it clear, when suggesting alternative products, that the suggestions are based on your browsing and purchase history.

“As an increasing number of computer systems employ AI/ML, it will be ever more important for end-users to understand why the AI/ML is providing a recommendation or prediction, in order to establish trust with a particular application.”
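A hedged sketch of what surfacing provenance might look like in practice, in the spirit of Benson's Amazon example; the `Recommendation` structure and `recommend` logic here are invented for illustration, not any vendor's real API:

```python
# A toy sketch of attaching provenance to a recommendation.
from dataclasses import dataclass

@dataclass
class Recommendation:
    item: str
    provenance: str  # which past interactions the suggestion is based on

def recommend(purchase_history: list[str]) -> Recommendation:
    # Toy logic: suggest an accessory for the most recent purchase.
    last = purchase_history[-1]
    return Recommendation(
        item=f"Accessory for {last}",
        provenance=f"Suggested because you recently bought '{last}'.",
    )

rec = recommend(["wireless keyboard"])
print(rec.item)
print(rec.provenance)  # the end-user sees why, not just what
```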

Training users and tools

Also to be considered is effective training, not only of the users but also of the tools required to put machine learning in place. Employees who are unfamiliar with how the technology works need to be brought up to speed for results to be effective going forward, while machine learning tools need to be taught to interact better with users.


“Users need to be taught what questions they should ask of data and how algorithms are used,” said Franki Hackett, head of audit and ethics at Engine B.

“Efforts like the UK, German, Dutch, Norwegian and Finnish Audit Offices’ whitepaper on machine learning governance are helpful in this area, because they set out standards for all the processes and functions that should be in place to use machine learning effectively.

“Explainability is key, and tools should explain both the logic of how they work and the data they are using. Machine learning tools need to be carefully designed so they are doing what the user thinks they are doing.

“The role of good designers and expert translators between subject area and technical people is crucial here.”
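As a loose illustration of Hackett's point (assuming, for simplicity, a linear scoring model with hypothetical feature names and weights), a tool might expose both its logic and the data behind a prediction like this:

```python
# A minimal sketch of a tool explaining both the logic of how it works
# and the data it is using. Feature names and weights are hypothetical.
feature_names = ["invoice_amount", "days_overdue", "new_supplier"]
weights = [0.2, 0.5, 1.1]  # learned coefficients in this toy example

def explain_prediction(values: list[float]) -> None:
    """Print the data used and each feature's contribution to the score."""
    print("Data used:", dict(zip(feature_names, values)))
    print("Logic: score = sum of (weight * value) across features")
    contributions = [w * v for w, v in zip(weights, values)]
    for name, c in sorted(zip(feature_names, contributions),
                          key=lambda pair: -abs(pair[1])):
        print(f"  {name} contributed {c:+.2f} to the risk score")

explain_prediction([1_000.0, 14.0, 1.0])
```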

Bishop added: “At Tessian, we feel that explainable machine learning can be a form of training. It isn’t just about building trust and empowering end-users to do their best work, but also educating users about any unusual behaviour.”


Aaron Hurst

Aaron Hurst is Information Age's senior reporter, providing news and features around the hottest trends across the tech industry.