With AI playing a more prominent role than ever in the lives of businesses and consumers, it’s vital that AI startups gain the trust of their clients and are transparent about how data is used. Incidents such as the Cambridge Analytica scandal have bred distrust among those outside the AI space, meaning that startups that use AI must place data privacy at the heart of their operations.
The latest roundtable from Information Age, organised alongside Venturers Club, explored how AI startups can better explain how user data is utilised, and how the misconceptions that have emerged around the fast-growing technology can be corrected.
Changing perceptions of AI
For clients and consumers to truly trust AI startups with their data, they need to be clear on what the technology can do and what involvement it requires from users. This can mean changing perceptions of AI and distinguishing it from related but distinct technologies.
“I think there are wrong impressions, and then there are negative impressions. Impressions of AI can be wrong without being negative, for example, they are sometimes somewhat unrealistic,” said Michael Boguslavsky, head of AI at Tradeteq.
“Pessimistic views can come from unsuccessful implementations which are often doomed from the start by the lack of data. We’re lucky in that as a B2B business, we work with financial institutions that at least understand that having data in one digital place is a necessary step. That can be more difficult for companies that work directly with consumers.”
For Robert Newry, CEO and co-founder of Arctic Shores, a company that creates personality profiles to help determine whether candidates are a good fit for an organisation, clients have viewed the psychometrics involved in matching a person to a role as a ‘black box’ since the Cambridge Analytica exposé.
He said: “What we’ve found in the world of HR is that clients don’t exactly understand what AI is, and often get AI and machine learning confused, as well as confusing AI with algorithms.
“This wasn’t helped by much of the sector, including companies that weren’t truly using AI, claiming that ‘everything is AI’ because it sounds cooler.
“Because of this confusion, what we’ve had to do is clarify where we use AI and why, and where we use statistical analysis and algorithms and why, and explain the differences and best practice between those areas.
“Where people get frightened is when they are presented with techniques which can’t be easily explained or justified.”
Peter Mildon, COO of Vivacity Labs, meanwhile, agreed that explainability eases the process of gaining trust, drawing on his experience of applying machine learning to cameras: “We don’t store video from the cameras, and we don’t use facial recognition of any kind. It’s a completely anonymous data feed for optimising traffic lights, and we’ve received varied reactions from different groups when explaining how we operate.
“When talking to a local authority, for example, they want to understand how traffic lights will respond to a specific scenario, and once we explain the system more in terms of an optimisation algorithm and those reactions are clearly demonstrated, they become quite comfortable.
“Then from a marketing perspective, referring to this as a ‘clever algorithm’ rather than AI tends to get you through the door, and allows conversations to happen more easily.”
Entering the discussion from the perspective of an AI startup focused on the legal sector, Thoughtriver CEO Tim Pullan said: “There is no issue with explaining the benefits of AI, and use of the technology to review contracts has proved popular.
“I have empathy with those who have been met with misconceptions from people who construct their own truths about what AI can do, and these can vary wildly.
“When we’re in that kind of situation, we tend to avoid the term AI, because it’s so unpredictable. We just focus on the benefits, and talk about what we’re doing, and why we’re doing it, with the aim of getting legal departments within large companies to safely delegate more of what they do to the business.”
Establishing ethical standards and awareness
As well as explaining how processes work, AI startups can gain trust from users by adhering to ethical standards, whether these are set by the company itself or its backers.
Ofri Ben-Porat, CEO of Edgify, described his first-hand experience of the difficulties of AI regulation, which occurred before Edgify was established: “We pivoted towards Edgify from a startup that analysed photo galleries, and we launched a product two days before Cambridge Analytica came to light.
“When it comes to the fear of data usage, I don’t think that GDPR has quelled concerns that Cambridge Analytica could repeat itself, but it did do the trick for us as companies to avoid worrying about standards.
“I believe that if you can withstand GDPR, you can maintain data standards, but when it comes to AI ethics, I’m not sure it’s up to the solution provider, and we need to think about how far we are willing to allow machines to make decisions across verticals.”
Mildon went on to provide insight from his experience of leading a company that takes an ethical stance against collecting personal data, stating: “I sometimes feel that we get held to a higher standard than the supplier who deliberately provides an ANPR camera and facial recognition.
“Nobody seems to take exception to police ANPR cameras being installed across a city, but an AI on lampposts that’s been put together in a privacy-by-design way seems to attract more attention.”
James Walker, CEO of Jamdoughnut, shared his experience in presenting ideas about algorithms to UK regulators from various sectors, commenting: “I asked if they were concerned about the possible danger that algorithms could end up giving the wrong advice to consumers, and the simple answer from all of those regulators was ‘No, we don’t care’.
“The reason they don’t care is that we have a set of regulations in place, and they’ll enforce punishments if these are not met.”
Ethical standards are also high on the stakeholder agenda within the healthcare space, and Panakeia AI CEO Pahini Pandya has seen debates over whether clinical data belongs to the patient or the hospital.
“There is an inherent struggle with perceptions of data ownership in healthcare,” said Pandya.
Nonetheless, there are strengths too: “In the context of data access, there have been long-standing ethical processes in place for gaining access to patient data in medical research. This is great when it comes to building data-driven technologies.”
Pandya went on to state that NHSX in the UK has been making prominent strides towards establishing ethical standards, raising awareness of AI and educating users on its benefits: “NHSX has a guide to building and getting AI right in terms of ethics, regulation and clinical standards.”