Last summer saw a grim landmark in the history of artificial intelligence: the first death caused by an autonomous vehicle. A Tesla Model S, operating on Autopilot, drove into the side of a tractor-trailer crossing a Florida highway, killing the human “driver”.
According to the company’s blog, the cause of this tragic accident was the AI’s inability to pick out “the white side of the tractor trailer against a brightly-lit sky”. As Tesla points out, this was the first fatality in more than 130m miles driven on Autopilot; among human drivers worldwide, there is a death roughly every 60m miles, so Autopilot’s fatality rate is less than half the human average.
If Tesla and other autonomous car manufacturers can keep this rate down, it will certainly represent a major safety improvement. Yet the nature of this accident – one which would almost certainly have been avoided by a human driver – raises serious questions about how much agency humans should surrender to machines.
As technology becomes more intelligent, it will inevitably be required to make decisions that result in people’s deaths. Vehicles such as autonomous cars and trucks will have to be programmed to make moral choices, instantly resolving versions of the “trolley problem” that have confounded ethicists for decades.
Not every question about AI and automation is quite a matter of life and death, but there are important practical and ethical questions to answer before we can be sure that these technologies will augment rather than diminish our lives.
This is a fact increasingly recognised by businesses themselves. Research from Infosys found that over half (53%) of respondents feel that ethical questions are preventing AI from being as effective as it could be, while only just over a third (36%) say they have fully considered the ethical implications of these new technologies.
These concerns range from the effects of these technologies on employee and customer privacy, to the impact on employment (for example, through making large swathes of the human workforce redundant). There are also questions – starkly highlighted in the Tesla story above – about how far artificial intelligence can ever replace human operatives.
It’s as easy to be seduced by the promised power of technology as it is to romanticise the unique capabilities of the human brain. Unless businesses can reconcile these two very different intelligences, they are unlikely to unlock the full benefits that can be achieved by getting humans and technology to complement each other.
One of the key challenges facing humanity is how much autonomy we decide to give to machines, and in which situations artificial intelligence is better applied than the human mind.
At first glance, machines’ ability to crunch through enormous data sets and to harness natural language processing for realistic customer interactions makes them the perfect replacement for slow, expensive and error-prone humans.
But that is to ignore technology’s own innate fallibilities, from Microsoft’s Tay chatbot, which quickly learned how to be racist, to the self-driving car that could not distinguish between the sky and a 20-ton piece of fast-moving metal.
The fact remains that machines can still only provide us with insight, and that only people have the wisdom to apply this insight appropriately for any given business or personal context. If people’s lives were always governed by rational, empirical considerations, then there might be an argument for machines replacing humans wherever possible.
As we all know, however, this is simply not the case. At Infosys, we run into these questions every day. In helping a large number of enterprises with machine learning initiatives, we rapidly came to appreciate the importance of contextualisation.
In most cases, crunching numbers simply isn’t enough; to be effective, the analysis must be accompanied by a thorough understanding of the company’s culture and business model, its unique challenges, and its data and reporting structures.
Humans will continue to have a vital role to play in asking the right questions; interrogating, interpreting and applying the data; and drawing up the right hypotheses for the machines.
Technology evangelists often neglect to mention that a machine learning model’s accuracy is typically only 60-70% on the first pass; getting beyond that requires fine-tuning and further human work.
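As a rough sketch of that first-pass-then-tune workflow (illustrative only: it uses scikit-learn on a synthetic dataset, and the model, parameter grid and any resulting accuracy figures are assumptions for the example, not drawn from Infosys projects):

```python
# A minimal sketch of the "first pass, then tune" workflow described above,
# using scikit-learn on a synthetic dataset. Accuracy figures will vary.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import GridSearchCV, train_test_split

# Synthetic stand-in for a real business dataset.
X, y = make_classification(n_samples=2000, n_features=20,
                           n_informative=5, flip_y=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# First pass: default hyperparameters, no human judgment applied yet.
baseline = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print("first-pass accuracy:",
      accuracy_score(y_test, baseline.predict(X_test)))

# Second pass: a human-chosen search over hyperparameters.
grid = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [100, 300], "max_depth": [5, 10, None]},
    cv=5)
grid.fit(X_train, y_train)
print("tuned accuracy:",
      accuracy_score(y_test, grid.predict(X_test)))
```

Even here the improvement comes from a person deciding what to search over and judging whether the resulting model actually fits the business problem – the contextualisation argued for above.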
So far, this sounds like merely an argument for hiring more data scientists, rather than a paean to the ability of human workers. But it shouldn’t take a great leap of the imagination to look at the limitations of technology mentioned above, and see how these apply to other areas of an organisation.
These range from the very top of the tree to the most junior worker. Ask any successful business owner and they will tell you about the importance of taking calculated risks based, to a certain degree, on intuition; they will talk about the importance of lateral thinking to solve complex business problems; they may stress how their best ideas and flashes of inspiration are sparked from completely unexpected patterns of thinking.
For an HR worker, data will give important insight into an individual’s performance and productivity; it likely won’t explain why that person is performing poorly, or suggest the most empathetic and effective way to discuss how they can improve.
Similarly, a customer service operative will benefit from access to all the data needed to investigate a complaint, but it will still take training, intuition and humanity to resolve the matter in a way that leaves the customer contented.
The truth is that data is merely a tool that people can use to improve the way they do their job, whether it is setting up a hedge fund or dealing with a dissatisfied customer.
Businesses cannot afford to ignore the riches represented in their human capital: those fallible creatures with their unique ability to dare and to dream, to intuit and take risks and, above all, to recognise when they are headed towards catastrophe.
Sourced by Jonathan Ebsworth, head of Disruptive Technologies at Infosys Consulting