In a recent report, the UK Government recognised that the “advances in robotics and artificial intelligence (AI) hold the potential to fundamentally re-shape the way we live and work.”
New skills will be required to meet both the challenges and the opportunities of AI, as Parliament’s recent Science and Technology Committee report concluded.
As AI becomes increasingly prevalent, the industry will also need to respond to the social and ethical dilemmas the technology poses and be able to provide transparency.
This will become particularly relevant with the European Union’s new General Data Protection Regulation (GDPR), which comes into effect in May 2018 and will give individuals a “right to explanation” for automated decisions made about them.
The current reality is that automated decisions are already being made about individuals all the time, and very often by technology that cannot provide any reasoning for them.
The problem is that many current applications of automated decision-making are unable to explain how a decision has been reached.
Going forward, businesses using this technology will need to identify where and how those decisions are being made.
Adopting the right technologies to manage this process will give them a structured view of those decisions and bring transparency to the underlying data analysis.
Defining automated decisions
Ultimately, the benefit of automated decision-making is scalability and consistency.
Humans are fallible: people are very good at certain tasks and not so good at others. Automation is often the answer because it can enable organisations to bring all of their staff up to the same standard.
To truly grasp the need for transparency in the future of AI-powered decision-making, it is important to understand what defines automated decisions and automated decision-making.
In fact, finding a definition is not easy, because of the cross-over of terminology involved in AI.
However, a good definition would cover any kind of decision that would previously have been made by a human and is now made by a machine.
The importance of transparency
In addition to transparency, accountability, auditing and liability should also be prerequisite considerations for automated decision-making.
Transparency is particularly important because it supports business accountability and audit, as well as helping to establish, or better still limit, an organisation’s liability.
In essence, it is about risk management: if automated decisions were being made across all aspects of our lives, every one of those decisions would need to be auditable.
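To make this concrete, here is a minimal sketch in Python of what an auditable automated decision could look like. Everything in it is hypothetical: the loan rules, thresholds, field names and the decide_loan function are invented for illustration and do not represent any particular product or the GDPR’s required format. The point is simply that the decision and its reasons are produced and recorded together.

    # A hypothetical, illustrative example: every automated decision
    # is returned together with the rules that produced it, and logged
    # so that it can be audited later.
    import json
    from datetime import datetime, timezone

    AUDIT_LOG = []  # in a real system this would be durable storage

    def decide_loan(applicant):
        """Return a decision plus the reasoning behind it."""
        reasons = []
        if applicant["credit_score"] < 600:    # invented threshold
            reasons.append("credit score below 600")
        if applicant["debt_to_income"] > 0.4:  # invented threshold
            reasons.append("debt-to-income ratio above 40%")

        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "applicant_id": applicant["id"],
            "decision": "declined" if reasons else "approved",
            "reasons": reasons or ["all checks passed"],
        }
        AUDIT_LOG.append(record)  # the rationale outlives the decision
        return record

    print(json.dumps(decide_loan(
        {"id": "A-123", "credit_score": 580, "debt_to_income": 0.5}
    ), indent=2))

Because the rationale travels with the decision, a reviewer or regulator can later see exactly why a given individual was declined, which is the essence of what a “right to explanation” demands.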
Automated decision-making is being used in all sorts of areas – from financial services to healthcare – but in most current uses, you still need a human expert to verify the decision that the machine has made.
A modern example is AI being used in cancer treatment, making recommendations based on a patient’s medical records.
It is also worth noting that McKinsey predicts knowledge work automation will be worth between $5 trillion and $7 trillion to the global economy over the next 10 years.
Part of this will undoubtedly come from new technology that enables AI to make decisions, and the need for transparency will see AI-powered cognitive reasoning shape that future.
Regardless of how the market takes shape, it will be important to understand the ramifications of AI, automated decisions and the GDPR.
Furthermore, the need for increased transparency will have a major impact on existing technologies that cannot provide reasons for their decisions, causing potential issues for organisations that have already embedded such technologies into their fabric.
Why AI needs transparency
The areas where transparency is most obviously needed are those with implications for risk and liability, and those where an ombudsman oversees regulatory compliance.
AI that can support this need for transparency will expand the possible use cases and deliver better accountability and efficiency for organisations.
However, let’s not forget that transparency is also about trust.
Trust in AI is a big problem, and the only way to create trust is to make the technology transparent in everything it is doing – and that includes how it is used to make automated decisions.
Ultimately, the technology itself should be there to support, and not replace, people.
The problem is that people tend to ignore the technology and do things the way they think they should be done.
Yet if they knew why the machine had chosen to do something in a particular way, they would be more likely to embrace it as an invaluable tool rather than see it as a threat or an irrelevance.
Maintaining the human in the loop
It has been established why the industry is moving towards transparency: it is absolutely what is needed for AI to be welcomed with open arms.
However, as with any change, people’s fears and apprehensions are likely to be an obstacle.
With AI there has been a fear that employees will lose their jobs, but there is evidence that smart applications will free staff to move into more intellectually stimulating roles that will always require the human touch.
The thing to remember with automated decision-making is that a human is there at the start of the system’s design, and so is in the loop from the very beginning.
Starting with human knowledge means the human will always be needed in the loop, and is therefore essential to ensuring that the right decisions are made transparently and can be accurately justified.
Sourced by Ben Taylor, CEO of Rainbird