The A-level and GCSE exams fiasco exposes the significant problems in using algorithms to make complicated, life-changing decisions. As we rely more on AI and automation, “smart” systems can create as many problems as they solve. AI is not all bad, nor will recent problems pause its growth; rather, organisations continue to hit the accelerator on AI technologies, drawn by the promise of rapid delivery of value.
So, does a fast percolation of AI technologies throughout business and public sector bodies mean we can dispose of the human talent in their analytics teams? No. Experience shows people can comfortably integrate new technologies into their jobs and lives; AI and automation are no exception. AI, through appropriate algorithm usage, does have the potential to help us make better decisions in the workplace.
Embracing AI technology requires organisations to understand, from the top down, how their data and algorithms work. Human oversight is needed to avoid mistakes and ensure decisions are ethical. The A-level debacle was caused not so much by the algorithm itself as by a lack of oversight and a failure to consider the biases in the data it was fed.
Being accountable for training your AI responsibly
Many organisations are implementing AI technology without thinking about the repercussions. They must design, develop, and integrate AI with a huge amount of responsibility and care to ensure everyone can benefit from these advances.
AI is susceptible to inherent and explicit bias, gaps in logic and general algorithmic complexity. Bias sneaks into algorithms in many ways, including through human error. Referring again to the A-level issues, consider how small class sizes were handled. Fewer than 15 pupils? The teacher’s recommendations were given greater weight. Fewer than five? Use the teacher’s recommended grade. Statistically, that seems sensible, because of the small numbers. But nobody thought to consider which types of schools have small class sizes: private schools. A statistician’s choice missed the human aspect of the data.
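To make the point concrete, here is a minimal sketch of how such a threshold rule might look in code. The thresholds, weighting and grade scale are hypothetical, loosely based on the published description of the 2020 standardisation approach rather than the actual Ofqual model; the bias sits not in any single branch, but in which schools end up in which branch.

```python
GRADES = ["U", "E", "D", "C", "B", "A", "A*"]  # ordinal grade scale, for illustration only

def blend(teacher_grade, model_grade, teacher_weight):
    """Weighted combination of the two grades on the ordinal scale."""
    t, m = GRADES.index(teacher_grade), GRADES.index(model_grade)
    return GRADES[round(teacher_weight * t + (1 - teacher_weight) * m)]

def award_grade(class_size, teacher_grade, model_grade):
    """Illustrative grading rule: smaller cohorts lean harder on teacher judgement."""
    if class_size < 5:
        return teacher_grade  # tiny cohort: the teacher's recommended grade is used directly
    if class_size < 15:
        return blend(teacher_grade, model_grade, teacher_weight=0.7)  # teacher view weighted up
    return model_grade  # large cohort: the statistical (historical) model dominates

# Who benefits depends on class size, and small classes cluster in private schools:
print(award_grade(4, "A", "B"))   # -> "A"  (teacher grade wins)
print(award_grade(30, "A", "B"))  # -> "B"  (model grade wins)
```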
Data governance: building AI integrity with data
Data professionals can play a pivotal role in making AI ethical and therefore of value. Let’s cut through the thick hype that wraps around AI and remember that garbage inputs will create garbage outputs.
Rigorous data governance is foundational for successful AI implementations. Quite simply, if you don’t have good oversight of the data your organisation uses to train AI, or the data you hand an AI to analyse, you always risk inaccurate outputs and decisions. Another serious example: an AI that depixelated photos of faces often turned black people’s faces into white faces. The algorithm itself was probably not at fault; it was basing its decisions on its training data, which contained more white faces than black faces.
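A small, practical governance step is to audit the balance of a training set before any model sees it. The sketch below assumes a hypothetical CSV manifest with a self-reported demographic column; the file name, column and threshold are placeholders, not part of any particular product’s pipeline.

```python
import csv
from collections import Counter

def audit_balance(manifest_path, column="skin_tone_group", warn_below=0.10):
    """Print each group's share of the training manifest and flag any group
    whose share falls below `warn_below`. The path, column name and threshold
    are hypothetical placeholders for your own governance process."""
    with open(manifest_path, newline="") as f:
        counts = Counter(row[column] for row in csv.DictReader(f))

    total = sum(counts.values())
    for group, n in counts.most_common():
        share = n / total
        flag = "  <-- under-represented" if share < warn_below else ""
        print(f"{group:>20}: {n:>7} ({share:.1%}){flag}")

# Usage (hypothetical manifest file):
# audit_balance("face_training_manifest.csv")
```

A check like this would not fix the depixelation model, but it would have surfaced the imbalance in its training data before the results did.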
Your lens on data governance also needs to scrutinise the rigour applied by the providers of the algorithms being offered to work with your data. If you are relying on algorithms, and therefore AIs, trained externally on external datasets, do those providers document their own data governance? What standards do they adhere to? If the answers are negative or unclear, how can you trust your business to their algorithms’ results? Don’t simply accept being handed AI in a closed box; demand to see inside its workings and challenge the quality of the data governance used to create the AI on offer.
Regulating AI is another route. Industries will need to create standardisation around development and certifications for ethical AI usage. Gartner already predicts a self-regulating association for oversight of AI and machine learning designers will be established in at least four of the G7 countries by 2023.
Business leaders and key decision makers have a responsibility to encourage progress on research and standards that will reduce bias in AI, in turn hopefully creating a more transparent and ethical technology for human collaboration.