Developed by a team led by Dr David Lopez, from the Initiative for Digital Economy Exeter (INDEX), the tool has been able to spot fake news regarding Covid-19 by detecting emotional undertones, such as fear and anger.
Named LOLA after the children’s TV series Charlie and Lola, the tool uses natural language processing and behavioural theory to analyse 25,000 texts per minute, and has been found to detect Islamophobia and other hateful online language with 98% accuracy.
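LOLA’s underlying model has not been published. As a rough illustration of the emotion-extraction step, the sketch below stands in an off-the-shelf Hugging Face emotion classifier (j-hartmann/emotion-english-distilroberta-base, an assumption for this sketch, not LOLA’s actual engine):

```python
# Minimal sketch of the emotion-extraction step described above.
# The classifier is a public stand-in, not LOLA's own model.
from transformers import pipeline

emotion_clf = pipeline(
    "text-classification",
    model="j-hartmann/emotion-english-distilroberta-base",
    top_k=None,  # return a score for every emotion label
)

tweets = [
    "5G towers are spreading the virus, wake up!",
    "Stay home, wash your hands, look after each other.",
]

for tweet, scores in zip(tweets, emotion_clf(tweets)):
    # scores is a list of {label, score} dicts (fear, anger, joy, ...)
    top = max(scores, key=lambda s: s["score"])
    print(f"{top['label']:>8} {top['score']:.2f}  {tweet}")
```

Batching a classifier like this on a GPU is what makes throughput on the order of thousands of texts per minute plausible.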
Additionally, LOLA ranks tweets on a severity scale, from ‘most likely to cause harm’ to ‘least likely’.
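The article does not explain how that severity scale is computed. The hypothetical sketch below, with invented weights, shows one way per-tweet emotion scores could be collapsed into a single harm score for ranking:

```python
# Hypothetical severity ranking: the weights are invented for this
# sketch and are not LOLA's real parameters.
HARM_WEIGHTS = {"anger": 0.4, "fear": 0.3, "disgust": 0.3}

def harm_score(emotions: dict[str, float]) -> float:
    """Weighted sum of the negative emotions a tweet expresses."""
    return sum(HARM_WEIGHTS.get(label, 0.0) * score
               for label, score in emotions.items())

scored = [
    ("Tweet A", {"anger": 0.91, "fear": 0.72}),
    ("Tweet B", {"joy": 0.88}),
]

# Most likely to cause harm first, least likely last.
for text, emotions in sorted(scored, key=lambda t: harm_score(t[1]),
                             reverse=True):
    print(f"{harm_score(emotions):.2f}  {text}")
```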
Recently used in an experiment to pinpoint cyberbullying directed at activist Greta Thunberg on Twitter, the tool could help bolster cyber security as social media platforms focus on eradicating online harms.
Confirmed collaborations with Google and the Spanish government could also aid progress in the battle against misinformation.
“In the online world the sheer volume of information makes it harder to police and enforce abusive behaviour,” said Dr Lopez.
“We believe solutions to address online harms will combine human agency with AI-powered technologies that would greatly expand the ability to monitor and police the digital world.
“Our solution relies on the combination of recent advances in natural language processing to train an engine capable of extracting a set of emotions from human conversations (tweets) and behavioural theory to infer online harms arising from these conversations.
“The ability to compute negative emotions, such as toxicity, insults, obscenity, threat and identity hatred, in near real-time at scale enables digital companies to profile online harm and act pre-emptively before it spreads and causes further damage.”
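Dr Lopez does not name the engine behind these scores. As an illustration only, the open-source Detoxify model outputs essentially the categories he lists (toxicity, obscenity, threat, insult, identity hatred) and stands in here:

```python
# Sketch of scoring the negative emotions listed above; Detoxify is a
# public stand-in, not LOLA's actual engine.
from detoxify import Detoxify

model = Detoxify("original")  # downloads a pretrained checkpoint

results = model.predict([
    "You people should all disappear.",
    "Lovely weather in Exeter today.",
])

# results maps each category (toxicity, obscene, threat, insult,
# identity attack, ...) to a list of per-tweet scores in [0, 1].
for category, scores in results.items():
    print(f"{category:>16}: " + "  ".join(f"{s:.2f}" for s in scores))
```

Running such a model over a stream of tweets and flagging anything above a threshold is the kind of near-real-time profiling the quote describes.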