We’re living in an age when online safety concerns extend well beyond DDoS attacks, phishing and ransomware.
In recent years, professional disruptors and even nation states have turned to spreading disinformation and fomenting a sense of “post-truth” to advance their agendas. When the tech community doesn’t intervene, the results are horrific, ranging from bullying and violence to massive public health risks and even coups d’état.
The internet has evolved from a mere “information superhighway” into a virtual town hall, with community discourse at the heart of our experiences. While social platforms are home to the lion’s share of this discussion, it’s taking place in an increasingly decentralised landscape, with dating apps, Telegram groups for every topic, Discord servers for esports communities, and even file sharing repositories all serving as hubs for vibrant conversations.
If your technology encompasses any type of community component, then this is no longer an issue that you can ignore, because to do so is to put your audience’s lives at risk – not to mention your brand reputation.
As an extension of smart, proactive moderation from humans, new AI and machine learning algorithms are revolutionising how platforms protect their “online integrity” and that of their customers. Noam Schwartz, CEO of ActiveFence, is an expert in this area, and his company aims to make the internet safer.
Schwartz was inspired to act after learning how online platforms struggled to keep themselves free of paedophilic content. “Online harms, if unaddressed, can lead to real-world violence, like the shocking events of January 6th that transpired after years of online disinformation and radicalisation,” he said.
Yet Schwartz appreciates that many challenges lie ahead, and he walked us through the key issues platforms need to counter effectively.
Evil comes in many forms
Safety would be easier to achieve if there were only one type of problematic behaviour online, but harmful activity spans many categories and crops up in places you don’t expect. It has also become harder for consumers to protect their privacy when so much of the software involved lies beyond the layperson’s understanding.
Over a decade ago, a firm linked to Cambridge Analytica abused online platforms to deceive people who placed too much trust in what they saw, swaying an election in Trinidad by encouraging citizens to abstain from voting and ultimately handing victory to the opposition party. It was made to look like a natural resistance movement, but it was engineered through corrupt practices.
Coronavirus disinformation has been a major online battleground in the last few years. It is hard to estimate how many lives may have been lost because people trusted unverified sources. The need for platforms to moderate user-generated content has never been more urgent.
Schwartz points to the importance of detecting issues early, saying, “If harmful online activity is left unchecked, its reach can grow rapidly and fester, exposing countless users to violent, extremist, or misleading content.” On the subject of COVID-19 disinformation, he adds, “Stemming the rise of harmful online narratives before they gain traction is critical to protecting users who might be prone to such content.”
Rapid response, then, is the name of the game. “To detect problems early, ActiveFence’s specialised human intelligence operatives use the company’s contextual AI systems to identify harmful online chatter,” explained Schwartz. “We are able to issue early warnings to our partners so they can take the necessary steps to secure their users and products.”
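To make that workflow concrete, here is a minimal sketch, in Python, of the kind of early-warning step such a pipeline could include: score incoming posts against a risk model and flag anything above a threshold for human review. It is illustrative only, not ActiveFence’s actual system; the phrases, weights and threshold are invented for the example.

```python
# Hypothetical illustration of an early-warning step: score incoming posts and
# flag high-risk items for human review. Not ActiveFence's system; the phrases,
# weights and threshold below are invented for the example.
from dataclasses import dataclass


@dataclass
class Post:
    post_id: str
    text: str
    author_id: str


# Assumed phrase weights; a production system would use trained models instead.
RISK_TERMS = {"join the raid": 0.9, "cure covid": 0.7, "hoax": 0.4}
ALERT_THRESHOLD = 0.8


def risk_score(post: Post) -> float:
    """Crude additive score based on flagged phrases found in the post text."""
    text = post.text.lower()
    return min(1.0, sum(w for phrase, w in RISK_TERMS.items() if phrase in text))


def triage(posts: list[Post]) -> list[Post]:
    """Return the posts that exceed the alert threshold, for analyst review."""
    return [p for p in posts if risk_score(p) >= ALERT_THRESHOLD]


if __name__ == "__main__":
    sample = [
        Post("1", "Miracle cure covid remedy, vaccines are a hoax", "u42"),
        Post("2", "Lovely weather in Lisbon today", "u7"),
    ]
    for flagged in triage(sample):
        print(f"ALERT: post {flagged.post_id} by {flagged.author_id} needs review")
```

In practice the scoring would come from learned models and human intelligence rather than a keyword list, but the shape of the loop – score, threshold, escalate to people – is the point of the sketch.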
Keeping things clean at scale
At an individual level, it can feel intimidating just to keep all your passwords secure; for companies, the attack surface is far larger. Filtering through the mass of inbound content is a mammoth task that needs help from artificial intelligence.
Contextual AI is a major weapon in the arsenal of those trying to defend the integrity of online platforms.
“Contextual AI processes and interprets information in the same intuitive way as we do,” said Schwartz. “It assesses content by taking into consideration information about the author, the environment where the material is posted, and its intended target audience.”
He added, “We have a saying here: ‘It’s context, not just content that counts.’ Contextual AI enables us to swiftly process and sort through vast amounts of data, and it allows our human teams to focus their efforts on high-risk content.”
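As a rough picture of what “context, not just content” can mean in code, here is a hedged, minimal sketch of a contextual scorer that judges the same text differently depending on the author’s history, the venue it was posted in, and the likely audience. The signals and weights are assumptions made up for the example, not ActiveFence’s implementation.

```python
# Minimal, hypothetical sketch of contextual scoring: the same text is judged
# together with signals about its author, venue and audience. Illustrative
# weights only; a real system would learn these from data.
from dataclasses import dataclass


@dataclass
class ContentItem:
    text: str
    author_prior_violations: int   # signal about the author
    venue: str                     # signal about where the material is posted
    audience_minors: bool          # signal about the intended audience


CONTENT_FLAGS = ("join the raid", "detailed instructions", "untraceable")
HIGH_RISK_VENUES = {"extremist_forum", "raid_coordination_channel"}


def contextual_risk(item: ContentItem) -> float:
    """Combine a content score with context boosts into a 0-1 risk value."""
    content_score = 0.3 * sum(flag in item.text.lower() for flag in CONTENT_FLAGS)
    context_boost = 0.0
    if item.author_prior_violations > 0:
        context_boost += 0.2
    if item.venue in HIGH_RISK_VENUES:
        context_boost += 0.3
    if item.audience_minors:
        context_boost += 0.2
    return min(1.0, content_score + context_boost)


# The same phrase scores low in a benign context and high in a risky one.
benign = ContentItem("detailed instructions for sourdough", 0, "cooking_forum", False)
risky = ContentItem("detailed instructions, keep it untraceable", 3, "extremist_forum", False)
print(contextual_risk(benign), contextual_risk(risky))  # roughly 0.3 vs 1.0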
The contextual approach echoes Facebook’s well-publicised policy of filtering out “coordinated inauthentic behaviour.” While posted content may be problematic in itself, it’s by identifying concerted, organised efforts to distribute that content that tech companies can most effectively stop malicious actors.
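One simple way to surface that kind of coordination is to look for many distinct accounts posting near-identical messages within a short window. The sketch below assumes a naive normalised-text match and fixed thresholds; it is not any platform’s real detection logic, just an illustration of judging distribution patterns rather than single posts.

```python
# Hypothetical sketch of flagging coordinated behaviour: many distinct accounts
# posting near-identical text inside a short time window. Thresholds invented.
from collections import defaultdict
from datetime import datetime, timedelta
import re

WINDOW = timedelta(hours=1)
MIN_ACCOUNTS = 5  # how many distinct accounts must repeat the message


def normalise(text: str) -> str:
    """Lower-case and strip punctuation so trivially varied copies match."""
    return re.sub(r"[^a-z0-9 ]", "", text.lower()).strip()


def find_coordination(posts: list[tuple[str, str, datetime]]) -> list[str]:
    """posts = (account_id, text, timestamp). Return suspicious message texts."""
    buckets: dict[str, list[tuple[str, datetime]]] = defaultdict(list)
    for account, text, ts in posts:
        buckets[normalise(text)].append((account, ts))

    suspicious = []
    for text, entries in buckets.items():
        entries.sort(key=lambda e: e[1])
        accounts = {a for a, _ in entries}
        span = entries[-1][1] - entries[0][1]
        if len(accounts) >= MIN_ACCOUNTS and span <= WINDOW:
            suspicious.append(text)
    return suspicious


if __name__ == "__main__":
    now = datetime(2024, 1, 1, 12, 0)
    posts = [(f"acct_{i}", "Election is cancelled, stay home!",
              now + timedelta(minutes=i)) for i in range(6)]
    posts.append(("acct_x", "Nice sunset tonight", now))
    print(find_coordination(posts))  # ['election is cancelled stay home']
```

A production system would use fuzzier similarity measures and richer account signals, but the underlying idea is the same: the tell-tale sign is the campaign, not the individual post.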
The language problem
With many tech platforms focusing on the Anglosphere, it’s easier for disinformation in languages other than English to slip through the cracks.
Many smaller platform companies – and even larger ones operating aggressively in smaller markets – struggle to dedicate the resources necessary to monitor non-English content effectively. Doing it well requires hiring moderation teams in many languages, which is unfeasible for many companies, especially where local governments are content to let online integrity violations slide.
This is where Schwartz aims to build a bridge: ActiveFence covers over 80 languages, which helps ensure its AI systems have the best possible inputs. He described his thinking as a “deeply nuanced approach to localised content moderation, making sure to hire talented local experts with a firm grasp of the target country’s cultural overtones, local tropes, and idiomatic expressions.”
Even the most sophisticated algorithms can only detect what they’ve been trained to notice, and the splintering of regional lingo makes human expertise essential. “These experts offer guidance to our core intelligence specialists and help train our systems to best secure users across the globe,” said Schwartz.
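A simplified way to picture the engineering side of this – hypothetical structure, not ActiveFence’s architecture – is a routing layer that detects a post’s language and hands it to a language-specific model and reviewer queue, falling back to a generalist queue where no local coverage exists.

```python
# Hypothetical sketch of language-aware routing for moderation. The language
# detector here is a stub; real systems use trained identifiers (for example
# libraries such as fastText or CLD3).
from typing import Callable, Optional


def detect_language(text: str) -> str:
    """Stub detector: returns 'el' if Greek letters appear, otherwise 'en'."""
    if any(ch in "αβγδεζηθ" for ch in text):
        return "el"
    return "en"


# Per-language classifier callables; only two are registered in this sketch.
CLASSIFIERS: dict[str, Callable[[str], float]] = {
    "en": lambda text: 0.9 if "scam" in text.lower() else 0.1,
    "el": lambda text: 0.1,  # placeholder model tuned by local experts
}
FALLBACK_QUEUE = "generalist_review"


def route(text: str) -> tuple[str, Optional[float]]:
    """Return (queue, score): a language-specific queue if covered, else a fallback."""
    lang = detect_language(text)
    classifier = CLASSIFIERS.get(lang)
    if classifier is None:
        return FALLBACK_QUEUE, None  # no local model: escalate to human experts
    return f"{lang}_review", classifier(text)


print(route("This is a scam giveaway"))  # ('en_review', 0.9)
print(route("δοκιμή"))                   # ('el_review', 0.1)
```

The design choice the sketch highlights is the one Schwartz describes: local expertise feeds the per-language models, and anything the system cannot confidently place still ends up in front of a person.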
Everything is interconnected
Malicious agents, meanwhile, are becoming increasingly sophisticated in their strategies and tactics. Siloed teams that focus on one issue at a time can fail to grasp the full scope of a given problem.
Schwartz believes that by “recognising cyber security as one component of a host of trust and safety measures, our clients are empowered to make decisions that best ensure a safe user experience.”
Attackers will use the full range of tools at their disposal to achieve their aims, and defenders need to be ready for anything. This is why it’s so important to look beyond cyber security alone to all manner of other defences.
“While most cyber security companies protect people’s machines and servers,” Schwartz said, “we also protect people from types of harm that do not involve code-based attacks.”
AI-based systems for safety at scale
The threats to online security are only going to become more complex. When the potential rewards for those violating online integrity are so high, we can be sure many are plotting against the protections we have today. AI-driven solutions like ActiveFence’s will be crucial in giving both companies and individuals the peace of mind of knowing they are safe online.