Since OpenAI’s breakthrough in large language models (LLMs), threats to organisations have skyrocketed. That’s because whenever we make significant technological progress, bad actors benefit too: cybercriminals now use LLMs and generative AI to execute hyper-localised attack playbooks at scale and to obscure the traditional ‘tells’ that alert employees and organisations to an attack.
We’re in a metaphorical arms race with threat actors to make the best use of AI. So, who is winning?
How AI boosts cybercriminal activity at scale
Historically, phishing emails carried obvious clues to their malicious intent. Spelling mistakes, grammatical errors and language gaps raised red flags for most readers. Now, however, AI is helping attackers craft well-written, more convincing messages that closely mimic local vernacular, internal corporate lingo, and a professional tone. This isn’t your grandma’s spam inbox; the old tells are no longer reliable.
The same is true for scams that use voice and video. AI makes impersonation far easier via ‘deepfakes’ – synthetic video, audio, or imagery convincing enough to slip past existing cybersecurity defences and enable crimes like financial fraud. In one case, a voice model trained on a CEO’s publicly available recordings was used to initiate wire fraud worth nearly a quarter of a million dollars.
And it doesn’t stop there: AI can help automate attacks, identify vulnerabilities, create more sophisticated malware, analyse vast amounts of sensitive data, and reverse engineer security tools. With AI’s assistance, the technical barrier to entry for cybercrime has dropped considerably. It is little surprise, then, that HackerOne’s 2024 Hacker-Powered Security Report found that 48 per cent of respondents believe AI poses the biggest security risk to their company.
How teams can use AI to combat threats
The good news is that security teams can harness AI to see off these threats, making themselves faster, smarter, and better able to spot vulnerabilities at scale. One group already doing so is security researchers, whose role is to actively search for vulnerabilities within software and systems and responsibly disclose them to companies before cybercriminals can find them. Using AI as a copilot – as with so-called hackbots – helps researchers accelerate this work, enabling them to conduct more security testing and reach further and deeper into the attack surface.
Indeed, HackerOne’s report suggests that 38 per cent of security researchers are using AI, 20 per cent see it as essential, and 33 per cent use it to summarise information and write reports. AI also makes individual workflows faster and more comprehensive – for example, by rapidly expanding the wordlists used when brute-force testing systems, as sketched below.
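As a rough illustration of that wordlist use case, the sketch below asks an LLM to expand a handful of seed passwords during an authorised penetration test. It assumes the OpenAI Python SDK; the model name, seed words, and prompt are illustrative assumptions, not tooling the report describes.

```python
# Hedged sketch: LLM-assisted wordlist expansion for an AUTHORISED
# brute-force test. Model name and prompt are assumptions for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

seed_words = ["acme2024", "welcome1", "summer"]

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; substitute whatever you have access to
    messages=[{
        "role": "user",
        "content": (
            "For an authorised security test, list 20 plausible variations "
            "of these seed passwords (case changes, years, common suffixes), "
            "one per line: " + ", ".join(seed_words)
        ),
    }],
)

# Split the model's response into one candidate password per line
expanded_wordlist = response.choices[0].message.content.splitlines()
print(expanded_wordlist)
```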
For security teams, AI also enables advanced behavioural analytics to flag potential attacks for faster incident response; automates threat detection in real-time; spots phishing attempts; identifies vulnerabilities; and can process large volumes of threat intelligence data to identify emerging threats and attack patterns.
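To make the behavioural-analytics idea concrete, here is a minimal sketch using scikit-learn’s IsolationForest to flag an anomalous login session against a small baseline. The features, data, and contamination setting are invented for illustration; real pipelines draw on far richer telemetry.

```python
# Minimal sketch of behavioural anomaly detection on login telemetry.
# Features and data are illustrative assumptions, not a production design.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [hour_of_day, failed_logins_last_hour, megabytes_downloaded]
baseline_sessions = np.array([
    [9, 0, 12], [10, 1, 8], [14, 0, 20], [11, 0, 15], [16, 1, 10],
    [9, 0, 9], [13, 0, 18], [15, 2, 11], [10, 0, 14], [12, 1, 16],
])

# Fit the detector on normal behaviour
detector = IsolationForest(contamination=0.1, random_state=42)
detector.fit(baseline_sessions)

# A 3 a.m. session with many failed logins and a large download
suspect_session = np.array([[3, 25, 900]])
if detector.predict(suspect_session)[0] == -1:  # -1 means anomalous
    print("Flag session for incident response")
```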
AI can also automate routine tasks, such as reading and summarising source code. It plays a significant role in collaborative security, too. Vulnerabilities usually demand detailed technical guidance and clear instructions for remediation, and AI can translate complicated industry jargon into clear, actionable steps, ensuring teams work together more effectively. All of these accelerated processes add up to more time security teams can spend on strategically important work.
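A hypothetical sketch of that translation step, again assuming the OpenAI Python SDK: an LLM rewrites a raw vulnerability finding as plain-language remediation steps. The finding text, prompt, and model are invented examples, not a specific HackerOne workflow.

```python
# Hypothetical sketch: turning a jargon-heavy finding into actionable steps.
# The finding, prompt, and model are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

raw_finding = (
    "Reflected XSS in the /search?q= parameter; the payload executes "
    "because output is not escaped in the results template."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model
    messages=[{
        "role": "user",
        "content": (
            "Rewrite this vulnerability finding as three numbered, "
            "jargon-free remediation steps an engineering team can act on:\n"
            + raw_finding
        ),
    }],
)

print(response.choices[0].message.content)
```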
Who’s winning?
In such a close-run race, it can be hard to tell who is in front: the cybercriminals or the security professionals. It’s fair to say that, for the time being, security teams hold the upper hand. However, bad actors are not far behind. By exploring new ways AI can support security teams, organisations can ensure their defences remain ahead of the game.
Security researchers already understand how cybercriminals think, and with AI support, they can help organisations ensure they stay two steps ahead of the bad guys.
Michiel Prins is co-founder of HackerOne.