The growth in the use of artificial intelligence (AI) is impacting businesses in many ways, but one of the most dangerous could be as a result of exposing them to cyber threats. According to Gigamon’s 2024 Hybrid Cloud Security Survey, released in June 2024, 82 per cent of security and IT leaders around the world believe the global ransomware threat will grow as AI becomes more commonly used in cyberattacks.
AI making cyberattacks more sophisticated
One of the biggest risks comes from the use of AI to create much more convincing phishing and social engineering attacks. “Cybercriminals can use tools like ChatGPT to craft highly convincing emails and messages,” says Dan Shiebler, head of machine learning at Abnormal Security. “It’s now easier than ever for a threat actor to create perfectly written and even personalised email attacks, making them more likely to deceive recipients.”
AI is also creating entirely new ways to impersonate people. Four in 10 security leaders say they have seen an increase in deepfake-related attacks over the last 12 months, the Gigamon survey finds. “Deepfake technology holds real potential to manipulate employees into sharing personal details or even sending money through false video calls, recordings and phone calls,” says Mark Jow, EMEA technical evangelist at Gigamon.
In February 2024, a finance worker for engineering firm Arup was tricked into making a payment of $25.6 million after scammers impersonated the company’s chief financial officer (CFO) and several other staff members on a group live video chat. “The victim originally received a message purportedly from the UK-based CFO asking for the funds to be transferred,” says Chris Hawkins, security consultant at Prism Infosec.
“The request seemed out of the ordinary, so the worker went on a video call to clarify whether it was a legitimate request. Unknown to them, they were the only real person on the call. Everyone else was a real-time deepfake. The most difficult deepfakes to spot are audio followed by photos and then video, and for this reason it’s vishing attacks that are the main cause for concern in the industry at the present time.”
But AI is also being deployed by cybercriminals to identify opportunities and vulnerabilities to carry out distributed denial-of-service (DDoS) attacks. “It is being used both to better profile a target for selection of the initial attack vectors to be used, to ensure that they will have the highest impact, and to ‘tune’ an ongoing attack to overcome defences as they react,” says Darren Anstee, chief technology officer for security at NETSCOUT. “These capabilities mean that attacks can have a higher initial impact, with little or no warning, and can also change frequently to circumvent static defences.”
Mind your business – and its use of AI
Organisations are also potentially exposing themselves to cyber threats through their own use of AI. According to research by law firm Hogan Lovells, 56 per cent of compliance leaders and C-suite executives believe misuse of generative AI within their organisation is a top technology-associated risk that could impact their organisation over the next few years. Despite this, over three-quarters (78 per cent) of leaders say their organisation allows employees to use generative AI in their daily work.
One of the biggest threats here is so-called ‘shadow AI’, where criminals or other actors make use of, or manipulate, AI-based programmes to cause harm. “One of the key risks lies in the potential for adversaries to manipulate the underlying code and data used to develop these AI systems, leading to the production of incorrect, biased or even offensive outcomes,” says Isa Goksu, UK and Ireland chief technology officer at Globant. “A prime example of this is the danger of prompt injection attacks. Adversaries can carefully craft input prompts designed to bypass the model’s intended functionality and trigger the generation of harmful or undesirable content.”
Jow believes organisations need to wake up to the risk of such activities. “These services are often free, which appeals to employees using AI applications off the record, but they generally carry a higher level of security risk and are largely unregulated,” he says. “CISOs must ensure that their AI deployments are secure and that no proprietary, confidential or private information is being provided to any insecure AI solutions.
“But it is also critical to challenge the security of these tools at the code level,” he adds. “Is the AI solution provided by a trusted and reputable provider? Any solutions should be from a trusted nation state, or a corporation with a good history of data protection, privacy and compliance.” A clear AI usage policy is needed, he adds.
What can I do to reduce the threat?
There are other steps organisations can take to reduce the risk of being negatively impacted by AI-related cyber threats, although currently 40 per cent of chief information security officers have not yet altered their priorities as a result, according to research by ClubCISO.
Educating employees on the evolving threat is vital, says Hawkins, but he points out that in the Arup attack the person in question had raised concerns. “Employee vigilance is only one piece of the puzzle and should be used in conjunction with a resilient data recovery plan and thorough Defence in Depth, with large money transfers requiring the sign-off of several senior members of staff,” he says.
Ev Kontsevoy, CEO of cybersecurity startup Teleport, believes organisations need to overhaul their approach around both credentials and privileges. “By securing identities cryptographically based on physical world attributes that cannot be stolen, like biometric authentication, and enforcing access based on ephemeral privileges that are granted only for the period of time that work needs to be completed, companies can materially reduce the attack surface that threat actors are targeting with these strategies,” he suggests.
The bottom line is that organisations will need to draw on a variety of techniques to ensure they can keep up with the new threats that are emerging because of AI. “In the coming years, cybercriminals are expected to increasingly exploit AI, automating and scaling attacks with sophisticated, undetectable malware and AI-powered reconnaissance tools,” points out Goksu. “This could flood platforms with AI-generated content, deepfakes and misinformation, amplifying social engineering risks.
“Firms not keeping pace risk vulnerabilities in critical AI systems, potentially leading to costly failures, legal issues and reputational harm. Failure to invest in training, security and AI defences may expose them to devastating attacks and eroded customer trust.”
Read more
The importance of disaster recovery and backup in your cybersecurity strategy – A strong disaster recovery as-a-service (DRaaS) solution can prove the difference between success and failure when it comes to keeping data protected
Can NIS2 and DORA improve firms’ cybersecurity? Daniel Lattimer, Area VP at Semperis, explores NIS2 and DORA to see how they compare to more prescriptive compliance models
The changing role of the CISO – The cybersecurity head of any organisation has moved from being purely tech and reactive to someone forward-thinking and strategic. Lamont Orange looks at how to navigate the changing role of the CISO