Large language models (LLMs) powering generative artificial intelligence (AI) capabilities have seen substantial investment in recent times, capturing the imagination of business and tech departments across the organisation. However, cybersecurity teams have had to take the rising use of LLMs by threat actors into account when optimising their strategies, as well as staying wary of internal dangers.
Here, we explore the risks that LLMs can bring to business security, the ways in which cyber attackers are using this technology, and how security teams can effectively keep AI-powered attacks at bay.
Dangers to consider
Research from Cybsafe, exploring how staff behaviour is changing in the wake of generative AI, has found that employees are sharing with AI tools the kind of sensitive company information they know not to divulge to friends in social settings outside the workplace. Over half (52 per cent) of UK office workers have entered work information into a generative AI tool, with 38 per cent admitting to sharing data they wouldn’t casually reveal to a friend in a pub. Revealing sensitive information to large language models can help threat actors gain access to company systems and breach cybersecurity measures.
“The application of LLMs presents two major risks,” said Andrew Whaley, UK senior technical director at Promon. “One: generative AI has the potential to transform a specialised and costly skill into something accessible to anyone through automated ‘bots’.
“Secondly, and more alarmingly, there is the risk that these models could grasp the currently prevalent static code obfuscation techniques widely used in the industry. This understanding may lead to the development of a tool that removes obfuscation from protected applications, exposing their structure and rendering them susceptible to manipulation.
“To address this threat, it is vital to develop groundworks for innovative ‘dynamic’ obfuscations, which involve the protection of code mutation at runtime. This dynamic nature makes it impossible to comprehend the code in a static context. Implementing such dynamic obfuscation techniques is necessary to counteract the potential risks in cybersecurity associated with the misuse of generative AI.”
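Promon has not published its implementation, but a toy sketch can make the idea of runtime code mutation concrete. In the example below, the protected routine exists only as ciphertext at rest, is decrypted and compiled at call time with a key generated fresh for each run, and so cannot be read meaningfully from a static copy of the file. Real dynamic obfuscators go much further, mutating control flow and data layout in memory, but the principle is the same: the code cannot be understood in a static context.

```python
# Toy illustration of "dynamic" obfuscation: the protected logic is never
# stored as readable code; it is decrypted and compiled only at call time,
# with a fresh key generated on every run. A conceptual sketch, not a
# production code-protection scheme.
import os

def _xor(data: bytes, key: bytes) -> bytes:
    """Simple XOR stream; stands in for a real cipher."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# The sensitive routine, as source text. In a real protector this would be
# encrypted at build time; here we encrypt it at import time for brevity.
_PLAINTEXT = b"def check_licence(token):\n    return token.endswith('-VALID')\n"
_KEY = os.urandom(16)                 # per-run key: ciphertext differs every execution
_CIPHERTEXT = _xor(_PLAINTEXT, _KEY)
del _PLAINTEXT                        # only the ciphertext remains resident

def call_protected(token: str) -> bool:
    """Decrypt, compile and run the protected routine on demand."""
    namespace = {}
    exec(compile(_xor(_CIPHERTEXT, _KEY), "<protected>", "exec"), namespace)
    return namespace["check_licence"](token)

if __name__ == "__main__":
    print(call_protected("ABC-VALID"))    # True
    print(call_protected("ABC-EXPIRED"))  # False
```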
Analysing website security
While capable of delivering many business advantages, LLMs like ChatGPT can be used maliciously in the wrong hands. One possibility discovered by security researchers is using them to reveal whether a website’s software contains vulnerabilities.
Etay Maor, senior director of security strategy at Cato Networks, recalls: “I once vetted the source code of a website that I was on, and I asked ChatGPT, ‘is it vulnerable to anything?’. And it said no.
“Then, I went to a website that I know is vulnerable. I copied the source code and again asked ChatGPT whether this was vulnerable. Not only did the chatbot say yes, but it also revealed the vulnerability that can be exploited, and what you can do to breach it. So unfortunately, it can help in that sense, as well.”
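The same capability can be pointed at code you own. Below is a minimal sketch of such a self-audit using the OpenAI Python client; the model name and prompt are assumptions, and any answer should be treated as a lead to verify rather than a verdict.

```python
# Minimal sketch of the technique used defensively: submit a page's own
# source to an LLM and ask for likely vulnerabilities. Model name and
# prompt are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def review_page_source(source_code: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; substitute whatever is available
        messages=[
            {"role": "system",
             "content": "You are a web security reviewer. List likely "
                        "vulnerabilities in the supplied page source and how to "
                        "remediate them. Say 'none found' if appropriate."},
            {"role": "user", "content": source_code},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    with open("page.html", encoding="utf-8") as f:
        print(review_page_source(f.read()))
```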
Keeping humans in the loop to monitor the data that large language models are trained on is, and will remain, crucial to the long-term safe use of this technology.
Constructing phishing emails
Phishing campaigns have seen technological disruption from generative AI in recent times, with threat actors using LLMs to craft more convincing attempts at entry into company networks over email, social media and other digital communication channels. Darktrace recently warned of emerging phishing scams that used ChatGPT to aid delivery.
“LLMs can be very good at writing phishing emails and writing in different languages,” said Maor. “This was a little bit tougher for threat actors in the past, because if you wanted to launch an international phishing campaign, you had to buy services from another criminal who would do the translation services and other capabilities for you. Now, cyber criminals get to do this on the fly.”
Bias and misinformation
Biases can inadvertently make their way into the training of AI models, leaving outputs rife with oversights. LLMs learn from interactions, which, while intuitive and potentially conducive to creativity, can also lead to the generation of misinformation.
“The problem is not with technology, but with its usage and, specifically, with the data it’s trained on,” said Igor Baikalov, chief scientist at Semperis.
“Microsoft learned this lesson seven years ago, when it had to shut down its AI chatbot after less than 24 hours of interaction with – and learning from – the most active community on the Internet: extremists.
“OpenAI either didn’t get the memo – that public data from that vast swamp of misguided information called Internet is not safe to retransmit – or decided that any publicity is a good publicity and went ahead anyway. Censoring the output of such a prolific verbal generator is extremely inefficient and introduces another level of bias.
“The solution is to vet the data used for training, but it’s extremely hard considering the volume and breadth of topics. Crowdsourcing the process runs into the MS chatbot problem – now one has to vet the moderators.”
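As a rough illustration of what a vetting pass involves, and why it is hard at scale, the sketch below filters a corpus with a keyword blocklist, deduplication and a pluggable classifier hook; the classifier itself, which is the genuinely difficult part Baikalov points to, is left as an assumed external component.

```python
# Minimal sketch of a training-data vetting pass. The rules are deliberately
# simplistic; real vetting at LLM scale is far harder, as the quote notes.
from typing import Callable, Iterable, Iterator

BLOCKLIST = {"blocked_term_1", "blocked_term_2"}  # placeholder terms

def vet_corpus(
    records: Iterable[str],
    is_unsafe: Callable[[str], bool],  # hypothetical classifier hook (the hard part)
    min_chars: int = 50,
) -> Iterator[str]:
    seen = set()
    for text in records:
        if len(text) < min_chars:
            continue                                        # drop fragments
        if any(term in text.lower() for term in BLOCKLIST):
            continue                                        # crude keyword filter
        if is_unsafe(text):
            continue                                        # defer to the classifier
        fingerprint = hash(text.strip().lower())
        if fingerprint in seen:
            continue                                        # drop exact duplicates
        seen.add(fingerprint)
        yield text
```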
Benefits of LLMs for security teams
On the other hand, usage of large language models can bring an array of operational benefits to cybersecurity staff. This depends greatly, however, on careful management of model input.
“AI large language models (LLMs), machine learning (ML), and other deep learning techniques and behaviour-based approaches are today being applied to automate and scale the proactive operations to identify, detect, protect, and respond to cyber threats,” said Dr. Mesh Bolutiwi, director of cyber GRC at CyberCX UK.
“For example, security systems and solutions leveraging AI-based LLMs can analyse immense quantities of data, such as logs (e.g., network, system, application, and security logs, etc.), documents, user behaviour data, and systems activities, to identify threats, anomalies, and patterns indicative of cyber attacks.
“The ability of LLM-based solutions to support efforts to discover, assess, validate, remediate, monitor, and implement corresponding end-to-end risk response solutions to address current and future cyber threats holds great promise, potentially leading to more effective intrusion detection and prevention solutions for proactively identifying and blocking threats and malicious activities.”
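A hedged sketch of what such log analysis might look like in practice follows: log lines are batched and sent to an LLM that returns suspected anomalies as structured JSON for analyst review. The model name, prompt and schema are assumptions, and this would complement, not replace, deterministic SIEM rules.

```python
# Sketch of LLM-assisted log triage: batch log lines and ask a model to flag
# entries that look anomalous, returning JSON a SOC analyst can review.
import json
from openai import OpenAI

client = OpenAI()

def triage_logs(log_lines: list[str], batch_size: int = 50) -> list[dict]:
    findings = []
    for i in range(0, len(log_lines), batch_size):
        batch = "\n".join(log_lines[i:i + batch_size])
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model name
            response_format={"type": "json_object"},
            messages=[
                {"role": "system",
                 "content": "Review these security log lines. Reply with JSON: "
                            '{"anomalies": [{"line": "...", "reason": "..."}]}'},
                {"role": "user", "content": batch},
            ],
        )
        findings.extend(json.loads(response.choices[0].message.content)["anomalies"])
    return findings
```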
Keeping it niche
Baikalov believes that LLMs can drive long-term value in niche cybersecurity applications and use cases, provided they are trained on carefully curated data limited to specific areas.
He added: “LLMs need a large amount of data to cover the topic in depth, therefore products from the tech giants – like MS Security Copilot or Google PaLM2 – are likely to lead the way and provide pre-trained models for smaller developers to customise and incorporate into their applications.
“But LLMs are perhaps best used at the human-machine interface, with the real work being done using structured analytical models tuned to the customer’s environment.”
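That division of labour can be sketched as follows: the LLM only parses the analyst’s question into structured parameters, while a conventional, locally tuned filter does the detection work. The model name and parameter schema here are assumptions.

```python
# Sketch of the "LLM at the human-machine interface" pattern: the model
# translates a question into parameters; a deterministic analytic does the work.
import json
from openai import OpenAI

client = OpenAI()

def question_to_query(question: str) -> dict:
    """Use the LLM purely as a parser, not as the detection engine."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name
        response_format={"type": "json_object"},
        messages=[
            {"role": "system",
             "content": "Convert the analyst question into JSON: "
                        '{"user": str|null, "hours_back": int, "event_type": str}'},
            {"role": "user", "content": question},
        ],
    )
    return json.loads(response.choices[0].message.content)

def run_detection(params: dict, events: list[dict]) -> list[dict]:
    """The 'real work': a deterministic filter tuned to the customer's environment."""
    return [e for e in events
            if e["type"] == params["event_type"]
            and (params["user"] is None or e["user"] == params["user"])]
```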
Detecting Internet threats
As mentioned earlier, tools like ChatGPT can be used by threat actors to find and exploit website vulnerabilities. On the flip side, though, large language models are also being explored as a means of finding and fixing security flaws across the Internet.
“Whilst it’s common to see machine learning in products involved in alerting or analysis, such as SIEMs and EDR, the large language model explosion is still quite recent and something that the industry is starting to explore,” said Tom McVey, senior solution architect at Menlo Security.
“AI will be able to be used in a multitude of ways to detect and mitigate threats; some that we haven’t even conceived yet as it’s still early days. To detect malicious websites, a product that verifies whether any page was human or AI will be very powerful. Without this, the internet may become a bit like the Wild West – similar to its early days.
“Using AI to homologate and structure it again will help us to defend against the types of threats that those leveraging language models may generate.”
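One rough heuristic for the ‘human or AI’ question is to score a page’s text with a small language model and treat unusually low perplexity as a signal of machine generation. The sketch below uses GPT-2 via the transformers library; the threshold is an arbitrary assumption and the signal is weak and easily evaded, so it illustrates the idea rather than a product.

```python
# Perplexity-based heuristic for spotting machine-generated page text.
# AI output tends to be lower-perplexity (less surprising) to a language model.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    return float(torch.exp(out.loss))

def looks_machine_generated(text: str, threshold: float = 30.0) -> bool:
    # The threshold is an illustrative assumption, not a calibrated value.
    return perplexity(text) < threshold
```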
Battling AI-powered attacks
When it comes to keeping cyber attacks aided by artificial intelligence at bay, a proactive threat intelligence system is key. This can be achieved through a combination of diverse network behaviour data and the context applied when processing that data.
Aron Brand, CTO of CTERA, commented: “During our recent launch of Ransom Protect at CTERA, I witnessed firsthand the immense potential of AI in combatting the ever-evolving tactics of ransomware. The success in this venture, from my perspective as someone deeply involved in its development, hinged on two main pillars.
“Firstly, the integrity and depth of our data proved essential. It’s not just about accumulating vast amounts of data but ensuring its diversity and richness. By exposing our machine learning algorithms to a wide spectrum of real-world attack behaviours and contrasting them with standard user activities, we were able to shape a solution adept at identifying both known and emerging threats.
“Secondly, the way this data was processed and represented was crucial. Feature-engineering, the process of transforming raw data into meaningful attributes, became the core of our approach. It was essential for our AI to not merely see data but to grasp the very nature of an attack, to understand the subtle differences between malicious and benign activities.”
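To make the feature-engineering point concrete, the sketch below turns a window of raw file-system events into numeric attributes a classifier could consume; the features are generic illustrations, not CTERA’s actual feature set. A vector like this, computed per short time window, is what actually reaches the model, and whether it separates mass-encryption behaviour from normal editing matters more than the choice of classifier.

```python
# Illustrative feature engineering for ransomware detection: convert a window
# of raw file-system events into numeric attributes for a classifier.
import math
from collections import Counter
from dataclasses import dataclass

@dataclass
class FileEvent:
    op: str            # "read", "write", "rename", "delete"
    path: str
    payload: bytes = b""

def _entropy(data: bytes) -> float:
    """Shannon entropy in bits/byte; encrypted output is close to 8.0."""
    if not data:
        return 0.0
    counts = Counter(data)
    return -sum((c / len(data)) * math.log2(c / len(data)) for c in counts.values())

def featurise(window: list[FileEvent]) -> dict[str, float]:
    writes = [e for e in window if e.op == "write"]
    renames = [e for e in window if e.op == "rename"]
    return {
        "events_per_window": float(len(window)),
        "rename_ratio": len(renames) / max(len(window), 1),
        "mean_write_entropy": (
            sum(_entropy(e.payload) for e in writes) / max(len(writes), 1)
        ),
        "distinct_extensions_written": float(len({e.path.rsplit(".", 1)[-1] for e in writes})),
    }
```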