Alert fatigue has long been a major issue for security analysts tasked with manning the Security Operations Center (SOC).
False positives account for a fifth of all alerts in the SOC, according to a recent report, and they can seriously deplete resources, harming the efficiency and morale of the security team. The result – alert fatigue – leads to frustration, miscommunication and burnout, and genuine incidents may be missed or deprioritised. Consequently, solving the false positive problem has become the holy grail for transforming security operations.
Automated solutions such as Security Orchestration, Automation and Response (SOAR) were touted as the answer but have largely failed to deliver. These technologies struggle to fuse the signals from disparate security tools that indicate an attack is underway and to automate investigations efficiently. Breaches continue to occur, with 70 per cent of medium-sized businesses and 74 per cent of large businesses reporting attacks over the course of the past year, according to the UK government’s Cyber Security Breaches Survey 2024.
Detection failures
At the same time, we’re seeing threats that don’t trigger alerts at all. Modern attacks can propagate through an organisation by abusing built-in features of the operating system. Living off the Land Binaries and Scripts (LOLBAS) attacks introduce no new code, relying instead on legitimate tools already present on the system, and can pivot within the network without any external tooling, which makes them notoriously difficult to detect.
Moreover, because SOC teams work linearly, ticket by ticket, analysts rarely see events in their wider context. Even if they do catch something, there is little chance of connecting the incident to an earlier one, so the risk of missing the signs of an escalating attack is considerable.
To address these issues, we need a new approach to detection in the SOC, one that takes the pressure off the analyst and treats alerts not as a consecutive stream but as part of a wider pattern of events. The human brain would struggle to do that, but AI now allows us to tackle the problem in an entirely new way.
AI, but not as you know it
Make no mistake, we are not talking about the application of AI in the usual sense when it comes to threat detection. Until now, AI in the SOC has largely meant Large Language Models (LLMs) used to do little more than summarise findings for incident response reports. Instead, we are referring to AI in its truer and broader sense – machine learning, agents, graphs, hypergraphs and other approaches – and these promise to make detection both more precise and intelligible.
Hypergraphs give us the power to connect hundreds of observations together to form likely chains of events. This is achieved by scoring both individual observations and chains of observations, with scores determined by heuristics such as how often a given detection is seen from a specific workstation. Disparate detections that are related will often share an observable, such as a user, a transaction id, or cyber threat intelligence pointing to the same malware group. A hypergraph can collate these shared observables, combining the information, presenting it visually, and analysing and correlating it. Mapping the detections to the MITRE ATT&CK framework then gauges how far the attacker has progressed along the kill chain.
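To make the idea concrete, here is a minimal Python sketch of the technique described above: detections that share an observable are grouped into hyperedges, unioned into a candidate chain, and scored, with a bonus for how far the chain has progressed through a simplified list of ATT&CK tactics. The field names, heuristic scores and example data are entirely hypothetical, included only for illustration, not any vendor's schema or implementation.

```python
from collections import defaultdict
from dataclasses import dataclass, field

# Hypothetical detection record: host, tactic, observables and base_score
# are illustrative names, not a real product schema.
@dataclass
class Detection:
    det_id: str
    host: str
    attack_tactic: str                              # MITRE ATT&CK tactic, e.g. "execution"
    observables: set = field(default_factory=set)   # users, transaction ids, intel tags
    base_score: float = 1.0                         # heuristic weight, e.g. rarity on this host

# Simplified ordering of ATT&CK tactics used to gauge kill-chain progression.
TACTIC_ORDER = ["initial-access", "execution", "persistence",
                "privilege-escalation", "lateral-movement", "exfiltration"]

def build_hyperedges(detections):
    """Group detections into hyperedges: one edge per shared observable."""
    edges = defaultdict(set)
    for det in detections:
        for obs in det.observables:
            edges[obs].add(det.det_id)
    # Keep only observables that actually connect two or more detections.
    return {obs: ids for obs, ids in edges.items() if len(ids) > 1}

def chain_score(chain, index):
    """Score a chain: sum of heuristic scores plus a bonus for kill-chain progression."""
    dets = [index[i] for i in chain]
    progression = max(TACTIC_ORDER.index(d.attack_tactic)
                      for d in dets if d.attack_tactic in TACTIC_ORDER)
    return sum(d.base_score for d in dets) + progression

if __name__ == "__main__":
    dets = [
        Detection("d1", "ws-17", "initial-access", {"user:alice", "intel:FIN7"}, 0.8),
        Detection("d2", "ws-17", "execution", {"user:alice", "txn:42"}, 1.2),
        Detection("d3", "srv-03", "lateral-movement", {"txn:42", "intel:FIN7"}, 2.0),
    ]
    index = {d.det_id: d for d in dets}
    edges = build_hyperedges(dets)
    # Union every detection touched by a shared observable into one candidate chain.
    chain = sorted(set().union(*edges.values()))
    print("hyperedges:", edges)
    print("candidate chain:", chain, "score:", chain_score(chain, index))
```

In practice the scoring heuristics and the observables used to form hyperedges would be far richer, but the principle is the same: shared observables bind otherwise disparate detections into a single, scoreable chain.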
Augmenting the analyst
The end result is that the security analyst is no longer perpetually caught in firefighting mode. Rather than responding to hundreds of alerts a day, the analyst can use hypergraphs and AI to detect and string together long chains of alerts that share commonalities and, in so doing, gain a complete picture of the threat. Realistically, adopting such an approach can be expected to cut alert volumes by up to 90 per cent.
But it doesn’t end there. By applying machine learning to the chains of events, it becomes possible to prioritise response, identifying which threats require immediate triage. Generative AI can then offer the analyst a number of courses of action to remediate the threat, ensuring incident response is consistent and proportionate.
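As a rough illustration of that prioritisation step, the sketch below trains a simple classifier on summaries of past chains and ranks new chains by triage urgency. The features (chain length, hosts involved, kill-chain depth) and the training labels are invented for the example; a real deployment would derive them from historical analyst verdicts and far richer telemetry.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative only: each chain of correlated detections is summarised by three
# hypothetical features – number of detections, number of hosts involved, and
# how far along the kill chain the furthest detection sits (0–5).
# Labels (1 = needed immediate triage) would come from past analyst verdicts.
X_train = np.array([
    [2, 1, 1], [3, 1, 2], [8, 4, 5], [5, 3, 4],
    [1, 1, 0], [6, 2, 3], [2, 2, 1], [9, 5, 5],
])
y_train = np.array([0, 0, 1, 1, 0, 1, 0, 1])

model = LogisticRegression()
model.fit(X_train, y_train)

# Score today's chains and surface the most urgent ones first.
new_chains = np.array([[4, 2, 3], [2, 1, 1], [7, 3, 5]])
urgency = model.predict_proba(new_chains)[:, 1]
for features, p in sorted(zip(new_chains.tolist(), urgency),
                          key=lambda t: t[1], reverse=True):
    print(f"chain features={features} -> triage priority {p:.2f}")
```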
Using generative AI in combination with hypergraph analysis will significantly reduce false positive rates and combat sophisticated threats by augmenting the analyst. But it’s a brave step that requires the CISO to recognise firstly that their SOC is not working and secondly that AI is not limited to LLMs. If CISOs can look beyond the marketing hype and apply a little imagination, they’ll realise that AI can solve many of the security problems they’ve been grappling with over the past few years; they’ve simply got to ask the questions that were previously unanswerable.
Kennet Harpsøe is the lead security engineer at Logpoint.