Artificial intelligence (AI) and its subset, machine learning, are being hailed by experts as a means of fighting cyber-attacks. Currently, the technology can flag anomalies for a security analyst to investigate, saving time and cutting overall business costs.
AI and machine learning are developing quickly, and experts suggest they could be applied to several specific use cases within cyber security. Indeed, it’s hoped that in the future, intelligent systems built on these technologies will be able to accurately detect and remediate attacks in real time.
But despite its potential, the technology also faces challenges. For example, the time it saves must be weighed against the risk of false positives. At the same time, adversaries are starting to use AI to attack businesses and even cover their tracks.
At the moment, AI – or more specifically, machine learning – is mostly used for anomaly detection, says Etienne Greeff, CTO and co-founder of SecureData. He says the most useful systems are those that solve “specific problems”.
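To make that concrete, the sketch below shows one common pattern for machine learning-based anomaly detection, using scikit-learn’s IsolationForest to flag unusual network flows for an analyst. The flow features, values and contamination setting are illustrative assumptions, not a description of any vendor’s product.

```python
# A minimal anomaly-detection sketch using scikit-learn's IsolationForest.
# The flow features (bytes sent, duration, destination port) and all values
# are hypothetical; real systems engineer features from live telemetry.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated "normal" flows: [bytes_sent, duration_s, dst_port]
normal = rng.normal(loc=[5000, 2.0, 443], scale=[1500, 0.5, 0], size=(1000, 3))

# A couple of suspicious flows: huge transfers to an unusual port
suspicious = np.array([[900000, 120.0, 4444],
                       [750000, 95.0, 4444]])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal)

# predict() returns -1 for anomalies and 1 for inliers
flows = np.vstack([normal[:5], suspicious])
for flow, label in zip(flows, model.predict(flows)):
    if label == -1:
        print(f"Anomaly flagged for analyst review: {flow}")
```

The key point is the last step: the model does not act on its own, it simply queues the oddities for a human to investigate.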
Identifying personally identifiable information (PII) within huge amounts of data is another good use for AI in cyber security, says Joan Pepin, CISO and VP of operations at Auth0. This is especially important in regions such as Europe, where the EU’s General Data Protection Regulation (GDPR) is “mandating new levels of data governance”, she says.
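A very crude version of that PII discovery can be sketched with pattern matching alone. The patterns below are illustrative assumptions; production systems typically layer trained entity-recognition models on top of rules like these.

```python
# A minimal rules-based PII scanner: a rough sketch of the discovery
# problem, not a substitute for a trained entity-recognition model.
import re

PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    # Illustrative UK National Insurance number shape, e.g. QQ123456C
    "ni_number": re.compile(r"\b[A-Z]{2}\d{6}[A-D]\b"),
    # Illustrative 16-digit card-number shape
    "card_number": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),
}

def scan_record(record: str) -> dict:
    """Return any PII categories found in a free-text record."""
    return {name: pattern.findall(record)
            for name, pattern in PII_PATTERNS.items()
            if pattern.search(record)}

print(scan_record("Contact jane.doe@example.com, NI QQ123456C"))
# -> {'email': ['jane.doe@example.com'], 'ni_number': ['QQ123456C']}
```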
Among the benefits, the technology allows firms to come to conclusions swiftly, says Bridget Kenyon, global CISO at Thales eSecurity: “AI systems can be set to solve difficult problems when humans don’t know the answer, but they know what the answer should look like.”
Learning from breach data
If deployed correctly, AI can collect intelligence about new threats, attempted attacks and successful breaches – and learn from it all, says Dan Panesar, VP EMEA, Certes Networks. “AI technology has the ability to pick up abnormalities within an organisation’s network and flag them more quickly than a member of the cyber security or IT team could,” he says.
Indeed, current iterations of machine learning have proven to be more effective at finding correlations in large data sets than human analysts, says Sam Curry, chief security officer at Cybereason. “This gives companies an improved ability to block malicious behaviour and reduce the dwell time of active intrusions.”
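As a toy illustration of that correlation-finding, the sketch below scans a hypothetical authentication log and surfaces source addresses that fail logins against many distinct accounts, a credential-stuffing signature a machine can pull out of millions of events far faster than a human reader. The log format and threshold are assumptions.

```python
# Toy correlation across a large event log: group failed logins by source
# IP and flag sources failing against many distinct accounts. The log
# format and the threshold are illustrative assumptions.
from collections import defaultdict

events = [
    ("10.0.0.5", "alice", "FAIL"),
    ("10.0.0.5", "bob", "FAIL"),
    ("10.0.0.5", "carol", "FAIL"),
    ("192.168.1.9", "dave", "OK"),
    # ...millions more events in a real data set
]

failed_accounts = defaultdict(set)
for src_ip, account, outcome in events:
    if outcome == "FAIL":
        failed_accounts[src_ip].add(account)

THRESHOLD = 3  # distinct accounts; tuned to the environment in practice
for src_ip, accounts in failed_accounts.items():
    if len(accounts) >= THRESHOLD:
        print(f"{src_ip} failed logins on {len(accounts)} accounts - investigate")
```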
It is true that AI increases efficiency, but the technology isn’t intended to completely replace human security analysts. “It’s not to say we are replacing people – we are augmenting them,” says Neill Hart, head of productivity and programs at CSI.
However, AI and machine learning have a dark side: the technology is also being harnessed by criminals. It would be short-sighted to think that the technological advancements offered by AI will provide a complete barrier against the fallout of cyber-attacks, says Helen Davenport, director, Gowling WLG.
“AI techniques may necessitate the use of centralised servers collating large amounts of user data together – making those repositories potentially a ‘one-stop-shop’ for hackers looking to steal multiple sets of information,” she warns.
“Traditionally, if you wanted to break into a business, it was a manual and labour-intensive process,” says Max Heinemeyer, director of threat hunting at Darktrace. “But AI enables the bad guys to perpetrate advanced cyber-attacks, en masse, at the click of a button. We have seen the first stages of this over the last year: advanced malware that adapts its behaviour to remain undetected.”
AI could be configured by hackers to learn firms’ specific defences and tools – and this could pave the way for larger and more successful data breaches, adds Panesar. “Viruses could be created to host this type of AI, producing more malware that can bypass even advanced security implementations.”
AI accuracy
At the same time, there are concerns about the accuracy of AI and machine learning: if the technology gets something wrong – or if people misinterpret a valid security alert – it can actually decrease business efficiency. False positives can be incredibly damaging to a security team, says Simon Whitburn, SVP cyber security services at Nominet. “Hackers, hostile nations, and wannabes are constantly trying to overwhelm cyber defences and false positives can distract security teams from these threats and increase complacency.”
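Simple base-rate arithmetic shows why: when genuine attacks are rare, even a highly accurate detector produces far more false alarms than real alerts. The figures below are purely illustrative.

```python
# Illustrative base-rate arithmetic: even a 99%-accurate detector drowns
# analysts in false alarms when true attacks are rare. All numbers are
# hypothetical.
events_per_day = 1_000_000
attack_rate = 0.0001          # 1 in 10,000 events is genuinely malicious
true_positive_rate = 0.99     # detector catches 99% of real attacks
false_positive_rate = 0.01    # and mislabels 1% of benign events

attacks = events_per_day * attack_rate
true_alerts = attacks * true_positive_rate
false_alerts = (events_per_day - attacks) * false_positive_rate

print(f"Real attacks caught per day: {true_alerts:,.0f}")    # 99
print(f"False alarms per day:        {false_alerts:,.0f}")   # 9,999
print(f"Alerts that are real:        "
      f"{true_alerts / (true_alerts + false_alerts):.1%}")   # ~1.0%
```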
He cites the example of US company Target. “When they were hit with a huge data breach in 2013 – exposing 40 million credit and debit card details to hackers as well as the personal information of another 70 million customers – people saw them as the victim. But the truth was more complex.
“After an investigation, it was revealed that they had been alerted to the early signs of a data breach by their security systems – but they determined it did not warrant immediate action and they, in essence, waved the hackers through the door.”
Meanwhile, the data set informing the AI needs to be of optimum quality. Curry warns: “Ultimately, any artificial intelligence and machine learning implementation is only as good as the training data set used. This creates detection and data bias before it is ever deployed onto a live system.”
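That bias is easy to demonstrate with a deliberately skewed toy training set: a classifier that never sees a given attack type during training will wave that attack through in production. All data below is fabricated for illustration.

```python
# Toy demonstration of training-data bias: a detector trained on a set
# that contains no examples of one attack type is blind to it. Features
# and labels here are fabricated for illustration.
from sklearn.tree import DecisionTreeClassifier

# Features: [requests_per_min, payload_size_kb]
# Training data covers benign traffic and volumetric attacks only.
X_train = [[10, 2], [12, 3], [8, 1],        # benign
           [5000, 2], [8000, 3]]            # volumetric attacks
y_train = ["benign", "benign", "benign", "attack", "attack"]

clf = DecisionTreeClassifier().fit(X_train, y_train)

# A low-and-slow exfiltration attempt: small request rate, huge payloads.
# The model has never seen this pattern labelled as an attack.
print(clf.predict([[9, 500]]))  # likely ['benign'] - the blind spot
```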
So it’s clear the technology has multiple uses, but it also needs to be approached with caution. Taking this into account, technology leaders must have a comprehensive plan before the first deployment, says Mark Vargo, CTO at CSI. “We have seen people buy part solutions because a vendor came in and had a convincing argument about part of the problem. It’s much better to look at the whole process rather than one thing at a time.”
In addition, Greeff advises: “Ask the vendor – what problems does your solution solve? If they can’t answer this by, for example, saying ‘this solution allows you to look for anomalous traffic’, then you need to be quite careful.”
As cyber-attacks grow in scale and sophistication, AI and machine learning may go some way towards helping businesses keep pace with cyber criminals. However, Kenyon warns there could eventually be “an arms race”, with one computer protecting the environment and another attacking it.
“It could mean the network is changing rapidly and no one knows what’s going on. You need some kind of human mediation or a predetermined limit that you don’t allow the automated response to transcend.”
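In its simplest form, a predetermined limit of the kind Kenyon describes could be a tiered response policy: low-impact actions run automatically, while anything more disruptive waits for human approval. The action tiers and names below are hypothetical.

```python
# A minimal guardrail sketch for automated response, along the lines of
# the "predetermined limit" Kenyon suggests. The action tiers and the
# approval mechanism are hypothetical.
AUTO_ALLOWED = {"alert_analyst", "quarantine_file", "block_single_ip"}
HUMAN_REQUIRED = {"isolate_host", "disable_account", "block_subnet"}

def respond(action: str, target: str) -> None:
    if action in AUTO_ALLOWED:
        print(f"AUTO: executing {action} on {target}")
    elif action in HUMAN_REQUIRED:
        # Beyond the predetermined limit: queue for analyst approval
        print(f"HOLD: {action} on {target} awaiting human approval")
    else:
        raise ValueError(f"Unknown response action: {action}")

respond("block_single_ip", "203.0.113.7")   # runs automatically
respond("isolate_host", "finance-db-01")    # requires a human decision
```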