If you define AI as something that can emulate human decision-making, there’s a chance you’ll be disappointed when you find out how limited AI solutions for cyber security really are.
Speaking to Information Age ahead of his keynote speech at Custodian’s Talking Tech, April 25 2019, Etienne Greeff, CTO and founder of SecureData, admitted that he often rolls his eyes when he hears about AI solutions for cyber security.
He argued: “In cyber security and in application security, there’s actually no known application of AI. There’s no autonomous agent that automatically defines threats; that does not happen yet, and it’s not very close to happening.”
It appears some enterprises are challenging the hype too. Last year, the Financial Times published an article in which an engineer from a UK-based company claimed its Darktrace system regularly sent out false alerts that many IT staff simply ignored, even though the company was at the time spending around $10,000 a month to use it. The engineer, who didn’t want to be named, told the FT: “Half my team won’t look at it once during the day . . . I do think it’s very expensive, I’m not going to lie.”
But at the same time, according to Greeff, dismissing the potential of AI and its subset, machine learning (ML), in cyber security outright risks throwing the baby out with the bathwater. For him, enterprises simply need to manage their expectations.
“Is AI really, really stupid?” On the limitations of AI
“AI and ML are just tools, and it’s how you use the tools that matters,” said Greeff. “There’s certainly a role for ML and AI in cyber security; for example, they are very good at dealing with lots of information and trying to understand what is normal and what’s anomalous.”
For Greeff, ML can also be used to automate responses to common vulnerabilities and remove some of the heavy lifting around time-consuming protocols.
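To make that concrete, here is a minimal, purely illustrative sketch of the kind of anomaly detection Greeff describes, using scikit-learn’s IsolationForest on simulated login telemetry. The features, volumes and threshold are hypothetical and are not drawn from any vendor’s product.

```python
# Illustrative sketch only: a toy anomaly detector over simulated login telemetry.
# Feature choices and numbers are hypothetical, not any vendor's implementation.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated "normal" behaviour for one user: session size and login hour.
normal = np.column_stack([
    rng.normal(loc=50_000, scale=10_000, size=1_000),  # bytes per session
    rng.normal(loc=10, scale=2, size=1_000),           # hour of day, roughly office hours
])

# A few simulated outliers: very large transfers in the middle of the night.
suspicious = np.array([
    [5_000_000, 3],
    [2_500_000, 2],
    [4_000_000, 4],
])

# Learn what "normal" looks like, then score unseen events.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal)

print(model.predict(suspicious))  # -1 flags an anomaly; expect [-1 -1 -1]
print(model.predict(normal[:5]))  # inliers score 1 (an occasional -1 is a false positive)
```

The point is not the specific algorithm but the workflow: the model learns what “normal” looks like from historical data and flags deviations, which a human analyst still has to triage.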
While some AI/ML-based systems have already proved successful at complicated tasks, from playing chess to participating in debates, the crux of Greeff’s argument is that AI and ML should be used to augment security staff.
Avoiding the hype
But if organisations want to implement AI and ML in their cyber security strategy, how can they avoid falling into a hype-trap?
Information Age suggests that enterprises favour vendors whose analytics can accommodate a broad range of data sources, rather than a narrow set of inputs.
Beyond this, they need to get someone on board who actually understands AI and ML, or, at the very least, partner with someone who does.
Enterprises should always be cautious about bold claims. If you hear something like ‘we automatically detect unknown attacks’, chances are it’s nonsense.
Perhaps most importantly, before acquiring any new solutions, define the particular problem you have and then work out whether ML or AI is the right way to solve it. There may well be a better, more traditional way of solving the same problem.
Greeff added: “Often in cyber security, we hunt for the complicated solutions but in the end, solutions are often terrifyingly simple.
“Sometimes vendors just get in the way; often the money being spent on shiny new solutions is money not spent on getting the fundamentals right.”
Ultimately, organisations need to spend time shaping machine learning output with business context, which makes the results far more meaningful. That requires analysts to spend time on the system and feed it their domain knowledge and insights.
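As a rough illustration of what infusing business context can look like in practice, the hedged sketch below layers analyst-supplied suppression rules on top of raw anomaly scores. The hosts, rules and threshold are hypothetical examples, not a description of any specific product.

```python
# Illustrative sketch only: layering analyst-supplied business context on top
# of raw anomaly scores so known-benign behaviour stops generating alerts.
# Hosts, rules and the score threshold are hypothetical.
from dataclasses import dataclass

@dataclass
class Event:
    host: str
    description: str
    anomaly_score: float  # higher means more anomalous

# Analyst feedback captured as simple suppression rules (business context).
ALLOWLIST = {
    "backup-01": "nightly off-site backup, large transfers expected",
    "scanner-02": "scheduled vulnerability scanner, noisy by design",
}

ALERT_THRESHOLD = 0.8

def triage(events):
    """Return only events that are both anomalous and not already explained."""
    alerts = []
    for event in events:
        if event.host in ALLOWLIST:
            continue  # analysts have already explained this behaviour
        if event.anomaly_score >= ALERT_THRESHOLD:
            alerts.append(event)
    return alerts

events = [
    Event("backup-01", "4 GB outbound transfer at 02:00", 0.95),
    Event("hr-laptop-7", "4 GB outbound transfer at 02:00", 0.95),
    Event("web-01", "normal traffic pattern", 0.10),
]

for alert in triage(events):
    print(f"ALERT: {alert.host} - {alert.description}")
# Only hr-laptop-7 surfaces; the backup server's identical behaviour is
# suppressed because an analyst has supplied the business context.
```

The machine learning does the scoring, but it is the analyst-maintained context that turns a noisy feed of anomalies into a manageable list of alerts.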
Custodian’s Talking Tech, in association with Information Age, will be held on Thursday, April 25th 2019, between 8.15am and 12.00pm at Maidstone TV Studios, Kent. It’s free to attend and you can reserve your place here.