Instances of reported cybercrime are growing astronomically – and yet many successful attacks are still not reported, or even detected.
In response to the escalating threat, detection capabilities are constantly being refined, improved and almost fully re-imagined.
As new threats arise, so do technologies that offer controls against them. Automating the process without compromising the accuracy or effectiveness of those controls helps to augment the role of the human in security operations.
The automation wave is the progression of technology and machine learning into intelligent software that can act to both identify and remediate incidents, leaving security professionals to tackle more complex and relevant issues.
The role of a security professional is made more arduous by the abundance of malware, botnets and distributed denial-of-service (DDoS) products that are sold on the underground market and which empower organised crime syndicates.
>See also: Beyond chatbots: how AI will help fight cybercrime in the IoT
The evolving difficulties associated with identifying and managing insider threat, device policy and management, and the uncertainties surrounding the increasingly connected IoT, further complicate the role of a security professional and the task of protecting the business internally and externally.
Add to this conundrum the people and skill-power required, and organisations frequently fall short of the required number to adequately combat the full spectrum of threats and fail to successfully recruit because of the ever-increasing skills gap.
The landscape is challenging and security measures must evolve. As motivated criminals refine their methods, so too must the security team protecting their new hybrid networks and the data associated with their critical assets.
Traditional rule-based systems were effective against less sophisticated attacks of the past, but in today’s digital world these traditional controls can only help to overcome challenges to a limited degree.
As the name suggests, rule-based systems make binary allow-or-deny decisions and usually rely on a static set of rules.
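The static nature of such systems can be sketched in a few lines. This is a hypothetical illustration, not any specific product: each rule is a fixed predicate over an event, and the rule set only changes when a human edits it.

```python
# A minimal sketch of a static, rule-based detector (all rules are hypothetical).
# Each rule is a named predicate over an event dict; matching any rule flags the event.
RULES = [
    ("blocked_port", lambda e: e.get("dest_port") in {23, 445}),
    ("oversized_payload", lambda e: e.get("bytes", 0) > 10_000_000),
    ("denylisted_ip", lambda e: e.get("src_ip") in {"203.0.113.7"}),
]

def evaluate(event):
    """Return the names of all rules the event violates (empty list = allow)."""
    return [name for name, predicate in RULES if predicate(event)]

print(evaluate({"src_ip": "203.0.113.7", "dest_port": 443, "bytes": 512}))
```

The weakness is visible in the structure itself: an attacker who avoids every predicate passes cleanly, and the rule set cannot adapt without manual intervention.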
An initiative from the Defense Advanced Research Projects Agency (DARPA), the Cyber Grand Challenge, seeks to automate this process, fielding a generation of machines running algorithms that can discover, prove and fix software flaws in real time without human intervention.
The success and impact of the challenge will further demonstrate that the speed of autonomous systems will, in the very near future, erode the advantages currently available to a motivated criminal.
One measure taken to mitigate risk in a situation where a company faces a multitude of simultaneous attacks is risk scoring. Essentially, a composite score is given to each threat based on a series of contextual factors, which correlate with priority areas for the organisation.
Fundamentally, risk scoring enables organisations to determine which incidents to address first to minimise the impact on the business.
With increasing numbers of threats both identified and as yet undiscovered, risk scoring will continue to provide guidance and reassurance to businesses.
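The composite-score idea described above can be sketched as a weighted sum. The factor names and weights below are hypothetical stand-ins for whatever contextual factors an organisation actually prioritises:

```python
# A minimal sketch of composite risk scoring (weights and factors are hypothetical).
# Weights reflect organisational priorities; each factor is rated on a 0-10 scale.
WEIGHTS = {"asset_criticality": 0.5, "exploitability": 0.3, "exposure": 0.2}

def risk_score(factors):
    """Weighted sum of contextual factor ratings."""
    return sum(WEIGHTS[k] * factors.get(k, 0) for k in WEIGHTS)

threats = {
    "phishing-campaign": {"asset_criticality": 4, "exploitability": 8, "exposure": 6},
    "ddos-on-payments":  {"asset_criticality": 9, "exploitability": 6, "exposure": 9},
}

# Triage order: highest composite score first.
for name, f in sorted(threats.items(), key=lambda kv: risk_score(kv[1]), reverse=True):
    print(f"{name}: {risk_score(f):.1f}")
```

Under these illustrative weights, the DDoS against a critical payments asset outranks the phishing campaign, even though the phishing campaign is rated as more exploitable.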
However, as important as reactive methods are, there is no doubt that the only way to truly tackle cybercrime is to become more proactive and, universally, that is what the security industry has been concentrating on.
Graduating from a traditional rule-based system, experts have employed machine-learning techniques, drawing on data insight to identify patterns and apply machine-readable context to events.
Machine learning is already used by many businesses to analyse big data sets. Amazon, for example, has deployed a machine learning solution, based on its own algorithms, that can predict customer spending habits.
As this proves invaluable, similar services are likely to spread across organisations of every kind. In the security world, machine-learning systems rely on anomaly detection: a model of normal behaviour is defined, and any outlier that deviates from this model is treated as an anomaly.
Importantly, the ‘normal’ model is not static. As more data is added, the definition of ‘normal behaviour’ continues to evolve and reflect the environment to ensure it remains accurate and up-to-date.
This also means that if unknown threats are assessed against the model, they are more likely to be identified because they do not fit the expected behavioural pattern.
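The evolving baseline described above can be sketched with simple summary statistics. A real system would model far richer behaviour; the threshold and data here are purely illustrative:

```python
import statistics

# A minimal sketch of anomaly detection against an evolving baseline
# (the 3-sigma threshold and the sample data are illustrative, not a product).
class Baseline:
    def __init__(self, threshold=3.0):
        self.history = []
        self.threshold = threshold  # flag values this many std-devs from the mean

    def observe(self, value):
        """Return True if value is anomalous vs. the current model, then learn it."""
        anomalous = False
        if len(self.history) >= 5:  # wait for a minimal baseline before judging
            mean = statistics.mean(self.history)
            stdev = statistics.stdev(self.history) or 1.0
            anomalous = abs(value - mean) > self.threshold * stdev
        # 'Normal' is not static: every observation updates the model.
        self.history.append(value)
        return anomalous

b = Baseline()
logins = [10, 12, 11, 9, 10, 11, 95]  # e.g. failed logins per hour, then a spike
flags = [b.observe(x) for x in logins]
print(flags)
```

Because the spike never appeared in the learned baseline, it is flagged even though no rule for it was ever written, which is how unknown threats can still be caught.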
As tactics on both sides have evolved in the digital era, deep learning has enhanced machine learning. This form of protection identifies a trigger, event and consequence based on datasets of many millions of malicious and legitimate files that have been fed into the deep learning engine.
Using this knowledge, the technology will make accurate assumptions enabling the detection and classification of any malicious file type, including morphic variations of known and unknown malware.
Automatic detection and blocking are almost simultaneous, bringing real-time protection to businesses. In security, deep learning has shown ground-breaking results compared with classical machine learning in detecting new malware, on any device, platform and operating system.
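The classification step described above can be sketched in greatly simplified form. A real engine trains deep, multi-layer networks on millions of samples; this toy single-neuron learner on a handful of made-up file features (entropy, a packed flag, a suspicious-import ratio) only illustrates the principle of learning a malicious/benign boundary from labelled examples:

```python
# A greatly simplified, hypothetical sketch of learning a malicious/benign
# classifier from labelled file features. Real deep learning engines train
# multi-layer networks on millions of files; this is a single perceptron.

def train_perceptron(samples, epochs=20, lr=0.1):
    """samples: list of (feature_vector, label), label 1=malicious, 0=benign."""
    w = [0.0] * len(samples[0][0])
    bias = 0.0
    for _ in range(epochs):
        for x, y in samples:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + bias > 0 else 0
            err = y - pred  # update weights only when the prediction is wrong
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            bias += lr * err
    return w, bias

# Features: [entropy, is_packed, suspicious_import_ratio] (illustrative values)
data = [
    ([7.9, 1, 0.8], 1), ([7.5, 1, 0.6], 1),  # malicious samples
    ([4.2, 0, 0.1], 0), ([5.0, 0, 0.2], 0),  # benign samples
]
w, b = train_perceptron(data)

def classify(x):
    return "malicious" if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else "benign"

print(classify([7.8, 1, 0.7]))  # an unseen, malware-like feature vector
```

The point of the sketch is that the decision boundary is learned from data rather than written by hand, which is what lets such systems generalise to morphic variants they have never seen.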
Information security professionals have battled for years to gain better insight into threat behaviour and to utilise the most up-to-date technology to protect against attacks. The most recent iterations utilise narrow artificial intelligence (AI) to help guide critical decision-making and take action based on the outcome.
Artificial neurons can easily outstrip the speed of biological neurons, allowing for accelerated decision making which is more agile than when just involving a human counterpart.
AI alleviates the amount of time spent on false alerts and notifications so dwell time is reduced and business critical data is more effectively protected. In the future, AI is expected to ease the taskforce’s workload, allowing more time to concentrate on complex issues that require manual intervention or cognitive problem solving.
At the moment, narrow AI – a concentrated form of AI that focuses on specific problems within a well-defined context – is in early stages for simple tasks in security operations.
>See also: How artificial intelligence will impact the role of security pros
It still has limitations as this type of AI is unable to improvise and acts only in a limited number of pre-determined ways. However, more advanced AI that can identify patterns, perform automated triage and take action against adversaries is already available in limited form.
The next 12 months are a crucial period in which we expect to see further integration of machine learning and deep learning techniques into organisations and technologies.
However, AI will perhaps remain out of reach for many due to the immense volume of data, trained models and the large processing capacity still required.
A hybrid approach to security operations combining automation and humans, or supervised machine learning, is not only critical in alleviating the current skills shortage in the information security and cyber security industry, but also provides significantly improved results over either a human or machine working alone.
The automation wave will accelerate in terms of maturity over the next few years.
The demand for automated, fast-paced decision making is already accepted in most organisations today as a way to sustain effectiveness and growth.
The next logical step is for the technology to be applied to the information security industry pervasively.
As existential risks continue to be identified, managed and mitigated, so too will the methods used mature. Automated threat and risk management is perhaps closer than most realise.
Sourced from Neil Thacker, deputy CISO, Forcepoint