As cyber attacks continue to grow and evolve, this article explores how data protection can benefit from artificial intelligence
As organisations' attack surfaces have grown since lockdown took hold and employees migrated to remote devices, pre-pandemic data protection practices have needed a revamp. Increasingly, this has entailed the automation of processes, powered by technologies such as artificial intelligence (AI), which can make data protection more efficient if implemented properly.
With this in mind, we take a look at some of the most valuable ways in which artificial intelligence lends itself towards data protection.
Looking past the stigma
Because AI relies heavily on data-driven algorithms, organisations may feel hesitant about using the technology to aid data protection practices. There is a fine balance to be struck between the efficiency of threat mitigation and the privacy of user data.
“There seems to be a stigma associated with AI and data protection. But, when implemented correctly, AI is very well placed to improve data protection across any industry,” said Stuart Hubbard, global AI services director at Zebra Technologies.
“If we take the use of AI within warehousing, an AI application can be built to process images or videos that require an action based on an input, for example, to alert someone if they’re near danger. When that’s the case, CCTV cameras wouldn’t need to track a person’s face; just monitor for potential danger within the warehouse. As such, if the system is designed to alert them, the video system would never need to store a person’s face, avoiding concerns around privacy and data protection.
“The rise of IoT devices and processing AI algorithms at the edge means that companies are using the data to make better business decisions in real time. All raw data outside of that will be discarded. This, alongside regulations like GDPR, is driving engineers and researchers to be more thoughtful about building efficient systems that can protect people’s privacy and drive a company’s bottom line.”
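Hubbard's warehouse example can be made concrete with a short sketch. The snippet below is a minimal illustration rather than Zebra's actual system: each frame is analysed in memory for a person entering a predefined danger zone, an alert is raised, and no frame, and therefore no face, is ever written to storage. The danger-zone coordinates, the alert function and the use of OpenCV's stock person detector are all illustrative assumptions.

```python
# Minimal sketch of a privacy-preserving edge video pipeline: detect a
# person near a danger zone, alert, and persist nothing.
import cv2

DANGER_ZONE = (400, 200, 640, 480)  # x1, y1, x2, y2 -- hypothetical zone


def overlaps(box, zone):
    """Check whether a person's bounding box intersects the danger zone."""
    x, y, w, h = box
    return x < zone[2] and x + w > zone[0] and y < zone[3] and y + h > zone[1]


def alert(box):
    print(f"ALERT: person near danger zone at {tuple(box)}")  # stand-in notifier


# OpenCV's built-in HOG person detector, standing in for whatever model
# a production system would run at the edge.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

cap = cv2.VideoCapture(0)  # edge camera; index 0 is a placeholder
while True:
    ok, frame = cap.read()
    if not ok:
        break
    boxes, _ = hog.detectMultiScale(frame, winStride=(8, 8))
    for box in boxes:
        if overlaps(box, DANGER_ZONE):
            alert(box)
    # The frame goes out of scope here: nothing is stored, so no faces are kept.
cap.release()
```

The design choice matches the quote: the system acts on the detection result and discards the raw footage, so privacy concerns around stored video never arise.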
Security automation
One key use case for AI within data protection that organisations are adopting is security automation, which combines technologies such as artificial intelligence, analytics and automated orchestration. With the costs of data breaches continuing to rise, it pays to have such capabilities in place to mitigate financial damage.
“A ‘security-first’ lifestyle is a must for a remote workforce. While we’re working from home offices, on the go, and across multiple devices, employees must be empowered with the right tools and information to protect their organisation’s data,” explained Rick Goud, co-founder and CIO of Zivver.
“Today, technology can do the heavy lifting for businesses, enabling people to focus on what they do best, safe in the knowledge that machine learning and pre-set rules are working in the background to protect their data.
“Smart email security technology is a great example of security automation in practice. Such solutions are designed to protect organisations from human error in real time, instilling best practice, raising awareness, and preventing data breaches in outbound email.
“After all, it’s not the individual’s responsibility to be the ‘data protector’ of an organisation — they need to focus on their job, be able to make decisions in the moment and have the confidence that the sensitive information they are sharing is always secure. Empowering staff to be able to send sensitive information securely is the step forward required to avoid data breaches, and the damaging consequences that can occur.”
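As an illustration of the pre-set rules Goud mentions, the sketch below scans an outbound message for patterns that look like sensitive data before it is sent. The patterns, the pattern names and the handling are assumptions for illustration; commercial tools combine rules like these with trained machine learning classifiers.

```python
# Minimal sketch of an outbound-email check: scan the draft for
# sensitive-looking content and intervene before it leaves.
import re

SENSITIVE_PATTERNS = {
    "card number": re.compile(r"\b(?:\d[ -]?){15,16}\b"),
    "UK NI number": re.compile(r"\b[A-Z]{2}\d{6}[A-D]\b"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}


def check_outbound(body: str) -> list[str]:
    """Return the names of any sensitive patterns found in the message."""
    return [name for name, rx in SENSITIVE_PATTERNS.items() if rx.search(body)]


draft = "Hi, my card is 4111 1111 1111 1111, please process the refund."
findings = check_outbound(draft)
if findings:
    # In a real product this would warn the sender or reroute the message
    # to an encrypted channel rather than just print.
    print(f"Warning: message appears to contain {', '.join(findings)}.")
```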
Automating threat evaluation
AI can also lend itself towards automating the evaluation of threats to the network, as Jon Southby, senior cloud consultant at HeleCloud, discusses: “As more and more organisations move towards hybrid workflows, the attack surface expands, and the threats posed to the heart of a business only multiply and increase in severity. Thankfully, by using AI, organisations can automate the process of evaluating threats to their corporate data.
“AI can sift through vast amounts of data and quickly identify abnormal activity as it happens, allowing a proactive approach to be deployed and the appropriate countermeasures to be applied. What’s more, this occurs in real time, as opposed to relying on malware databases, which can be slow to update and disseminate.”
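A minimal sketch of this kind of automated threat evaluation: an anomaly detector is trained on a baseline of normal activity and flags deviations as they arrive, with no dependence on a signature database. The session features, contamination rate and example events below are illustrative assumptions.

```python
# Sketch: flag anomalous sessions against a learned baseline of normal
# activity, rather than matching against known-malware signatures.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Baseline of normal behaviour: [bytes transferred, login hour, failed logins].
normal = rng.normal(loc=[50_000, 10, 0.2], scale=[10_000, 2, 0.5], size=(1_000, 3))
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# New events arriving in (near) real time.
events = np.array([
    [52_000, 11, 0],    # typical working-hours session
    [900_000, 3, 12],   # huge transfer at 3am with repeated failed logins
])
for event, verdict in zip(events, detector.predict(events)):
    if verdict == -1:   # -1 marks an anomaly in scikit-learn's convention
        print(f"Flagged for countermeasures: {event}")
```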
Signalling breaches
The automation that artificial intelligence brings can greatly reduce the strain on security staff by notifying the company of behavioural changes that show signs of a breach.
“AI can signal not just breaches in data protection but also breaches related to the content within that data,” said Nick Atkin, head of solution architecture at Dubber Technology.
“It is one thing to know a recording of a customer call was inappropriately shared – it’s another to know it contained inappropriate content. What is often ignored is the ability for AI to signal data breaches based on behavioural data – for instance, a rapid decline in calls might signal a shift to unauthorised messaging applications.”
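The behavioural signal Atkin describes can be sketched in a few lines: compare the latest day's call volume against a rolling baseline and flag a sharp drop. The window length, threshold and sample history are assumptions for illustration.

```python
# Sketch: a sharp drop in daily call volume against a rolling baseline
# may indicate traffic shifting to unauthorised messaging channels.
from statistics import mean, stdev


def call_volume_alert(daily_calls: list[int], window: int = 14, z: float = 2.0) -> bool:
    """Flag if the latest day is more than z standard deviations below baseline."""
    baseline, today = daily_calls[-window - 1:-1], daily_calls[-1]
    mu, sigma = mean(baseline), stdev(baseline)
    return sigma > 0 and today < mu - z * sigma


history = [120, 115, 130, 125, 118, 122, 127, 119, 124, 121, 126, 123, 120, 125, 40]
if call_volume_alert(history):
    print("Behavioural anomaly: call volume has dropped sharply; "
          "check for a shift to unauthorised messaging apps.")
```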
AI as a black hat
Additionally, AI is capable of finding potential data vulnerabilities and amending them before an attacker can take advantage.
“It’s unusual to think that AI could lend itself towards data protection. After all, more often than not, AI is in tension with data protection – especially in applications like facial recognition and surveillance, or when AI is used to sway buyer behaviour,” said Harvey Lewis, associate partner at EY.
“However, AI can be used in a ‘black hat’ role, where the idea is to tease out potential issues and correct them before implementation – much like the way that ethical hackers test the defences of websites and applications before they become open to the public.
“Alternatively, modern generative adversarial networks – GANs – can be used to create lifelike faces, biometric information and many other forms of artificial data. When AI is trained on these simulations, the need to gather real personal data is diminished.”
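The synthetic-data idea can be illustrated with a simple stand-in: fit a generative model to personal records (simulated ones here), sample artificial records with similar statistics, and train downstream systems on those instead. A Gaussian mixture substitutes below for the generative networks Lewis describes; the features and sizes are assumptions.

```python
# Sketch: generate artificial records so real personal data need not be kept.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(42)

# Stand-in for real personal data: e.g. [age, income] of 500 customers.
real = np.column_stack([
    rng.normal(40, 12, 500),
    rng.lognormal(10, 0.4, 500),
])

generator = GaussianMixture(n_components=3, random_state=0).fit(real)
synthetic, _ = generator.sample(500)  # artificial records, similar statistics

# The real records can now be discarded; training proceeds on `synthetic`.
print("real mean:", real.mean(axis=0), "synthetic mean:", synthetic.mean(axis=0))
```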
Complying with regulation
Ultimately though, for AI to truly provide value to a data protection strategy, organisations must ensure that regulations are adhered to, as well as considering ethical matters associated with the technology.
Peter van der Putten, director of decisioning & AI at Pegasystems and assistant professor in AI at Leiden University, explained: “I wouldn’t necessarily state that AI helps data protection. Just like any other form of processing personal data, AI will have to comply with privacy regulation.
“But it doesn’t stop there. In AI ethics, other principles play a key role as well, such as accountability, fairness, transparency and explainability, and robustness.
“Ultimately, the key aspect is the actual goal of the AI system – is it just benefitting the company, or also the customer or citizen, and what is its risk of doing harm? Riskier AI systems will come under greater scrutiny, and AI regulations are being proposed that are very similar to the data privacy regulations introduced over recent years, including fines of up to a defined percentage of revenue.”