As businesses explore the benefits of artificial intelligence (AI) and machine learning (ML)-assisted computing tools, interest in and adoption of the technology continue to grow rapidly, especially in the enterprise. Most conversations today revolve around ChatGPT, an AI/ML-based tool for generating text from inputs or prompts. ChatGPT has already been used to augment existing capabilities in customer-facing products, such as Microsoft’s integration of ChatGPT into its Bing search engine to improve search result quality.
Individuals have also seized on the accessibility of ChatGPT to rapidly produce content, increasing their productivity while decreasing cost. At the same time, the spotlight on AI has raised cybersecurity concerns with the European Data Protection Board (EDPB), a European regulatory body overseeing data protection rules, which recently launched a dedicated taskforce to regulate the use of ChatGPT. This directly follows Italy’s outright ban on the technology and concerns from several other EU countries around its use.
One challenge facing task forces like this is that AI/ML brings technology closer to the edge of what our normal human senses can distinguish. Because it plays on our most intuitively trusted senses, sight and hearing, AI-generated content is harder than ever to detect. AI tools such as ChatGPT may lower the bar for adversaries to execute successful cyber attacks, producing convincing text, video and audio for use in phishing and ransomware campaigns.
Cyber criminals don’t always need sophisticated methods of attack to compromise a target environment. In fact, adversaries often take the path of least resistance, employing the tactics that cost the least time and money.
In today’s business world of geographically distributed workforces, AI tools enable increasingly credible social engineering attacks. Controls that were once difficult to circumvent, such as voice verification of identity on a password reset, will become obsolete. While ChatGPT by itself may not (yet) produce convincing spear phishing output, it can fix the basic quality problems that betray most phishing campaigns, such as poor grammar and obviously inaccurate information.
The threat of account takeover through phishing and social engineering is compounded by the prevalence of outdated authentication methods. According to recent research from Yubico, 59 per cent of employees still rely on usernames and passwords to authenticate to their online accounts, and more than half admit to writing down or sharing a password, increasing the risk of account takeover.
How can organisations bolster their defences?
As adoption of AI/ML-backed tools continues to grow, it will be important to focus on key ways to mitigate the risks associated with their use. As the efficacy of identity measures that companies have trusted for decades, such as voice and video verification, erodes, a strongly linked electronic identity becomes even more important.
Phishing-resistant credential solutions such as security keys, which are hardware-backed and purpose-built around cryptographic principles, excel in these scenarios. Security keys that support FIDO2 also ensure that credentials are tied to a specific relying party. This binding prevents attackers from preying on simple human error, such as our inability to spot a 0 (zero) versus an O (capital O) in a nefarious website URI.
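To make that binding concrete, the sketch below shows a minimal WebAuthn registration ceremony as it might run in the browser. The domain, user details and algorithm choice are hypothetical placeholders rather than a production configuration; the point is that the credential is scoped to the relying party ID, so a look-alike domain cannot use it.

```typescript
// A minimal, illustrative WebAuthn registration ceremony (browser-side).
// All identifiers here are hypothetical placeholders.
const publicKeyOptions: PublicKeyCredentialCreationOptions = {
  // In production the challenge is generated by the server; a local random
  // value is used here only to keep the sketch self-contained.
  challenge: crypto.getRandomValues(new Uint8Array(32)),
  rp: {
    id: "example.com", // the credential is scoped to this relying party
    name: "Example Corp",
  },
  user: {
    id: new TextEncoder().encode("user-1234"), // opaque user handle
    name: "alice@example.com",
    displayName: "Alice",
  },
  pubKeyCredParams: [{ type: "public-key", alg: -7 }], // -7 = ES256
  authenticatorSelection: {
    authenticatorAttachment: "cross-platform", // e.g. a hardware security key
    userVerification: "preferred",
  },
};

// The browser will only complete this ceremony when the page's origin
// matches rp.id, so a look-alike domain can neither create nor later
// exercise this credential.
const credential = await navigator.credentials.create({
  publicKey: publicKeyOptions,
});
```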
With security keys, credentials are securely stored in hardware, which prevents them from being transferred to another system accidentally or without the user’s knowledge. FIDO2 authenticators also greatly reduce the efficacy of social engineering through phishing: users cannot be tricked into handing a one-time password to an attacker, and there are no SMS authentication codes to steal through a SIM-swapping attack.
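A companion sketch of the sign-in (assertion) ceremony, again using hypothetical placeholder values, illustrates why there is no shared secret for a user to reveal: the authenticator signs a server-issued challenge inside the hardware, and only the signature leaves the device.

```typescript
// A minimal, illustrative WebAuthn sign-in (assertion) ceremony.
// Placeholders stand in for values the server and a prior registration
// would supply (hypothetical, for illustration only).
const serverChallenge = crypto.getRandomValues(new Uint8Array(32)); // really issued by the server
const storedCredentialId = new Uint8Array(16); // really the ID returned at registration

// The user approves with a touch or PIN; at no point is there a code they
// could be tricked into reading out or pasting into a phishing page.
const assertion = await navigator.credentials.get({
  publicKey: {
    challenge: serverChallenge,
    rpId: "example.com", // must match the rp.id used at registration
    allowCredentials: [{ type: "public-key", id: storedCredentialId }],
    userVerification: "preferred",
  },
});

// The private key never leaves the hardware: the authenticator signs the
// challenge internally, and the server verifies the signature against the
// public key it captured at registration.
```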
As technology continues to evolve and tools such as ChatGPT change the working world, companies must respond by implementing phishing-resistant multi-factor authentication (MFA) to protect critical data and assets. This must be supported by ongoing security education across the workforce to bolster defences and put businesses in the best possible position to mitigate emerging cyber threats.
Ben Eichorst is principal security engineer at hardware authentication vendor Yubico.