AI is advancing at breakneck speed, but its security measures are struggling to keep pace. The recent cyberattack on DeepSeek, an emerging open-source AI platform, is a stark reminder that AI systems – especially agentic AI – are vulnerable to the same threats plaguing the broader digital ecosystem.
DeepSeek, often dubbed the “ChatGPT Killer” for its advanced language models, briefly surged past OpenAI’s ChatGPT in popularity. But just as quickly, it became a target. A distributed denial-of-service (DDoS) attack forced the company to halt new user registrations, shaking confidence in AI’s security readiness.
This wasn’t an isolated incident. AI models, whether open-source or proprietary, are increasingly being exploited – through adversarial machine learning attacks, data poisoning, or API breaches. As AI agents take on more critical business functions, these risks will only grow.
To address this, the AI industry must rethink its approach to security. A fundamental piece of the solution is an “AI kill switch” – a mechanism to halt compromised AI systems before they spiral out of control. And at the heart of this kill switch is machine identity security.
The double-edged sword of AI
AI has become a driving force in digital transformation, delivering breakthroughs across industries such as healthcare, finance, logistics and marketing. Now, with agentic AI, large language models (LLMs) are evolving beyond text generation to take autonomous actions. But all this dramatic progress comes with significant risk. Just as AI speeds up innovation, so too does it supercharge the capabilities of cybercriminals.
In the case of DeepSeek, several researchers have already uncovered vulnerabilities in its systems, including jailbreaks that coax out malicious outputs such as ransomware code and even toxin development instructions. Meanwhile, on Jan. 29, another research team discovered an exposed ClickHouse database leaking sensitive data, including user chat history, log streams, API secrets and operational details. The exposure also allowed full control of the database and a path to privilege escalation – all without authentication.
These are just a few examples of what NIST categorises as Adversarial Machine Learning (AML) attacks, which exploit weaknesses in AI models at various stages of their lifecycle, from development and testing to deployment.
CyberArk’s 2024 research showed that security leaders share these apprehensions: 92 per cent have concerns about the use of AI-generated code, 77 per cent worry about data poisoning – where attackers manipulate training data to skew AI outputs – and 75 per cent are deeply concerned about AI model theft.
These apprehensions aren’t going away, given the astounding level of attention that the DeepSeek attacks received. As this case showed, effective attacks don’t just disrupt operations – they can have global ramifications.
What is an AI kill switch and why do we need it?
Whether it’s an attacker corrupting or stealing a model, a cybercriminal impersonating AI to gain unauthorised access, or a new form of attack we haven’t yet seen, security teams need to be proactive. This is why an AI “kill switch” is critical. It enables organisations to pause, contain, or disable compromised AI systems before they cause serious damage.
Far from a simplistic shutdown mechanism, an AI kill switch is about enforcing control through machine identity security. By verifying and managing the unique identities of AI models during training, deployment, and operation, organisations can prevent unauthorised access, cut off compromised systems, and stop threats from escalating across networks. Without this, AI systems will remain vulnerable to exploitation at scale.
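To make the idea concrete, here is a minimal Python sketch of an identity-based kill switch, written under simplified assumptions: the agent names, the IdentityRegistry class and the in-memory revocation flag are illustrative stand-ins for the certificate authorities, secrets managers and identity platforms a real deployment would use. The point it illustrates is that every action an agent takes is gated on a live identity check, so revoking the identity halts the agent immediately.

```python
import threading
import time
from dataclasses import dataclass, field


@dataclass
class AgentIdentity:
    """A hypothetical record of one AI agent's machine identity."""
    agent_id: str
    revoked: bool = False
    issued_at: float = field(default_factory=time.time)


class IdentityRegistry:
    """Tracks which AI agents are trusted and lets operators revoke them."""

    def __init__(self) -> None:
        self._agents: dict[str, AgentIdentity] = {}
        self._lock = threading.Lock()

    def register(self, agent_id: str) -> AgentIdentity:
        with self._lock:
            identity = AgentIdentity(agent_id)
            self._agents[agent_id] = identity
            return identity

    def revoke(self, agent_id: str) -> None:
        # The "kill switch": flipping this flag blocks the agent everywhere
        # its identity is checked.
        with self._lock:
            self._agents[agent_id].revoked = True

    def is_trusted(self, agent_id: str) -> bool:
        with self._lock:
            identity = self._agents.get(agent_id)
            return identity is not None and not identity.revoked


def perform_action(registry: IdentityRegistry, agent_id: str, action: str) -> str:
    # Every action is gated on a live identity check, so a revoked
    # (compromised) agent is stopped before it can act again.
    if not registry.is_trusted(agent_id):
        raise PermissionError(f"Agent {agent_id} is revoked or unknown; action blocked")
    return f"Agent {agent_id} performed: {action}"


if __name__ == "__main__":
    registry = IdentityRegistry()
    registry.register("report-generator-01")
    print(perform_action(registry, "report-generator-01", "summarise sales data"))
    registry.revoke("report-generator-01")  # operator hits the kill switch
    try:
        perform_action(registry, "report-generator-01", "summarise sales data")
    except PermissionError as err:
        print(err)
```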
The missing puzzle piece: machine identity security
How do you develop your “AI kill switch”? The answer lies in securing the entire machine-driven ecosystem that AI depends on. Machine identities – such as digital certificates, access tokens and API keys – authenticate and authorise AI functions and their ability to interact with and access data sources. Simply put, LLMs and AI systems are built on code, and like any code, they need constant verification to prevent unauthorised access or rogue behaviour.
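As an illustration of how such identities might work in practice, the sketch below issues a short-lived, scope-limited credential to an AI agent and verifies it before granting access to a data source. The signing key, agent ID and scope names are hypothetical, and a real deployment would rely on an established identity or secrets-management platform rather than hand-rolled tokens.

```python
import base64
import hashlib
import hmac
import json
import time

# Hypothetical signing key; in practice this would live in a secrets manager
# or hardware security module, never in source code.
SIGNING_KEY = b"example-signing-key"


def issue_token(agent_id: str, scope: list[str], ttl_seconds: int = 300) -> str:
    """Issue a short-lived, scope-limited credential for an AI agent."""
    claims = {"sub": agent_id, "scope": scope, "exp": time.time() + ttl_seconds}
    payload = base64.urlsafe_b64encode(json.dumps(claims).encode())
    signature = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return f"{payload.decode()}.{signature}"


def verify_token(token: str, required_scope: str) -> dict:
    """Check signature, expiry and scope before granting access to a data source."""
    payload, signature = token.rsplit(".", 1)
    expected = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(signature, expected):
        raise PermissionError("Invalid signature: credential cannot be trusted")
    claims = json.loads(base64.urlsafe_b64decode(payload))
    if time.time() > claims["exp"]:
        raise PermissionError("Credential expired: agent must re-authenticate")
    if required_scope not in claims["scope"]:
        raise PermissionError(f"Agent not authorised for scope '{required_scope}'")
    return claims


if __name__ == "__main__":
    token = issue_token("support-agent-07", scope=["read:tickets"])
    claims = verify_token(token, required_scope="read:tickets")
    print(f"Access granted to {claims['sub']} for read:tickets")
    try:
        verify_token(token, required_scope="write:billing")
    except PermissionError as err:
        print(err)
```

The short expiry and narrow scope limit the blast radius if a credential leaks: a stolen token is useless within minutes, and it never authorises more than the one function the agent needs.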
If attackers breach these identities, AI systems can become tools for cybercriminals, capable of generating ransomware, scaling phishing campaigns and sowing general chaos. Machine identity security ensures AI systems remain trustworthy, even as they scale to interact with complex networks and user bases – tasks that can and will be handled autonomously by AI agents.
Without strong governance and oversight, companies risk losing visibility into their AI systems, leaving them vulnerable. Attackers can exploit weak security measures, using tactics like data poisoning and backdoor infiltration – threats that are evolving faster than many organisations realise.
The DeepSeek attack is a wake-up call for AI and cybersecurity. Companies, governments, and researchers must move beyond reactive responses and develop security frameworks that evolve as quickly as AI-powered threats.
Machine identity security is a critical first step – it establishes trust and resilience in an AI-driven world. This becomes even more urgent as agentic AI takes on autonomous decision-making roles across industries.
The economic, technological, and security risks are too significant to ignore. Without the right safeguards – ones as strong as the technology itself – AI’s potential for progress can just as easily become a liability.
David Higgins is senior director, field technology office at CyberArk.