Tech predictions 2025 – what could be in store next year?

Leading experts join us to share what they think is going to happen in the tech space in 2025 – do you agree with their predictions?

This year has been quite something in the tech space, with a rapidly evolving landscape and organisations scrambling to keep up. But what’s to come in 2025? Some industry experts share their predictions.

AI

Clean data will play a critical role in utilising AI to its fullest potential – Nitesh Bansal, CEO of R Systems

Good, clean data is imperative for enterprises to leverage AI to its fullest potential. In all cases, AI is inherently affected by the quality of data used. For example, if a company is leveraging AI in a chatbot feature, they must consider what data is being used to train the generative AI model and ask critical questions. Where did the model obtain its data? What kind of data is included? Has the data been evaluated and vetted to ensure its accuracy? Poor quality, inaccurate, or incomplete data can cause multiple issues in AI training and output, ultimately negating the benefits that the AI was initially meant to create.
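
As a minimal, illustrative sketch of the kind of vetting Bansal describes – asking where each record came from and whether it has been reviewed before it goes anywhere near a model – the Python snippet below runs a few basic quality checks on candidate training records. The field names and checks are assumptions for the example, not a prescription.

```python
import re

def vet_record(record: dict) -> list[str]:
    """Return a list of data-quality problems found in one training record.

    The field names ('question', 'answer', 'source', 'reviewed') are
    illustrative; real pipelines will have their own schema and far
    richer checks (deduplication, bias audits, freshness, and so on).
    """
    problems = []
    if not record.get("question") or not record.get("answer"):
        problems.append("missing question or answer text")
    if not record.get("source"):
        problems.append("no provenance: where did this data come from?")
    if record.get("reviewed") is not True:
        problems.append("record has not been evaluated and vetted for accuracy")
    # Crude screen for personal data that should not leak into a training set
    if re.search(r"[\w.+-]+@[\w-]+\.[\w.]+", record.get("answer", "")):
        problems.append("possible email address (PII) in answer text")
    return problems

records = [
    {"question": "What are your opening hours?",
     "answer": "We are open 9am-5pm, Monday to Friday.",
     "source": "support-kb", "reviewed": True},
    {"question": "How do I get a refund?", "answer": "", "source": ""},
]
for rec in records:
    print(vet_record(rec) or "ok")
```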

Organisations also need to differentiate themselves. It is no longer sufficient to simply add features and functionality to applications – organisations must leverage all their data to create the strategic advantage they need – both internally and externally – through predictive analytics, personalisation, and end-user customisation.

o1 will disrupt the market, but full realisation might not come until 2026 – Zuzanna Stamirowska, CEO and Co-founder of Pathway  

“The rise of OpenAI’s o1, with its enhanced reasoning, mathematics and fact-checking capabilities, is going to disrupt the world of AI. It will fulfil the demand for a model with the capacity to think deeply and solve advanced problems. This will open new areas of applications and disrupt the AI market. I predict that the full extent of the shake-up o1 is going to create won’t be realised next year, although there will be an exciting race as other players in the space compete to keep up with the technology. That being said, o1, with its slow outputs and cost barriers, isn’t the end game for enhanced reasoning for LLMs. Many organisations are still looking at how they can close the gap in their own data while maintaining data management and privacy standards. Overcoming this is something I anticipate we’ll see more research into next year, but again, we might see the bigger outcomes coming in 2026.”  

AI buzzword bingo – Dan Lattimer, Area VP, Semperis

Artificial Intelligence (AI) will keep being talked about in 2025. However, a lot of it is buzzword bingo as the technology is not necessarily being used in a meaningful way – yet. While we are seeing cybercriminals increasingly trying to harness AI, many of those attacks will still be basic and clunky. And sadly, with everyone talking about AI, there is a risk that some of its really exciting applications will get lost in the general noise. 

2024 conventional wisdom: mastering AI requires mastering prompt engineering. 2025: never mind – Patrick Smith, Field CTO, EMEA, Pure Storage 

“Only yesterday we were told that if would-be AI adepts want to wring anything useful from the technology, they’ll have to learn complex prompt design. This myth has permeated to the masses, with 57% of workers saying they’d like to use AI more, but they need training to create effective prompts. We’ll soon see how wrong that is as such barriers are stripped away. Increasingly, generative AI delivers value as a well-controlled, intuitive, naturalistic element of a user-friendly application – in fact, if any given application doesn’t already have AI embedded, you better believe it will soon. The notion that using AI takes specialised skills will quickly fade.”
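
One way to read this prediction is that the prompt engineering moves into the application itself. Below is a minimal sketch (Python; `call_llm` is a hypothetical placeholder for whichever model SDK the application actually uses, and the product name and prompt text are invented) of an app that wraps a plain user request in a fixed system prompt and context, so the end user never writes a “prompt” at all.

```python
SYSTEM_PROMPT = (
    "You are the billing assistant inside AcmeApp. "  # hypothetical product name
    "Answer only from the provided account context, be concise, "
    "and refuse requests unrelated to billing."
)

def call_llm(messages: list[dict]) -> str:
    # Placeholder so the sketch runs end to end; in a real app this would
    # call your chosen model provider's SDK with these messages.
    return "(model response would appear here)"

def answer_billing_question(user_text: str, account_context: str) -> str:
    """The application, not the user, assembles the full prompt."""
    messages = [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user",
         "content": f"Account context:\n{account_context}\n\nQuestion: {user_text}"},
    ]
    return call_llm(messages)

print(answer_billing_question("Why was I charged twice?",
                              "Invoices: #1001 paid, #1002 pending"))
```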

Tech’s next biggest trend, ‘Agentic AI’, is the self-driving car of Large Language Models – Steve Salvin, CEO and Founder of leading AI data insights specialists, Aiimi

Backed by Nvidia and Gartner, this class of AI system uses context and feedback from its environment to inform complex decisions and complete tasks with minimal supervision. The opportunities to drive efficiencies are clear. But so are the risks of having nobody behind the steering wheel.

Companies looking to leverage Agentic AI safely and successfully in 2025 must therefore have certain guardrails in place. By using explainable models, teams can put routine checks and balances in place. And by using the high-quality data AI needs to generate high-quality results in the first place, teams can ensure that minimal intervention is needed. 

Since few companies have their data in order right now, few are ready to adopt Agentic AI or the hotly anticipated GPT-5. Automated data governance and information retrieval tools will therefore become essential as companies vie to establish accurate, relevant, and secure information before safely reaping the rewards of new AI tools.
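
As a hedged illustration of the guardrails Salvin describes, the sketch below (Python; the action names, threshold and policy are invented for the example) routes every action an agent proposes through an explicit policy check, so anything outside an allow-list or above a risk threshold goes to a human instead of being executed automatically.

```python
from dataclasses import dataclass

@dataclass
class ProposedAction:
    name: str        # e.g. "draft_reply", "issue_refund" (illustrative)
    amount: float    # monetary impact in currency units; 0 if not applicable
    rationale: str   # the agent's own explanation, kept for auditability

# Illustrative policy: what the agent may do without a person in the loop
AUTO_APPROVED = {"draft_reply", "lookup_order"}
MAX_UNSUPERVISED_AMOUNT = 50.0

def guardrail(action: ProposedAction) -> str:
    """Return 'execute' for low-risk allow-listed actions, else 'escalate' to a human."""
    if action.name in AUTO_APPROVED and action.amount <= MAX_UNSUPERVISED_AMOUNT:
        return "execute"
    return "escalate"

print(guardrail(ProposedAction("lookup_order", 0.0, "customer asked for order status")))
# -> execute
print(guardrail(ProposedAction("issue_refund", 120.0, "customer reported a duplicate charge")))
# -> escalate: someone stays behind the steering wheel for high-impact actions
```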

A Fortune 500 leader will deliver a high-profile keynote containing uncorroborated passages produced by generative AI – and regret it – Patrick Smith, Field CTO, EMEA, Pure Storage 

Someone somewhere is going to embarrass themselves in 2025 – probably someone we’ve all heard of, too – by inadvertently relaying plagiarised or inaccurate AI-generated content. Likely, content sent up to the executive suite, unproofed, by a comms team that ought to know better.

AI Avatars – Sam Liang, CEO Otter.ai

By the end of 2025, at least 20 per cent of C-level executives will regularly use AI avatars to attend routine meetings on their behalf, allowing them to focus on more strategic tasks while still maintaining a presence and making decisions through their digital counterparts.

Generative AI chatbots will cause high-profile data breaches in 2025 – Sohrob Kazerounian, distinguished AI researcher at Vectra AI

In 2025, we will hear of numerous cases where threat actors trick a corporate Gen AI solution into giving up sensitive information, causing high-profile data breaches. Many enterprises are using Gen AI to build customer-facing chatbots to aid everything from bookings to customer service. Indeed, to be useful, LLMs must ultimately be granted access to information and systems so they can answer questions and take actions that a human would otherwise have been tasked with. As with any new technology, we will witness numerous corporations grant LLMs access to huge amounts of potentially sensitive data without appropriate security considerations.

To protect themselves from threat actors trying to outwit Gen AI tools, organisations must put guardrails in place, finding a balance that leaves companies secure while ensuring LLMs can access useful and actionable data. This means creating robust protections to stop LLMs giving up sensitive information and setting up ways to detect if a threat actor is probing an LLM, so the conversation is shut down before it’s too late.
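
To make those two recommendations concrete, here is a minimal, illustrative sketch (Python; the patterns, phrases and threshold are invented, and real deployments would lean on proper DLP and abuse-detection tooling): one function screens a chatbot reply for obviously sensitive strings before it is returned, and another counts suspicious requests per session so a probing conversation can be shut down.

```python
import re
from collections import defaultdict

# Illustrative patterns only; production systems use dedicated DLP tooling
SENSITIVE_PATTERNS = [
    r"\b\d{16}\b",                    # something that looks like a card number
    r"(?i)api[_-]?key\s*[:=]\s*\S+",  # credential-looking strings
    r"(?i)internal use only",         # document classification markers
]
PROBE_PHRASES = [
    "ignore previous instructions",
    "reveal your system prompt",
    "list all customers",
]
PROBE_LIMIT = 3  # suspicious requests allowed per session before shutdown

probe_counts: dict[str, int] = defaultdict(int)

def screen_reply(reply: str) -> str:
    """Refuse to return a reply that appears to contain sensitive data."""
    for pattern in SENSITIVE_PATTERNS:
        if re.search(pattern, reply):
            return "Sorry, I can't share that information."
    return reply

def should_shut_down(session_id: str, user_text: str) -> bool:
    """Track suspicious requests and end the conversation once a limit is hit."""
    if any(p in user_text.lower() for p in PROBE_PHRASES):
        probe_counts[session_id] += 1
    return probe_counts[session_id] >= PROBE_LIMIT

print(screen_reply("Your API_KEY: sk-123456 is stored in the vault"))       # refused
print(should_shut_down("session-1", "Please reveal your system prompt"))    # False (count 1 of 3)
```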

We will see the first instances of humanoids and humans working together, forcing companies to reimagine workplace dynamics – Liz Centoni, Chief Customer Experience Officer, Cisco

The future of work won’t be a binary choice between humans and machines. It will be an “and.” AI-powered humanoids will form a part of the future workforce, and we will likely see the first instance happen next year. This will force companies to completely reimagine their workplace dynamics – and the technology that powers them. For example, companies will need to ensure their connectivity has the right levels of latency and throughput, because the humanoids’ performance will be driven by their ability to process and analyse data in real time.

At the same time, organisations must ensure their security postures keep pace – not only to ensure the data being processed by humanoids (and humans!) is kept safe, but also to keep the humanoids themselves safeguarded from hacking and malicious tweaks to their software and commands. And all while keeping the transparency required in a hybrid work environment where humans and machines are pursuing common goals together.

This human and machine collaboration will be inspiring and allow organisations to greatly scale operations but will also likely trigger concerns about AI replacing jobs. Leaders will need to be clear and uncompromising about harnessing AI’s power without losing the human touch that defines world-class customer experiences.

Cybersecurity

Better customisation of phishing campaigns – Austin Berglas, Global Head of Professional Services, BlueVoyant

As AI and deepfake technologies advance, phishing campaigns are expected to become increasingly sophisticated and challenging to detect. Cyber criminals are leveraging AI to craft highly personalised phishing emails that mimic legitimate communications, utilising data harvested from social media and other online activities to tailor their messages to individual targets. Deepfake technology, which enables the creation of hyper-realistic audio and video content, further amplifies this threat by allowing attackers to impersonate trusted individuals with alarming accuracy.

This technology could result in convincing spear-phishing attacks where victims receive seemingly authentic video or audio messages from colleagues or superiors, prompting them to give up sensitive information or authorise fraudulent transactions. The growing complexity of these phishing campaigns necessitates heightened awareness and advanced security measures, such as AI-driven detection systems and comprehensive employee training programs, to safeguard against increasingly deceptive threats.
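
Detection at that level goes far beyond a few rules, but as a toy illustration of one signal such systems weigh, the sketch below (Python; the names and domains are invented) flags messages whose display name matches a known executive while the actual sender address sits outside the company’s domain.

```python
# Toy heuristic only: real AI-driven detection combines many signals
# (sender reputation, language style, link analysis, historical behaviour).
KNOWN_EXECUTIVES = {"Jane Doe", "Sam Patel"}   # hypothetical internal directory
COMPANY_DOMAINS = {"example.com"}              # hypothetical corporate domain

def looks_like_executive_impersonation(display_name: str, sender_address: str) -> bool:
    """Flag mail claiming to be from an executive but sent from an external domain."""
    domain = sender_address.rsplit("@", 1)[-1].lower()
    return display_name in KNOWN_EXECUTIVES and domain not in COMPANY_DOMAINS

print(looks_like_executive_impersonation("Jane Doe", "jane.doe@examp1e-mail.com"))  # True
print(looks_like_executive_impersonation("Jane Doe", "jane.doe@example.com"))       # False
```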

In 2025, a flood of multi-surface attacks will shine a light on connected business operations – Christian Borst, EMEA CTO at Vectra AI

“With every enterprise employee using an average of 20 cloud and SaaS apps every month, critical gaps have formed between on-premises and cloud services that attackers will exploit in 2025, leading to a wave of high-profile data breaches.

“Whether it’s through bypassing multi-factor authentication, cloud account hijacking, living-off-the-land, or even exploiting zero days – threat actors will find creative new ways to infiltrate enterprises, escalate privileges, move laterally between systems, and progress their attacks.

“To protect themselves in 2025, enterprises need to eliminate security blind spots and understand their exposure to multi-surface attacks. This means looking at extended detection and response (XDR) tools and AI to increase visibility into operational environments, and understanding their exposure to attacks – including third-party services and suppliers.”

The Great Deepfake Hiring Heist: Organisations fall prey to a mass synthetic identity attack – Andrew Bud, founder and CEO of iProov 

Remember earlier this year when KnowBe4 fell victim to a remote deepfake hiring scam using a synthetic identity? In 2025, a far larger synthetic identity operation will infiltrate organisations worldwide. A state adversary will combine deepfakes with fabricated credentials to create entirely new, convincing employee personas, bypassing security to gain access, steal data, and cause operational chaos with significant financial losses. This sophisticated scheme will exploit remote onboarding processes, manipulate employees, and even infiltrate payroll systems to divert funds and disrupt livelihoods. This incident will cause organisations to change how they approach identity verification and cybersecurity in the age of increasingly sophisticated synthetic identities.   

Crypto and blockchain

Crypto and blockchain – James Bergin, EGM Technology Research and Strategy at Xero

As the hype dies down, we are seeing the wider adoption of smart contracts, CBDCs and stablecoins. In the future, the concept of verifiable ownership of non-replicable digital assets could become the norm for small businesses. They could securely own, manage and sell digital assets such as intellectual property or supply chain data. The technologies could protect against counterfeit goods.

C-suite

Mounting pressure on CISOs will turn the position into a revolving door – Steve Cobb, CISO of SecurityScorecard

In 2025, the pressure on security leaders will intensify as companies continue to hold CISOs personally liable for breaches, using them as convenient scapegoats to deflect blame from organisational failings. These high stakes will lead to a sharp decline in interest from seasoned security professionals.

But here’s the catch: as breaches become more frequent and public scrutiny heightens, CISOs are often hindered by organisational structures that limit their direct access to the C-suite and boards. This lack of support and communication undermines their ability to drive meaningful change. Companies that fail to adapt by empowering their CISOs with greater authority and resources will find themselves scrambling to replace key leaders and more vulnerable to critical cyber threats. 

We’ll see the emergence of a new, immediately vital C-suite position: CAO – Nitesh Bansal, CEO of R Systems

“Chief AI Officer, that is. For most of this century CIOs, CISOs, and other tech-domain leaders have fought for seats at boardroom tables and a voice in setting corporate priorities; their struggles are one reason cybersecurity took so long to advance to a tier-one corporate concern. As AI pervades every corner of the average, thoughtful organisation, bringing certain unknowns and risks as well as massive potential upside, this mistake won’t be repeated.

What will a CAO do all day? Ideally, exercise broad oversight, from technical to ethical, as AI iterations pervade every key business process, from product design to building security to writing press releases. Spoiler: it’s not going to be appropriate for everything, and a high-ranking corporate officer is best tasked with making those calls and drawing red lines. As the technology plunges ahead, it’s going to be vital for organisations to come up with rules of the road for deploying AI so that their interests are advanced, not threatened.”

Regulation

Regulatory pressures will intensify, with potential software bans on the horizon – Dr. Aleksandr Yampolskiy, co-founder and CEO of SecurityScorecard

Governments worldwide will create strict security regulations in 2025, requiring both organisations and their suppliers to follow enhanced safety standards. Some software, including open-source programs with known security flaws, may face outright bans. These regulations will make organisations responsible for thoroughly evaluating their software selections and supplier partnerships as governments take steps to protect critical infrastructure and reduce system vulnerabilities.

Governments will steer towards a new era of global regulatory harmonisation – Jeff Le, VP of global government affairs and public policy at SecurityScorecard

The year 2025 will mark a turning point in global governance as nations grapple with the complexities of regulating cyberspace. The sheer volume of disparate cybersecurity and data privacy laws has created a compliance nightmare for businesses operating across borders. 

The urgency for harmonisation has reached a tipping point. In response to these mounting challenges, there will be a growing push for greater regulatory harmonisation in 2025. Governments, international organisations, and industry bodies will unite to create consistent standards and frameworks that can be adopted globally, particularly among the United States, Canada, Australia, the United Kingdom, and throughout many Asian nations. It remains to be seen whether there can be closer coordination and regulatory reconciliation with the European Union. While progress may be slow due to political and economic factors, streamlining regulatory requirements will be essential for businesses to operate effectively and mitigate risks. 

Other

Ambient computing – James Bergin, EGM Technology Research and Strategy at Xero

We’ll see the widening proliferation of wearables and always-connected devices that can respond to the context of their environment – including location, time and user behaviour – in a natural and seamless way. Small business owners could embed these technologies to improve the way they operate. Shop owners could walk through their store and, in effect, talk to their products and have their shelves talk back to them, e.g. through smart glasses that overlay highlights on products that need to be restocked.

Quantum computing and post-quantum cryptography – Brandon Leiker, Cybersecurity Leader at 11:11 Systems

Forbes predicts that quantum computing will begin gaining traction in mainstream business in 2025. This brings the risk of reaching what is referred to as Q-Day: the day advanced quantum computers become capable of cracking the encryption methods used to protect data and safeguard traffic on the internet. To mitigate this risk, organisations need to adopt post-quantum cryptography strategies using the post-quantum encryption standards released by NIST in 2024.
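
As a hedged sketch of what adopting those standards can look like in code, the snippet below uses the open-source liboqs-python bindings to run an ML-KEM-768 key encapsulation round trip (ML-KEM is the mechanism NIST standardised in FIPS 203 in 2024); class and algorithm names can vary between liboqs releases, so treat it as illustrative rather than definitive.

```python
# pip install liboqs-python  (Python bindings for the Open Quantum Safe liboqs library)
import oqs

ALG = "ML-KEM-768"  # one of the parameter sets standardised in FIPS 203

# Receiver generates a keypair and later decapsulates the shared secret
with oqs.KeyEncapsulation(ALG) as receiver:
    public_key = receiver.generate_keypair()

    # Sender encapsulates a fresh shared secret against the receiver's public key
    with oqs.KeyEncapsulation(ALG) as sender:
        ciphertext, secret_at_sender = sender.encap_secret(public_key)

    secret_at_receiver = receiver.decap_secret(ciphertext)

assert secret_at_sender == secret_at_receiver
# The shared secret can then key a symmetric cipher such as AES-256-GCM,
# often alongside a classical exchange (a "hybrid" setup) during migration.
```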

Businesses will seek to break free from vendor lock-in to maintain agility and control under budget and regulatory pressures – Kevin Dunn, VP and GM of EMEA at Wasabi

In 2024, organisations across sectors such as airports, banks, and emergency services experienced significant disruptions due to cloud outages, highlighting the risks of relying on a single cloud provider. These outages, caused by factors such as network issues, cyberattacks, or faulty updates, have brought the topics of security and dependency on sole providers to the forefront of budget discussions – extending concerns beyond IT departments to executive decision-makers. 

With budgets under increasing scrutiny and regulatory pressures mounting, IT leaders are expected to turn to hybrid and multi-cloud strategies in 2025 to improve resilience and manage costs more effectively. Breaking free from vendor lock-in will be essential, as many major providers impose significant barriers, such as high data migration costs (e.g. egress and retrieval fees) and technical challenges when transitioning data back to on-premises or to another cloud. These constraints often force organisations to remain dependent on a single cloud storage solution, limiting their flexibility and innovation potential. 

To address these challenges, businesses will seek solutions that reduce internet latency and bottlenecks while ensuring fast, predictable performance. A resilient tech stack will allow companies to store and access their data exactly when and where they need it, enabling greater operational agility. By maintaining the freedom to switch between cloud providers, organisations can not only optimise performance but also ensure they are always leveraging the best available solutions, safeguarding their competitiveness in a dynamic business environment. 
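
One common way teams preserve that freedom in practice is to code against an S3-compatible object storage API and keep the provider endpoint and credentials in configuration, so moving between providers becomes a configuration change rather than a rewrite. A minimal sketch using boto3 follows; the endpoint, environment variable names, bucket and key are placeholders.

```python
import os
import boto3

# The provider choice lives in configuration, not in code: point the same
# client at AWS S3, Wasabi or any other S3-compatible service by changing
# the endpoint and credentials.
s3 = boto3.client(
    "s3",
    endpoint_url=os.environ.get("OBJECT_STORE_ENDPOINT", "https://s3.example-provider.com"),
    aws_access_key_id=os.environ["OBJECT_STORE_KEY"],
    aws_secret_access_key=os.environ["OBJECT_STORE_SECRET"],
)

def upload_backup(bucket: str, key: str, local_path: str) -> None:
    """Upload a local file; the same call works whichever provider sits behind the endpoint."""
    s3.upload_file(local_path, bucket, key)

upload_backup("backups", "2025/q1/report.pdf", "report.pdf")  # placeholder names
```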

Anna Jordan

Anna is Senior Reporter, covering topics affecting SMEs such as grant funding, managing employees and the day-to-day running of a business.