I steer AI strategies for the UK Government, FCA and PwC. Here’s what they’re doing right

The four approaches the most promising AI projects have in common, to help other businesses leverage the technology securely and effectively

AI’s capacity to improve business performance and productivity is no secret. So, naturally, many companies are eager to keep on top of new tech. Yet, despite 82 per cent of businesses having already invested in AI strategies, over half of business leaders still aren’t sure how to use the technology effectively.

A third of AI projects are at risk of being abandoned after the proof-of-concept phase as a result. Considering how much there is to gain for companies that get AI right, this would be a great loss. 

Getting AI right requires a considered approach. In other words, companies must identify the right use cases, the right AI tools for those use cases, and the right data to underpin their models if they are to unlock the technology’s full potential. Drawing on my years of experience working with government bodies, global corporations and FTSE 100 companies, and on what I’m hearing from leaders tasked with operationalising AI in their businesses, I’ve set out below the four approaches that the most promising AI projects have in common, to help other businesses leverage the tech securely and effectively.

They prioritise strong data foundations 

Garbage in, garbage out: the phrase is well known in the tech world. An AI model’s output depends on the quality of its input, so if poor-quality or incomplete data is feeding an AI model, you’re unable to get the most out of it. Yet fewer than half of data leaders say their organisations have the right data foundation for generative AI.

The problem is that most companies’ digital infrastructure is made up of different, disjointed systems that don’t talk to each other. This means AI that is deployed in one area of the organisation is unable to access information stored in other systems, which is needed to generate the most accurate and relevant results. 

That’s why organising and interconnecting data lays a solid foundation for the best AI projects. We advise companies to ensure that their data is properly classified, labelled and stored. This ensures the right people and tools can access the right information, sensitive data is locked down, and AI’s outputs are properly informed. Crucially, it gives companies a complete picture of their data, so they can get the most out of the technology and use it to enhance decision-making. Automating this process is the most effective and efficient approach.

They start with: “What would I ask of AI?” 

AI tools that summarise documents may be useful, but if this is the only way a business uses AI, it risks severely underutilising the technology’s capabilities. Instead of choosing tools based on their popularity or marketing appeal, AI project leaders who want to truly maximise those capabilities should think first about what they would ask of AI.

In other words, identify the right use cases first, before considering what kind of AI they want to use. For example, we’re helping manufacturing companies use AI to find and explain technical information in manuals and service logs, to improve customer query response times. Other businesses may want to help sales teams identify top customer prospects faster. Taking a business-led approach means companies can invest in AI projects that add demonstrable value to day-to-day operations, and prioritise ensuring data quality in these areas first to get the most out of the tech. 

They put data security first 

Some AI tools, including popular ones like ChatGPT, are cloud-based. The issue with sending corporate data to a public cloud is that you’re potentially left open to loss of IP. Plus, many generative AI models are trained on the information entered into them, which could put company data at risk. Even when employees aren’t directly sharing private company information, these tools can start to learn about the company and its objectives through dialogues with staff asking work-related queries.

By using AI models that can be run privately, you’re able to leverage generative AI securely and avoid the risks of using public models with your private data. This enables companies to retain control of their data, keep confidential information watertight, and remain compliant – while reaping the full benefits of AI at the same time. 

They are underpinned by education 

A sizeable 40 per cent of workers say they don’t know how to use generative AI effectively at work. This knowledge gap can severely hinder progress with AI and means staff and the business miss out on productivity gains. It can also create security risks, for example if staff input sensitive data into public AI tools like ChatGPT.

That’s why the most promising AI projects are underpinned by education. Research carried out by Aiimi shows that almost half of UK business leaders are investing in developing AI skills within their workforces for this reason – so teams can better understand how and when to use AI. Informed employees will also be able to help identify where AI could be used to support them and the business. So, skills training and clear guidelines for staff around safe AI use must form a key part of any AI strategy. 

The potential value of AI is growing as fast as the technology is advancing. Organisations that take a considered, business-led approach to AI will be able to truly reap its rewards. 

Steve Salvin is the CEO and founder of AI data insights specialist Aiimi.
