CMA chief executive Sarah Cardell announced today that the regulator would look into “the real opportunities” of AI foundation models, as well as the guardrails required to maintain fair competition across the market, the Financial Times reported.
A review is being launched to assess the technology that powers widely used chatbots like ChatGPT, and “how the markets around those models are developing”, stated Cardell.
As well as mitigating any anti-competitive practices in the fast-evolving AI space, the CMA aims to keep consumers protected from possible harm.
While big tech firms including Microsoft and Google hold key stakes in the market in question — funding start-ups like ChatGPT vendor OpenAI and Claude developer Anthropic — Cardell said the CMA will not target “any particular companies”.
The watchdog executive went on to state that start-ups developing AI across the UK want “open and competitive markets where they can compete fairly and effectively”.
Cardell added that while the regulator isn’t “anti-digital mergers”, it acknowledges that there has been “historic underenforcement when it comes to merger control, particularly in tech”.
Reacting to Cardell’s announcement, Tim Wright, tech and AI partner at law firm Fladgate, said he was concerned that the CMA’s review of the UK AI market “will not cover a number of other issues raised by these foundation models such as copyright and intellectual property, online safety, data protection and security.”
Wright added: “It remains to be seen to what extent UK regulators feel it necessary to flex their muscles in these areas.”
The announcement from the CMA follows similar statements made by the EU and the US regarding regulation of artificial intelligence, with the impact of generative AI development on business and society continuing to grow.
Microsoft chief economist warns of possible “damage” by AI
Speaking to the World Economic Forum (WEF), Microsoft chief economist Michael Schwarz joined AI leaders such as Geoffrey Hinton — the so-called ‘godfather of AI’ who recently left Google — in acknowledging that generative AI could do harm in the hands of threat actors, reported The Telegraph.
Schwarz told Bloomberg: “It can do a lot of damage in the hands of spammers with elections and so on”, before stating that there is a need to regulate AI.
The admission from Schwarz comes after executives at Microsoft, Google and OpenAI met with US vice-president Kamala Harris to discuss the possible harms surrounding AI developments.