As AI stakeholders continue to discuss how the technology should be regulated following the AI Safety Summit at Bletchley Park, Sam Altman told the FT that OpenAI aims to “raise a lot more over time” from Microsoft and other investors in pursuit of its vision for AGI development.
According to the OpenAI chief, “there’s a long way to go” in this journey, with surging development costs and training expenses to be managed.
Training costs in particular have kept OpenAI unprofitable, but Altman said revenue growth was healthy this year and believes the partnership with Microsoft will ensure “that we both make money on each other’s success, and everybody is happy”.
Altman recently announced a GPT store that will allow individual users to create their own customisable digital services, with the eventual aim of establishing a revenue-split model with popular creators, akin to Apple’s App Store.
GPT-5 is also in the pipeline, but the tech leader is yet to reveal an exact product release timeline.
Microsoft extended its investment partnership with OpenAI at the start of this year, putting $10bn towards accelerating AI computing and research development.
Widely cited as the next step in AI innovation, artificial general intelligence (AGI) refers to technology that surpasses human capabilities in performing tasks.
“The vision is to make AGI, figure out how to make it safe . . . and figure out the benefits,” Altman told the FT.
Extinction fears “unwarranted”, says Google Brain co-founder
Amid regulatory concerns that AI could eventually replace human occupations, Google Brain co-founder and former Baidu chief scientist Andrew Ng said big tech companies are to blame for stoking “unwarranted fears about human extinction”, The Times reported.
“I really, really struggle to understand how AI could possibly lead to human extinction,” said Ng, before going on to suggest that regulators are being encouraged to put rules in place that stifle innovation.
He added that regulators should take the “massive upside” of artificial intelligence into account, and consider a scenario where “people live longer, healthier, more fulfilling lives”.
Ng’s comments on AI development contrast with those of many leaders in the space, including Altman, who co-signed a statement comparing the technology’s risks to nuclear war and pandemics.
According to the Google Brain co-founder, “DNA sequences of highly infectious viruses can already be obtained and it has nothing to do with AI.
“GPT-3 [a predecessor of the model behind ChatGPT] has been around since 2020, and where are the actual harms? There are some minor ones, but the degree of potential harm is very overhyped.”
Ng went on to describe arguments citing possible human extinction as “frustrating, because trying to prove AI won’t make humans go extinct is trying to prove a negative.
“I can’t prove it, any more than I can’t prove that radio waves won’t help space aliens to find planet Earth and wipe us all out.”