While stating that next-gen chatbots like ChatGPT could “improve nearly every aspect of our lives”, OpenAI chief executive Sam Altman, discussing possible regulation, identified the technology and industry causing “significant harm” globally as his worst fear.
Altman insisted that generative AI could play a key role in tackling global challenges such as climate change, and in curing cancer.
However, concerns were raised before the Senate around the technology’s possible influence on elections, financial markets and the job market, as well as the creation of “counterfeit people” by threat actors.
On regulation going forward, Altman suggested a licensing scheme aimed not at what models are capable of today, but at future capabilities approaching artificial general intelligence, alongside safety standards and independent audits.
“The US Government might consider a combination of licensing and testing requirements for development and release of AI models above a threshold of capabilities,” he told senators.
In addition, OpenAI’s chief executive stated that whether consumers are interacting with a human or with an AI needs to be clearly disclosed.
Comparing possible adaptation to the presence of AI to the realisation that images can be Photoshopped, Altman said “this will be like that — but on steroids. The interactivity, the ability to really model [and] predict humans is going to require a combination of companies doing the right thing, government regulation and public education”.
IBM’s chief privacy and trust officer Christina Montgomery, also present before the Senate, said that mitigating AI harms “all comes back to transparency; disclosure of how we train AI and continuous [technology and organisational] governance”.
Cognitive scientist Gary Marcus, meanwhile, recommended measures similar to those enforced by the Food and Drug Administration (FDA), as well as funding for safety research.
“Harmful request”
When asked by the Senate what would constitute a “harmful request” for the GPT-4 model, Altman cited violent content that encourages self-harm, as well as some adult content.
In response, Senator Mazie Hirono pointed to the capability of generative AI tools to produce false but realistic-looking images, citing as an example one of Donald Trump being arrested by police.
Misinformation, along with bias and other risks, was recently revealed to be on the radar of a ‘red team’ hired by GPT-4 developer OpenAI.
The human touch
While the potential for AI to replace jobs was discussed during the Senate hearing, Jeremy Rafuse, vice-president and head of digital workplace at GoTo, pointed to the need to keep human employees involved in the training process.
“With the exponential rise of AI engines, public figures are calling for officials to ‘regulate before it’s too late’ over growing AI anxiety – will I be replaced by a robot? But we’re missing something here when questioning AI: the human touch,” said Rafuse.
“Humans have the innate ability to question when things aren’t quite right, taking active leadership in the way that their systems operate. Human support can offer empathy and emotional support to users who are frustrated, helping to build a stronger connection between users and the IT support team. It is only alongside human expertise that AI and advanced machine learning can run effectively.
“By training AI systems and incorporating human feedback, AI can improve its accuracy and responsiveness over time. This will bolster business IT capacities by reducing downtime and operating more efficiently.”