Google Brain co-founder Andrew Ng has extensive AI leadership and academic experience, having previously led AI research as chief scientist at Chinese tech giant Baidu and served as an adjunct professor of computer science at Stanford University.
Discussing the risks of artificial intelligence, Ng said on Twitter that he sees no "meaningful risk" of AI driving humanity to extinction. He was responding to a statement from the Center for AI Safety (safe.ai) declaring that mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war, a statement cosigned by fellow AI leaders including OpenAI CEO Sam Altman and DeepMind co-founder Demis Hassabis.
Speaking with VentureBeat, Ng said that because generative AI is still in its early stages, the value it drives could be outpaced by that of supervised learning, a more established machine learning technique that, he noted, still has "tremendous momentum" behind it. However, Ng added that with generative AI projected to keep growing year over year, it could join the standard toolkit of AI developers within the next few years.