Geoffrey Hinton, a British-Canadian computer scientist hailed as the ‘godfather of artificial intelligence (AI)’ for his work on artificial neural networks and deep learning, has left his job at Google and spoken out about the dangers of rapidly advancing AI.
Hinton had worked part-time for tech giant Google since March 2013, when his company, DNNresearch Inc., was acquired. In an interview with The New York Times, Hinton said: “I console myself with the normal excuse: If I hadn’t done it, somebody else would have… It is hard to see how you can prevent the bad actors from using it for bad things.
“Look at how [AI] was five years ago and how it is now. Take the difference and propagate it forwards. That’s scary.
“The idea that this stuff could actually get smarter than people — a few people believed that. But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”
In the interview, he also expressed regret for his work and said that he had left Google so he could ‘speak freely’ about his concerns.
Today [2 May 2023], Atlas VPN also highlighted research showing that more than a third of AI experts believe AI could lead to a nuclear-level catastrophe within this century. The findings were drawn from Stanford’s 2023 Artificial Intelligence Index Report, released in April 2023.
Until now, Hinton’s decades-long academic career has been dedicated to developing the technology and principles that underpin modern AI.
One of the chief worries that prompted his resignation was that competition between massive technology companies has led to rapid and premature releases of new AI, which could put jobs at risk and fuel the proliferation of misinformation.
Jeff Dean, lead scientist for Google AI, said in a statement to US media: “As one of the first companies to publish AI Principles, we remain committed to a responsible approach to AI.
“We’re continually learning to understand emerging risks while also innovating boldly.”
Last month [March 2023], Elon Musk and more than 1,000 technology experts signed an open letter calling for a six-month pause in the development of AI due to potential risks to humanity and society.