‘Godfather of AI’ resigns to warn about coming danger

One of the artificial intelligence industry’s top scientists announced Monday that he has resigned from Google so that he can warn the public about the dangers of AI.

Geoffrey Hinton earned the nickname “Godfather of AI” for his pioneering work on neural networks. In 2012, he and two of his graduate students created technology based on a “neural network” that enabled programs to analyze thousands of photos and learn to recognize common objects. Google bought the company the three founded for $44 million, and the technology has since been used to build things like autonomous cars, which can recognize highway lanes, street signs, and other objects.

On Monday, Hinton told The New York Times that he had resigned from his position at Google because of the threat AI now poses, adding that a part of him regrets his life’s work in the field.

“I console myself with the normal excuse: If I hadn’t done it, somebody else would have,” said the 75-year-old computer scientist. “It is hard to see how you can prevent the bad actors from using it for bad things,” he added.

This is not the first time Hinton has cautioned about AI. In March, he told CBS that the possibility of AI wiping out humanity is “not inconceivable.”

Hinton is also far from the only voice warning of the catastrophic impact AI may have on humanity.

In a poll of more than 700 AI experts conducted last year, about half said there is at least a 10% chance that AI could lead to human extinction.

In another poll, of 44 people working in AI safety, respondents warned there is a strong chance (30% or greater) that the future will be “drastically less” good than it could have been as a result of inadequate AI safety research.

The late physicist Stephen Hawking warned in 2014 that “[t]he development of full artificial intelligence could spell the end of the human race.”

His sentiment was echoed that same year by billionaire Elon Musk, who would go on to co-found the AI lab OpenAI, and who cautioned that “with artificial intelligence we are summoning the demon.”

“We need to be super careful with A.I. Potentially more dangerous than nukes,” Musk wrote.

Musk, along with Apple co-founder Steve Wozniak, was one of the tech industry leaders who signed an open letter in March calling for a moratorium on the development of powerful AI systems, a race they described as “out-of-control” and “dangerous.”

The letter notes the concern that “recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.”

“Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?” the letter asks.

“Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4 [OpenAI’s latest large language model, which powers ChatGPT]. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.”

AI researcher Eliezer Yudkowsky, who has spent decades studying artificial intelligence and founded the Machine Intelligence Research Institute (MIRI), responded that the letter does not go far enough, urging the industry to “shut it all down” and halt AI development indefinitely.

“Visualize an entire alien civilization, thinking at millions of times human speeds, initially confined to computers—in a world of creatures that are, from its perspective, very stupid and very slow,” Yudkowsky wrote in a Time article. “A sufficiently intelligent AI won’t stay confined to computers for long. In today’s world you can email DNA strings to laboratories that will produce proteins on demand, allowing an AI initially confined to the internet to build artificial life forms or bootstrap straight to postbiological molecular manufacturing.

“If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter.”

OpenAI CEO Sam Altman has reportedly compared his company’s ambitions to the Manhattan Project that created the atomic bomb, calling it a “project on the scale of OpenAI – the level of ambition we aspire to.” According to The New York Times, Altman is “certainly determined to see how it all plays out” and, as his mentor put it: “He likes power.”