Only a handful of scientists on Earth ever earn the distinction of winning a Nobel Prize, a once-in-a-lifetime achievement forever etched into the annals of history.

That made it all the more extraordinary that Geoffrey Hinton spoke for barely more than a minute before trashing the CEO of OpenAI during an ad hoc press briefing convened in honor of his award.

Running on just two hours of sleep, the clearly humbled computer scientist said he had no idea he was even nominated for the award. After thanking two of his chief collaborators over the years—Terry Sejnowski and the late David Rumelhart, both of whom he called “mentors”—Hinton went on to acknowledge the role his students at the University of Toronto have played in helping realize his life’s work.

“They’ve gone on to do great things,” he said on Tuesday. “I’m particularly proud of the fact that one of my students fired Sam Altman. And I think I better leave it there.”

Nearing first anniversary of boardroom coup

Hinton was referring to Ilya Sutskever. The former chief scientist at OpenAI joined Helen Toner and two others on the controlling nonprofit board to sack their CEO in a spectacular coup last November. Sutskever quickly came to regret his role in plunging OpenAI into crisis, and Altman was reinstated within days.

Hinton and Sutskever had teamed up with Alex Krizhevsky in 2012 to create an algorithm that could identify objects in the images it was fed with a degree of accuracy unheard of at the time. Dubbed “AlexNet,” it is often referred to as the Big Bang of AI.

Often called one of the godfathers of artificial intelligence, Hinton praised the work of his peers Yoshua Bengio and Yann LeCun before making repeated self-deprecating remarks. These included admitting that as a young student he left the study of physics—the field in which he was recognized by the Nobel committee—since he couldn’t handle the math.

The news of Hinton’s award comes just weeks before the first anniversary of Altman’s brief, stunning, and ultimately unsuccessful ouster—as well as the second anniversary of the launch of ChatGPT at the end of November 2022.

OpenAI’s generative AI chatbot sparked a wave of interest in the technology, as the broader public began to realize for the first time that machines might surpass mankind in intellect within a generation.

“Quite a few good researchers believe that sometime in the next 20 years AI will become more intelligent than us and we need to think hard about what happens then,” Hinton said on Tuesday.

Concerns over the safety of artificial intelligence

Altman is a controversial figure in the AI community. He has been dubbed a liar by ex-OpenAI board member Helen Toner and, according to a since-departed team leader, starved his AI safety team of resources.

Altman is currently looking to shed OpenAI’s nonprofit status as he races to monetize its technology, creating deep divisions within the organization. This has sparked an exodus of the company’s researchers focused on aligning artificial general intelligence with the interests of humanity as Earth’s still-dominant species.

Asked about the disparaging remark he made toward Altman at the very beginning of the briefing, Hinton explained his reasoning.

“Over time it turned out that Sam Altman was much less concerned with safety than with profits,” he said, “and I think that’s unfortunate.”

Hinton calls for urgent research into AI safety

Luminaries like Hinton, 76, fear that putting profit over ethics is inherently dangerous at the current juncture.

It is already difficult for scientists to predict how today’s most advanced AI models, with their trillions of parameters, actually arrive at their outputs. Effectively they are becoming black boxes, and once that happens it becomes ever harder to ensure humans maintain supremacy.

“When we get things more intelligent than ourselves, no one really knows whether we’re going to be able to control them,” said Hinton, who pledged to devote his efforts to advocating for AI safety rather than spearheading frontier research.

This risk from unknown unknowns is why California’s state legislature passed an AI safety bill that was the first of its kind in the United States. Influential Silicon Valley investors like Marc Andreessen lobbied heavily against the bill, however, and it was ultimately vetoed by Governor Gavin Newsom last month.

Asked about the potentially catastrophic risks posed by an out-of-control AI, Hinton admitted there was no certainty.

“We don’t know how to avoid them all at present,” he said. “That’s why we urgently need more research.”
