In the afternoon portion of our summit in Davos, we had a very interesting discussion of the future of AGI (or call it something else if you like, as some of the panelists suggested… we’ll look at that) in which we had Yann LeCun, Meta VP and Chief AI Scientist, and Daniela Rus, the Director of our MIT CSAIL lab. We also had Connor Leahy, CEO of Conjecture, and Stuart Russell from Berkeley. The topic? Keeping AI Under Control (what a can of worms!)
First, though, there was a moment at the beginning where you got a sense of just how evolved AI already is, specifically in voice cloning, whose ramifications we’ve only begun to think through. The moderator, Max Tegmark (also at MIT), asked each of the panelists to say their name – and from those few syllables, he was able to generate some very elaborate voice clones of them giving opinions that were presumably made up!
After that, the panel started talking about how to control these very powerful technologies…
“Human intelligence is very specialized,” LeCun said. Instead of AGI, he added, we should be talking about “human-level AI” and, he suggested, we’re not anywhere near there yet. The process, he said, would be to get machines learning as efficiently as humans and animals.
“This is useful,” he said. “This is the future – but we need to do it right.”
“We do want to improve our tools,” Rus said. “We want to try to understand nature — the way forward is to start with a simple organism, and go from there.”
Some weren’t so quick to pave the way, at least not in the same time frame. Russell pointed out that there’s a difference between knowing and doing, speaking to our need to move cautiously.
“That’s an important distinction,” he said of the two principles engineers work from, adding that there should be limits on what we know, limits on what we do, and limits on how we turn what we know into technologies.
He gave the example of every person having an LLM in their pocket that can think in a human-like way. I was thinking: we would take advantage of that new opportunity, but what would it mean?
“Should everyone have that capacity?” he asked. “(And) is it a good idea to build systems that are more capable than humans?”
LeCun suggested it’s too early for these kinds of questions, saying we don’t currently have a “blueprint” for a human-level AI system. “It’s going to take a long time,” he said, comparing current discussion to debates about future technology in 1925.
When we have a blueprint for the technology, he suggested, we’ll also have a blueprint for control.
“Evolution built us with certain drives,” LeCun noted. “We can build machines with the same drives … you can set the goals for AI, and it will fulfill those goals.”
The idea that AI will somehow “take over humanity” he found “preposterous” and “ridiculous” – though that is exactly what’s keeping a lot of other people awake at night!
Meanwhile, Russell talked about a “defective methodology” that, if misapplied, could be catastrophic, and illustrated how some of these plans can go wrong.
“It becomes impossible to specify that objective correctly,” he said, painting a picture of a scenario where we might not be able to fully guide the process. “We are pushing ahead, yet we have absolutely no proposal for how to make these systems safe and beneficial.”
“What makes the technology useful is what makes it dangerous,” Leahy added, comparing AI to other technologies like nuclear weapons, or even bioweapons. “The best and the worst things can happen.”
LeCun responded that we can imagine all kinds of dystopian scenarios, but past technologies had prototypes that we were able to control, and AI may be the same.
“There are mechanisms in society for stopping the deployment of technology that’s really dangerous,” he said.
When Russell suggested, again, that it’s reasonable to look at the risks of AI, Rus agreed, but noted that some machine learning problems, such as the threat of bias in these systems, have already been addressed to some extent.
“There is really excellent progress,” she said. “I am… bullish about using machine learning and AI in safety-critical applications.”
Here’s an interesting part that occurred at the end, as we were preparing for the next session:
Panelists also individually called for new architectures, in response to Tegmark’s prodding.
LeCun talked about “objective-driven AI” with “virtual guardrails” that wouldn’t be open to hijacking by black hats.
Rus talked about “liquid networks” that, she suggested, have some good attributes, like being provably causal, interpretable, and explainable.
Leahy talked about “social technology” that doesn’t bypass the human-in-the-loop idea.
“The world is complicated,” he said. “This is both a political and a technological problem.”
I came away from this session thinking about the overall question: will we be able to harness AI to our advantage? And what happens if we can’t? These panelists each have a lot of experience and insight. It’s a question worth spending time on, and thinking about.