Speaking at OpenAI’s Dev Day event, Sam Altman covered many aspects of what’s happening right now with OpenAI’s products and other cutting-edge AI models, answering questions from Harry Stebbings, founder of 20VC.

One interesting aspect of what’s possible with new reasoning models and other systems is AI’s ability to code: throughout the interview, Altman kept returning to how much progress we can make toward no-code solutions. Referring to the “next turn of the model crank,” he suggested that newer models will steamroll parts of the industry, and that companies making tweaks around the margins are likely to lose out.

“We believe that we are on a quite steep trajectory of improvement, and that the current shortcomings of the models today will just be taken care of by future generations. And I would encourage people to be aligned with that.”

Altman suggested companies should focus on creating new products and services, not on fixing problems that may be obsolete within a year or less. One surprising thing that happened toward the beginning of this march toward better models, he said, is that companies were betting against the models themselves getting better. That trend, he said, has now inverted as people realize how capable the systems are and how rapidly they improve.

“I felt like 95% of people were betting against the models, and 5% of people betting for the models getting better,” he said. “I think that’s now reversed. I think people have … internalized the rate of improvement, and have heard (from) us what we intend to do. So it no longer seems to be such an issue, but it was something we used to fret about a lot.”

Agentic Artificial Intelligence: What Does It Mean?

Later in the interview, Altman turned to the nature of what’s called ‘agentic’ AI. We’re hearing a lot about this now as systems become able to perform tasks in human-like ways.

Altman defined agentic AI as something that a human can give a “long-duration task” to, letting it operate with minimal supervision.

He also gave a practical example: rather than just booking your reservation at a restaurant, he suggested these systems might be able to check with 200 or 300 restaurants to find the optimal fit for your diet, cuisine preferences, and dining time. Alternatively, such a system could become your “senior coworker,” one who can help reinvent business processes on the fly.

That led to questions about the replacement of human labor.

Much of this, he said, is speculation, because we don’t know exactly how things are going to shake out.

The Value of Training

Discussing the training of models, Altman admitted that it may be hard for some companies to get a return on investment, but he pointed to situations where the positive cumulative effect of multiple training runs will justify the investment for enterprises.

OpenAI, he noted, is fairly insulated, because it has ChatGPT with legions of users, so company leaders don’t have to worry about whether they’re getting enough bang for the buck out of their training or any other process.

Other companies, he said, may have to ground their decisions in hard analysis of the industry itself.

As for how to do this kind of planning, Altman offered the following, drawing on his own experience:

“How do you balance what has to happen today, or next month, with the long-range plans, … to execute in a year or two years, with build out (or) compute, or things that are more normal, like planning ahead enough for like office space in the city of San Francisco, … I think there was either no playbook for this, or someone had a secret playbook they didn’t give me for all of this, like we’ve all just sort of fumbled our way through this, but there’s been a lot to learn on the fly.”

He also enumerated some other problems that companies have mostly been able to conquer as advances continue: bad model behaviors, failed paradigms, intractable problems. Invoking the language of the Beatles, he suggested it has been a “long and winding road” to progress.

“There was definitely a time period (when) we just didn’t know (how we were) going (to) do that model,” he said of GPT-4.

But all of it, he said, seemed to be guided by something positive.

“We have a lot of people here who are excited to build AGI,” he said. “That’s a very motivating thing. But there’s a famous quote … it’s something like… the spirit of it is (like) ‘I never pray and ask for God to be on my side. … I pray and hope to be on God’s side.’ And there is something about betting on deep learning that feels like being on the side of the angels, and you kind of just … eventually, it just seems to work out.”

In discussing the heavy decisions that confront leaders in times of change, he mentioned the large number of what he called “51-49” decisions, close calls with no clear favorite, that tend to wind up on his plate. Altman mused about how he crowdsources input from a number of different people, and why that has worked well for him over the years.

On the issue of semiconductor supply chains, Altman said he’s worried, but it’s not his top worry; he characterized this hardware issue as being “in the top 10% of all top worries.”

The Biggest Worry in Tech, Plus Applications and Predictions

His top worry? The generalized complexity of systems.

“It feels like it’s all going to work out, but it feels like a very complex system,” he said.

In terms of promising future use cases, Altman suggested AI-enabled verticals, such as tutors, and applications that unlock fuller human potential, like an AI that “understands your whole life.”

And then there was his prediction for the future:

“In five years, it looks like we have an unbelievably rapid rate of improvement in technology itself,” he said. “You know, people are like, ‘man, the AGI moment came and went,’ … and we’re discovering all this new stuff, both in AI research, and also about all the rest of science. … And then the second part of the prediction is that society itself actually changes surprisingly little. An example of this would be that if you asked people five years ago if computers would pass the Turing test, they would say ‘no.’ And then if you said, ‘Well, what if an oracle told us this would happen?’ (they would say) it would somehow be this crazy, breath-taking change for society. And we did kind of satisfy the Turing test in a manner of speaking, of course, and society didn’t change that much. It just sort of went whooshing by. And that’s kind of an example of what I expect to keep happening, which is (that) scientific progress keeps going, and it will outperform all expectations in society, in a way that I think is good and healthy.”

Those are some of the highlights of this timely interview on models, on OpenAI’s progress, and on the progress of the industry, as we wait for the full rollout of o1, the release of Orion (probably next year), and everything else that has techheads all a-twitter. Meanwhile, we saw an o1 leak this week that brings new reasoning models further into the forefront of public attention, where, assuredly, they will stay for quite a while. The other big news right now is that many of these companies seem to have delayed further rollouts until Americans visit the polls, which is happening now. Watch this space.
