As Sam Altman mentioned in his recent talks at MIT and Harvard, it is a uniquely interesting time for entrepreneurs and anyone else drawn to the power of AI.
That made us think – why not use AI to analyze Altman’s remarks, and present the findings?
So I put excerpts from both of his lectures into his own tool, ChatGPT, to see what came out.
First, I asked ChatGPT to analyze the transcripts of both talks and report on various aspects of what he said at the two schools.
Here's part of what came back (in addition to longer sets of points and key themes from each talk):
Content:
- At Harvard, Altman emphasizes the unpredictable nature of AI progress and the need for safe and aligned research. He highlights OpenAI’s focus on developing AI that is “pretty smart” without constant human oversight and its mission to transform education.
- At MIT, Altman focuses on AI’s integration into society, comparing it to past technological revolutions. He stresses the importance of startups building enduring businesses and discusses OpenAI’s efforts in developing more capable AI systems while prioritizing safety. He also highlights the potential of AI to accelerate scientific discovery.
Overlap:
- Both talks underscore the importance of AI’s transformative potential and the need for safety in its development.
- Altman encourages young entrepreneurs and researchers to pursue their goals with conviction and to take risks in the rapidly evolving AI landscape.
Now, in “Content,” we see that ChatGPT puts more emphasis on Altman’s cautionary remarks in the Harvard talk. In its summary of the MIT talk, that point is relegated to the second-to-last sentence, with a brief mention of “safety.”
In “Overlap,” the AI notes that both talks included thoughts on “the need for safety in (AI) development.”
Keep in mind that I heard the full audio of both talks, and I agree with ChatGPT that Altman said a bit more about AI safety at Harvard. But in boiling things down to the level ChatGPT delivered, we may be missing some of the nuance in how Altman addressed the need for caution, deliberation, regulation, and so on.
In fact, if I have one overall criticism of the ChatGPT approach, it’s oversimplification. That, and its tone and style of writing.
In other words, in terms of content, there was so much more to what Altman said! Quotes, brilliant insights, and other material that deserve to see the light of day (check out my write-ups on each segment of both talks).
Also, with all due respect to ChatGPT, this section seemed sort of useless:
Delivery:
- At both events, Altman delivers his insights with a measured and deliberate speaking style, emphasizing key points with pauses.
- He engages with the audience, acknowledging their aspirations and concerns.
Yes, presumably, he did those things. And yes, from what I heard, that’s a fair characterization. I don’t know what ChatGPT would have expected: hectoring the audience? Rapid-fire monologues?
It’s almost a little humorous.
The final section, “Contrast,” was perhaps more salient:
Contrast:
- The Harvard talk delves more into the specifics of OpenAI’s research approach, emphasizing language models and robotics, while the MIT talk focuses on AI’s broader societal implications and its impact on startups and scientific discovery.
- At Harvard, Altman discusses the future of education and the importance of AI proficiency for students, whereas at MIT, he talks more about the potential job impacts of AI and the need for balanced regulations.
Here, ChatGPT gets high marks for accuracy; all of that is mostly what I heard, too (for example, see our piece on the ChatGPT Origin Story). Where I would give it less of a gold star is, again, in tone. Its delivery is kind of bloodless (which makes sense for an AI with no actual blood, or brain), and it clearly likes the word “delve.” Some of the report is also pretty nebulous: a focus on “AI’s broader societal implications and its impact on startups and scientific discovery” begs for explication. What did he actually say about that?
So, if you want the bird’s-eye view, balanced with precision and delivered in perfect “King’s English,” this article was for you. If you want a human response, just look through the blog feed.