Has humanity reached its limits to understand and manage accelerating change?

Immediately following the reinstatement of Sam Altman as CEO at OpenAI, Vinod Khosla, the company’s first institutional investor, spoke with Bloomberg TV’s Ed Ludlow. Behind the placid face of the legendary investor – who has earned enormous wealth betting on some of Silicon Valley’s most iconic companies – one could sense both frustration and relief with what had transpired since Altman’s firing.

“I didn’t in my remote imagination think that something like this could happen or that the board would do this. There were some errant people on the board who misinterpreted their own religion around effective altruism. I think this was errant, shocking, surprising in every way … totally unexpected under any circumstance.”

That was the view of many people in November 2023 when an unexpected end-of-year debate unfolded about the future of AI. On one side were the so-called techno-optimists. On the other were the techno-pessimists, as framed in the minds of billionaire investors like Marc Andreessen, who earlier in the year had penned a combative “manifesto” on the topic. In the wake of the OpenAI debacle, the debate that played out in the media featured these two opposing forces. Few people with more moderate views spoke loudly enough to rise above the noise.

There’s something in Khosla’s commentary that’s been largely overlooked that helps to explain what happened. By training his sights on the “errant” behavior of the OpenAI board, Khosla suggested we might need to worry more about human hubris than robot rebellion. In his mind, the reaction of the board was irrational, religious (based on faith not fact). Never mind the robots; beware the errant humans. What happened with the board is 2001: A Space Odyssey in reverse. This time, the enemy isn’t HAL, the human-killing robot. As Pogo – the cartoon possum during the Cold War – famously declared, “the enemy is us.”

Martyrs and Messiahs

Is the moral of the story that “tech geniuses make bad governance decisions”? Perhaps. But a broader view of what happened at OpenAI suggests that human beings may be reaching their limits to manage complexity, and that’s why they’re behaving irrationally.

The stories we tell ourselves are the subject of most of my writing. And when those stories are of a religious, faith-based variety, I get particularly interested. In preparing for this piece, I spoke with a former colleague, Peter Hirshberg – who among other roles headed enterprise marketing for Apple, reporting to Steve Jobs – about the “messianic” tone that many leaders were striking. Let’s start with Andreessen himself, who early in his manifesto proclaimed in a Jesus-like voice:

“[T]echnology is the glory of human ambition and achievement, the spearhead of progress, and the realization of our potential. For hundreds of years, we properly glorified this – until recently. I am here to bring the good news. (emphasis added).”

No knock on Andreessen, some of whose contributions – including the first commercial web browser – have undeniably transformed society for the greater good. And it’s hard to know whether he was being playful or facetious. Unfortunately, his argument is complicated by the fact that he credits people like Filippo Tommaso Marinetti – who, as Vice noted, was “one of the architects of Italian fascism” – as a “patron saint” of techno-optimism. It’s not a good look for a Silicon Valley VC. Even giving him the benefit of the doubt, it’s not an action becoming of someone in Andreessen’s position, and it validates the ongoing story of “the hubris of the tech bros.”

In the meantime, says Hirshberg, OpenAI “was stacked with messianic people who believed they were saving the world.” There was talk of chief scientist Ilya Sutskever – who originally voted to oust Altman, but quickly changed his mind – making strange spiritual claims. In a November 19 article for the Atlantic, Karen Hao and Charlie Warzel reported on Sutskever leading a chant invoking the much feared yet greatly anticipated arrival of AGI, artificial general intelligence that can perform any task a human can.

“Anticipating the arrival of this all-powerful technology, Sutskever began to behave like a spiritual leader, three employees who worked with him told us. His constant, enthusiastic refrain was ‘feel the AGI,’ a reference to the idea that the company was on the cusp of its ultimate goal. At OpenAI’s 2022 holiday party, held at the California Academy of Sciences, Sutskever led employees in a chant: ‘Feel the AGI! Feel the AGI!’ The phrase itself was popular enough that OpenAI employees created a special ‘Feel the AGI’ reaction emoji in Slack.”

The spiritual and superhero superlatives kept coming. Upon Altman’s return to OpenAI, Wired wrote about his “second coming.” In film fantasy land, one journalist began casting the inevitable biopic (with Nicolas Cage as Elon Musk). And along with the messiahs came martyrs for the cause of the doomers. Helen Toner, the former OpenAI board member who helped lead the charge to boot Altman, got a harsh spotlight in a New York Times article about the chaos within the OpenAI board:

“Hours after Mr. Altman was ousted, OpenAI executives confronted the remaining board members during a video call, according to three people who were on the call. During the call, Jason Kwon, OpenAI’s chief strategy officer, said the board was endangering the future of the company by pushing out Mr. Altman. This, he said, violated the members’ responsibilities. Ms. Toner disagreed. The board’s mission was to ensure that the company creates artificial intelligence that ‘benefits all of humanity’, and if the company was destroyed, she said, that could be consistent with its mission.”

The idea that a massively financed for-profit could be the servant of a non-profit entity must have been shocking to investors, including Khosla and Microsoft, which owns nearly half the for-profit entity. But the elevation of Toner in early analyses of the board’s actions marked her as a martyr for a lost cause. Long gone from the board, her silence only underscores her rebel identity.

“According to a December 2022 survey by Pew Research, nearly four out of ten American adults believe we are living in ‘the end times’… apocalyptic thinking is alive and well. But it’s secular apocalyptic thinking that should concern us when thinking about AI.”

Apocalypse, Now and Then

Is the moral of the story that “tech geniuses make bad governance decisions”? Perhaps. A glance at the Rube Goldberg-inspired governance chart might persuade a lot of folks of that. The failure of the board to act in the interests of investors may have been the result of not replacing Musk and Hoffman when they left, respectively, in 2018 and 2023. Or is it more, as Fortune’s Maria Aspan argued, that “when nonprofits and for-profits clash, the one with the money usually wins”? The claim that capitalism always wins almost always resonates – with capitalists and with their opponents, whose grievance it confirms.

But a broader view of what happened at OpenAI suggests human beings may be reaching their limits to understand and manage accelerating change, and that’s why we’re behaving irrationally. Ray Kurzweil, the futurist, has written about our inability to grasp exponential change, the rate at which many modern technologies develop. This is a problem that extends beyond corporate leaders drunk with their newfound powers. It’s increasingly a problem for the people they’re supposed to serve.

According to a December 2022 survey by Pew Research, nearly four out of ten American adults believe we are living in “the end times.” When I first learned this, I recalled from my college studies how every 1,000 years, sectors of society – often the poorest and least educated – are possessed by the fear that the end is nigh. Scholars have labeled this phenomenon millenarianism. We experienced it not long ago, in a secularized sense, in the year 2000 with the Y2K scare. But historically, we can trace apocalyptic thinking between the millennia to differing phenomena, like the Native American Ghost Dance of 1890 (which I analyzed for my undergraduate thesis), Mormonism, and the New Testament, which prophesied a period of strife before the second coming of Jesus. As the Pew survey shows, religious apocalyptic thinking is alive and well today. But it’s secular apocalyptic thinking that should concern us when thinking about AI. Why? Because the number of people it touches is far larger.

Recently, a new phenomenon came to my attention. It goes by the name of “apocalyptic anxiety.” It has become the focus of countless articles by health and wellness professionals united around the belief that the recent growth of apocalyptic thinking can be explained by real-world causes. This loosely fits with what psychologists call “anticipatory anxiety.”

The impact is two-fold. First there’s the physical harm that anticipatory anxiety can bring: emotional numbness, muscle tension and pain, trouble sleeping. But then there’s the long-term impact on the human psyche. An article by Kari Paul in The Guardian opined that “experts say the fallout from powerful AI will be less a nuclear bomb and more a creeping deterioration of society.” Similarly, Nir Eisikovits, professor of philosophy at UMass Boston, wrote in a contributed article that the real existential threat of AI is in the “philosophical rather than apocalyptic sense. AI in its current form can alter the way people view themselves. It can degrade abilities and experiences that people consider essential to being human.”

There’s another legitimate concern. Can the fear of AI distract us from real and present dangers like climate change and the rise of military conflict in the Middle East? In a June 2023 article for Wired titled “Humans Aren’t Mentally Ready for an AI-Saturated ‘Post-Truth World’,” Thor Benson interviewed psychologist Larry Rosen, who “worries that AI will make people more reliant on technology. Humans like things to be as simple and easy as possible, to avoid stress, he says, so people might start automating every aspect of their life that they can.” He quoted Rosen: “I get concerned about the fact that we just blindly believe the GPS. We don’t question it. Are we just going to blindly believe in AI? … As it is, we’re overwhelmed. We’re so overwhelmed that we can’t make ourselves do a simple task and see it to completion. Anxiety is just going to ratchet up as we’re faced with this unknown thing in our world.”

So, the fear – a rational fear – is that AI may further lull humanity into a state of techno-dependence, robbing humans of one of the principal attributes of being human: agency. It conjures the memory of the 2008 animated film WALL-E, in which everyone in the future is as overfed and complacent as cattle.

The Prerogatives of Narrative

“Already cooler heads are prevailing. In the business world, leaders are forging straight ahead, but guided under the rubric of ‘responsible AI.’ Yet for the larger societal narrative, leadership has yet to emerge. How do we deal with the inevitable? Does the inevitable need to be told in either a utopian or dystopian narrative?”

Over the weekend, I saw that in a holiday issue of Scientific American, Daisy Yuhas had written about the “comforts of the apocalypse.” Yuhas quoted University of Minnesota neuroscientist Shmuel Lissek, who believes that “at its heart, the concept of doomsday evokes an innate and ancient bias in most mammals.” She continued:

“Lissek suspects that some apocalyptic believers find the idea that the end is nigh to be validating. Individuals with a history of traumatic experiences, for example, may be fatalistic. For these people, finding a group of like-minded fatalists is reassuring. There may also be comfort in being able to attribute doom to some larger cosmic order—such as an ancient Mayan prophecy. This kind of mythology removes any sense of individual responsibility.”

So, yes, it’s important to consider the therapeutic value of apocalyptic thinking. Lissek is not the first to suggest this – for many years, social scientists have written about the satisfaction doomers derive from processing a tale of good and evil. It might also explain the success of dramas like The Walking Dead franchise and the film Don’t Look Up. But digging a little deeper, we can better appreciate the upside to apocalyptic thinking by understanding that “end is nigh” tales aren’t just about destruction. They’re also about renewal. The apocalyptic view in the Dark Ages gave birth to the imagining of a renaissance. The apocalyptic view in the New Testament mapped the coming of the day of divine reckoning. It pays to understand the original meaning of the word apocalypse, from the Greek at the time of the writing of the New Testament: “to uncover, reveal, lay bare, or disclose.”

And therein lies the secular opportunity for this world – to uncover, reveal, and act on what we can do, once we more clearly understand the tangible opportunities and threats of AI. But are we up to the task? It’s often been quoted that “when a man knows he is to be hanged in a fortnight, it concentrates his mind wonderfully.” But what if a man had ten years, twenty years, thirty years? Would his thinking be more rational? Already cooler heads are prevailing. In the business world, leaders are forging straight ahead, but guided under the rubric of “responsible AI.” Yet for the larger societal narrative, leadership has yet to emerge. How do we deal with the inevitable? Does the inevitable need to be told in either a utopian or dystopian narrative?

Let us remind ourselves that to be human means to have agency. That should help us find the right course. Until then, beware the errant humans and the unintentional harm they can bring.
