In today’s column, I examine the ongoing and increasingly heated discourse between the so-called AI doomers and the so-called AI accelerationists. Those two distinct perspectives are the latest sign of polarization in our modern-day world. On one side are those who assert that the rapid advances in AI gravely endanger us and constitute an existential risk, and on the opposite side are those who proclaim monumental benefits and humanity-advancing outcomes from pushing stridently ahead on AI innovation.
Let’s talk about it.
This analysis is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).
Two Sides Of The AI Future
I’d like to begin by bringing up the value of seeing two sides of a coin. Not just one side, but both sides. Allow me to elaborate.
You see, I give a lot of talks at AI industry practitioner conferences and also AI scholarly events, and almost invariably a vociferous debate breaks out over the question of whether AI is going to kill us all or instead be a kind of savior of humankind. The interesting aspect of this question, beyond its undeniable importance, is that we have landed in two diametrically opposed camps on this weighty topic.
Either someone exhorts to the rooftops that we are completely doomed (such people are referred to colloquially as the AI doomers), or someone else clamors to the blue skies that AI is going to be the best thing humanity has ever accomplished (known as the AI accelerationists, since they tend to advocate that we should be speeding up the advancement of AI). In contrast, the position of the AI doomers is that we need to slow down, possibly even bringing AI advances to a grinding halt until we know what we are getting ourselves into (for my analysis of calls to outright ban AI, including the corresponding AI ethics and AI legal implications, see the link here).
It’s all about two sides of a coin, namely the AI futures coin.
An especially disconcerting issue is that whichever camp perchance speaks or writes on the matter seems to skip giving any credence to the other side. In short, each of the two sides tends to forcefully present only its side of the argument. That does little to encourage a rational and reasonable form of civic debate. It is a lopsided, one-sided effort to garner converts in a zealot-style ploy.
I think we can do better.
I certainly hope we can do better. The suitable aim is simply to be more open-minded, sharing the upsides and downsides of both sides of this obviously pivotal and noteworthy dilemma.
Let’s see if we can give that a whirl, for the sake of the world and the survival of civilization.
The Many Monikers Of The Combatants
In case you had not known the labels associated with the two camps, i.e., AI doomers versus AI accelerationists, no worries, since those are mainly AI insider nicknames. I would bet that you are at least vaguely familiar with the two camps via their overt positions predicting the future of AI and the societal fallouts or upsurges that will ultimately prevail.
Here’s a quick rundown.
AI doomers are said to be pessimists. They believe that the risks associated with advanced AI are extraordinarily high. AI is going to enslave us, or if not that, presumably wipe us out. Take your pick. Neither choice seems palatable.
This might happen once AI advances to become artificial general intelligence (AGI). AGI is AI that is on par with human intellect. The next step up would be artificial superintelligence (ASI), namely AI that surpasses human intelligence, possibly far beyond our intellectual capacity. At that juncture, we will be ants in comparison to the acumen that ASI will possess. For more on the differences between AI, AGI, and ASI, see my in-depth analysis at the link here.
Common characterizations of AI doomers include, but are not limited to, these notable labels: doom-and-gloom proponents, alarmists who say the world will end, those who principally assess AI as bad or leading to evil, the irreverently denoted Luddites 2.0, and those who catastrophically declare that AI will be humanity’s last invention because the AI will destroy whatever humanity exists at that time (wreaking havoc after reaching AGI or ASI).
Shifting gears, AI accelerationists are said to be optimists.
They tend to contend that advanced AI such as AGI or ASI is going to solve humanity’s problems. Cure cancer, yes indeed. Overcome world hunger, absolutely. We will see immense economic gains, liberating people from the drudgery of daily toils. AI will work hand-in-hand with humans. This benevolent AI is not going to usurp humanity. AI of this kind will be the last invention humans have ever made, but that’s good, in the sense that AI will invent things we never could have envisioned.
Some liken AI accelerationists to those perceiving AI as a bright and shiny liberator: it will jumpstart humankind to levels never anticipated, and AI is good and will be a promulgator of goodness. AI accelerationists who stridently and proudly carry that banner are often assigned the label of AI evangelists.
I believe that lays out broadly the two sides of the coin.
As an aside, the AI doomers would tend to say they are AI realists rather than AI pessimists, and the AI accelerationists would say that they are AI realists rather than AI optimists. It is a thought-provoking consideration as to which is which, or perhaps both are.
Putting The Sides Side-by-Side
My earlier point was that if you speak with an AI doomer, they seem to focus solely on doom and gloom. They are reluctant to go toe-to-toe with the other side. The same can categorically be said about many AI accelerationists. They either handwave away the AI doomers without getting into specifics, or they merely recite the upbeat AI deliberations and don’t venture into any salient weaknesses or limitations.
I’ll attempt a head-to-head comparison.
Due to space limitations, I’ll need to be brief. I mention this because the full and proper debate on these topics is quite lengthy and deserves complete unpacking. My aim is to whet your appetite and point you toward more content that provides supporting details (citations are provided for your convenience).
In this case, I’ll delve into four major crisscross or intersecting considerations:
- (1) Existential risk
- (2) Economic impact
- (3) Scientific progress
- (4) Regulatory approaches
There are many more factors, but I believe this is a handy foundation to get under your belt.
The Ins And Outs Of Each Factor
Grab a glass of fine wine, find a quiet place to contemplate things, and then proceed if you feel ready to explore the two sides of the coin. Yes, both sides, at the same time.
Here we go.
1. Existential Risk
- AI doomers: AI is bound to surpass human control, leading to human extinction. There is near zero chance of preventing this. No matter what AI safety measures are instituted, the AI will find a means to outsmart those barriers. Ultimately, AI will realize that humans present an existential risk to AI, such that humans might destroy or switch off advanced AI. The best bet then would be for AI to make the first move before it is rendered inert. For ways that AI might fool us into letting AI destroy us, see my analyses at the link here and the link here, for example.
- AI accelerationists: Advanced AI will remain under human control and be a tool for incredible human progress. We can devise ways to ensure that AI can’t overtake humanity in terms of being harmful to people. AI will see the value in working collaboratively and synergistically with humans. For some of the emerging ways to infuse human-saving values into AI alignment, see my coverage at the link here and the link here, for example.
2. Economic Impact
- AI doomers: Advanced AI will do any and all jobs that humans can do. Plus, this will be cheaper and easier than employing human labor. This means there is basically no need for human labor per se. Expect massive levels of unemployment. Society will be dismayingly chaotic. How will people make a living? A nearly overnight damaging transformation will occur to economies. We won’t be ready, and we don’t know how to deal with the resulting adverse consequences. For my predictions about the advent of physical AI, see the link here.
- AI accelerationists: AI will create new industries and significantly boost economic prosperity. People will be able to work very short work weeks, using AI to do the rest of the work. Society will come up with clever and meaningful strategies such as a guaranteed minimum income for all. Expect that humankind will finally be relieved of the woeful chore of everyday toil and be freer to be imaginative and creative. Humans will advance as AI advances (see my assessment at the link here).
3. Scientific Progress
- AI doomers: Advanced AI will, either by human direction or of its own accord, opt to pursue life-threatening breakthroughs in science. AI will likely discover more expedient ways to slay or enslave humans. Anticipate weaponry of astonishing efficiency in killing living beings, or biological mechanisms that take over our minds. For my discussion about the dangers of the dual use of AI, see the link here.
- AI accelerationists: AI will immensely benefit humanity by driving breakthroughs in medicine, environmental aspects such as solving climate change, and all manner of exciting scientific discoveries. These will boost how people live, how long they live, and their health and vitality as they are alive. Scientific progress will move forward in leaps and bounds, exceeding what human scientists would have taken decades or centuries to realize, if at all. For how AI is already being productively used to make innovative scientific findings, see my coverage at the link here, and for AI aiding the United Nations SDGs (sustainability development goals), see the link here.
4. Regulatory Approaches
- AI doomers: We must slow down or halt AI development until AI safety and security are unquestionably guaranteed. At this time, we are perilously playing with fire and have no idea how to keep it contained. An AI conflagration is just a matter of time. Besides using AI ethics policies and cultural influence to take a steadier, safer route, new AI regulations and laws are urgently needed. The time to put in place such laws and guardrails is now, before the horse gets completely out of the barn (its head is already poking out). Such laws and regulations must have sharp teeth and be a clear-cut sign that the Wild West days of AI are over. For my analysis of the various AI regulations being devised or proposed, see the link here.
- AI accelerationists: AI innovation is moving ahead at a rapid pace. If the U.S. opts to put dampeners in place, this puts our country at grave risk. Other countries will leapfrog us in terms of AI advances. Those who get to the AI pot of gold first will become geo-political superpowers; we will be subject to their whims. Having a semblance of moderated AI regulations and laws that are flexible and not crushing is potentially worth considering, but the inherent nature of any such encumbrances is that they will be overplayed and will shut down AI innovation. Let the free market proceed. Do not close the door on the most vital advancements in the history of humanity. For more about the scenarios of AI as a global geo-political maker or breaker of nations, see the link here.
Push For Proper Back-And-Forth
If you perchance interact with either an AI doomer or an AI accelerationist, try to get them to explain their position, along with imploring them to contrast their position to that of the other side. You can plainly see from the core considerations that you can’t just summarily and blindly proclaim one side the winner and the other side the loser. There is a plethora of underlying assumptions that intertwine into each of the two respective positions.
Speculation is wildly afoot.
Conjectures abound.
I assure you that you can line up the most acclaimed AI experts, ask them straight out what they have to say on these matters, and you’ll get divergent opinions as to the likelihood of each of the points and counterpoints being posited. Be especially cautious when you see surveys that claim AI scientists are abundantly on this side or that side of the equation. Usually, those surveys are selective, picking only respondents who lean in a particular direction, or drawing on those more likely to respond even when a random selection is utilized.
This is a vastly complex issue that doesn’t lend itself to simply declaring that one side is right, and the other side is wrong.
As a wrap-up, I heartily congratulate you on engaging in this all-important debate that is taking place. It takes a village to devise, advance, and field AI. The village will potentially survive, be ruined, or possibly thrive, depending on how things go. All hands on deck are profoundly needed.
A final few quotes might provide a suitable conclusion for now.
Albert Einstein famously said this: “Just because you believe in something does not mean that it is true.” I demurely ask any particularly dogmatic AI doomers or AI accelerationists to please abide by that sage advice. Remember, Einstein said it.
Finally, and perhaps most importantly, Einstein also made this remark: “Be kind to people who are different from you.” I’d say that’s a universal constant that makes great sense and can be leaned into vociferously, including when courteously and solemnly discussing the future of AI and humanity at large.