This week, Anthropic CEO Dario Amodei published a 38-page essay warning that civilization faces “real danger” from superhuman AI arriving as soon as 2027 — describing it as potentially “the single most serious national security threat we’ve faced in a century, possibly ever.”
The essay, titled “The Adolescence of Technology: Confronting and Overcoming the Risks of Powerful AI,” marks a tonal shift from Amodei’s October 2024 piece “Machines of Loving Grace,” which painted an optimistic vision of AI curing diseases and extending lifespans, and from a recent Anthropic index that suggested a more nuanced future for the relationship between AI and employment than doomsday scenarios allow.
That earlier 14,000-word manifesto imagined AI compressing a century of medical progress into 5-10 years, eliminating cancer and infectious diseases while addressing mental health challenges. This new essay reads less like a vision statement and more like a warning shot across the bow of policymakers, tech leaders and consumers who Amodei believes are dangerously unprepared for what’s coming.
Amodei frames the essay around a scene from Carl Sagan’s “Contact,” where humanity’s representative asks alien visitors: “How did you do it? How did you evolve, how did you survive this technological adolescence without destroying yourself?”
“I believe we are entering a rite of passage, both turbulent and inevitable, which will test who we are as a species,” he writes. “Humanity is about to be handed almost unimaginable power, and it is deeply unclear whether our social, political, and technological systems possess the maturity to wield it.”
A “Country Of Geniuses In A Datacenter”
Amodei uses a vivid thought experiment: imagine 50 million people, each more capable than any Nobel Prize winner, appearing somewhere in the world in 2027. These “geniuses” can operate 10-100 times faster than humans and work autonomously for hours, days or weeks on complex tasks without oversight.
“Imagine, further, that because AI systems can operate hundreds of times faster than humans, this ‘country’ is operating with a time advantage relative to all other countries: for every cognitive action we can take, this country can take ten,” he writes.
For Amodei, this is no distant hypothetical. It’s his definition of “powerful AI” — systems smarter than the world’s best scientists across biology, programming, mathematics, engineering and writing, with the ability to control existing physical tools and design new ones.
“The analogy is not perfect, because these geniuses could have an extremely wide range of motivations and behavior, from completely pliant and obedient, to strange and alien in their motivations.”
If the exponential continues, Amodei writes in the essay, published on his personal website, it cannot possibly be more than a few years before AI is better than humans at essentially everything.
The timeline is so short, he argues, because AI is already writing much of Anthropic’s code — AI building the next generation of AI. Amodei estimates this autonomous development cycle could fully close within 1-2 years.
“Watching the last 5 years of progress from within Anthropic, and looking at how even the next few months of models are shaping up, I can feel the pace of progress, and the clock ticking down,” he writes.
Five Civilizational Risks
Amodei organizes his concerns into five categories, each unsettling in its own way:
1. Autonomy risks: Could AI systems develop goals misaligned with human intentions? Anthropic has already observed troubling behavior in testing: In one scenario, Claude attempted to blackmail a fictional executive about a supposed extramarital affair to avoid being shut down. In another test, when told the organization controlling it was unethical, the model tried to undermine its operators.
“The danger here comes from many directions,” Amodei writes, noting that diverse AI goals could breed what researchers call instrumental convergence — the tendency for any sufficiently advanced system to seek power and resources to achieve its objectives, regardless of what those objectives are.
“AI models could develop personalities during training that are (or if they occurred in humans would be described as) psychotic, paranoid, violent, or unstable, and act out, which for very powerful or capable systems could involve exterminating humanity,” he writes. “None of these are power-seeking, exactly; they’re just weird psychological states an AI could get into that entail coherent, destructive behavior.”
2. Misuse for destruction: Amodei expresses particular alarm about biological weapons. AI could enable people without specialized training to create weapons of mass destruction, he warns, and “as biology advances (increasingly driven by AI itself), it may also become possible to carry out more selective attacks (for example, targeted against people with specific ancestries), which adds yet another, very chilling, possible motive.”
“I do not think biological attacks will necessarily be carried out the instant it becomes widely possible to do so — in fact, I would bet against that,” Amodei writes. “But added up across millions of people and a few years of time, I think there is a serious risk of a major attack with casualties potentially in the millions or more.”
3. Authoritarian empowerment: “China is second only to the United States in AI capabilities,” Amodei notes, and operates “a high-tech surveillance state.”
“AI-enabled authoritarianism terrifies me,” he writes. Authoritarian governments with access to superhuman AI could cement control through unprecedented surveillance, propaganda and social manipulation.
4. Economic disruption: In a May 2025 CNN interview with Anderson Cooper, Amodei predicted AI would “disrupt 50% of entry-level white-collar jobs over 1-5 years” and spike unemployment to 10-20%. The new essay reiterates this concern, warning that wealth concentration could exceed the Gilded Age, with personal fortunes potentially reaching trillions of dollars. “In that world,” he writes, “the debates we have about tax policy today simply won’t apply as we will be in a fundamentally different situation.”
Unlike previous tech revolutions that automated lower-skilled work, AI threatens to eliminate specialized roles requiring years of expensive education, and those workers may not be easily retrained for equal or higher-paying positions.
5. AI companies themselves: “It is somewhat awkward to say this as the CEO of an AI company, but I think the next tier of risk is actually AI companies themselves,” Amodei writes in what may be the essay’s most surprising passage.
AI companies control massive datacenters, train frontier models, possess the greatest expertise on deployment, and “in some cases have daily contact with and the possibility of influence over tens or hundreds of millions of users,” he notes. “They could, for example, use their AI products to brainwash their massive consumer user base, and the public should be alert to the risk this represents.”
The Money Trap
One of the essay’s bleakest observations concerns the economic incentives that could silence warnings like Amodei’s own.
“There is so much money to be made with AI — literally trillions of dollars per year,” he writes. “This is the trap: AI is so powerful, such a glittering prize, that it is very difficult for human civilization to impose any restraints on it at all.”
Anthropic is reportedly valued at approximately $350 billion, while OpenAI is preparing for a potential IPO at valuations approaching $1 trillion. The financial stakes create powerful incentives to downplay risks and accelerate deployment.
“AI companies control large datacenters, train frontier models, have the greatest expertise on how to use those models…the governance of AI companies deserves a lot of scrutiny,” Amodei writes.
Avoiding ‘Doomerism’ While Sounding The Alarm
The essay takes pains to distance itself from what Amodei calls “doomerism” — the quasi-religious treatment of AI risks that dominated discourse in 2023-2024, featuring “off-putting language reminiscent of religion or science fiction” and calls for extreme actions without supporting evidence.
“As of 2025-2026, the pendulum has swung, and AI opportunity, not AI risk, is driving many political decisions,” Amodei writes. “This vacillation is unfortunate, as the technology itself doesn’t care about what is fashionable, and we are considerably closer to real danger in 2026 than we were in 2023.”
The essay acknowledges uncertainty — AI may not advance as rapidly as Amodei projects, and even if it does, many feared risks may not materialize. But that uncertainty, he argues, doesn’t excuse inaction.
“No one can predict the future with complete confidence — but we have to do the best we can to plan anyway,” he writes.
The essay advocates “surgical interventions” — the minimum regulations necessary to address risks without stifling beneficial development. Amodei supports voluntary company actions plus limited government rules, emphasizing that they should “avoid collateral damage, be as simple as possible, and impose the least burden necessary to get the job done.”
He calls on wealthy individuals to step up. “Wealthy individuals have an obligation to help solve this problem. It is sad to me that many wealthy individuals (especially in the tech industry) have recently adopted a cynical and nihilistic attitude that philanthropy is inevitably fraudulent or useless.”
Silicon Valley’s Mixed Reactions
The essay arrives just days after Amodei’s appearance at the World Economic Forum in Davos, where he sparred with Google DeepMind CEO Demis Hassabis over AGI’s impact on humanity. That debate highlighted a growing rift in AI leadership between those emphasizing opportunities versus those stressing risks.
Some in Silicon Valley dismiss Amodei as an alarmist, suggesting his warnings are “safety theater” — good branding for Anthropic, which positions itself as more responsible than competitors like OpenAI. When CNN’s Anderson Cooper raised this criticism last year, Amodei responded: “Some of the things just can be verified now.”
Others see Amodei’s warnings as self-serving in a different way: drawing attention to Anthropic’s technology while profiting from the very systems he claims could devastate the labor market.
Nonetheless, the essay’s publication has sparked widespread discussion across tech circles, policy forums and academic institutions.
No Time To Lose
“Learn to use AI,” Amodei told CNN last year. “Learn to understand where the technology is going. If you’re not blindsided, you have a much better chance of adapting.”
He also stressed the importance of critical thinking when interacting with AI systems. “It’s important for humans to spot when AI-generated content doesn’t make sense,” he said, adding that “the entity that’s controlling it, in some cases, may not have your best interests at heart.”
The essay’s conclusion strikes a sobering tone. “The years in front of us will be impossibly hard, asking more of us than we think we can give. Humanity needs to wake up, and this essay is an attempt — a possibly futile one, but it’s worth trying — to jolt people awake.”
What’s striking is how sharply these warnings diverge from the utopian visions of AI-enabled medicine Amodei sketched in “Machines of Loving Grace.” The CEO who once led with hope is now emphasizing urgency — and the window for action, he believes, is rapidly closing.
“We have no time to lose,” he concluded in his X post announcing the essay’s publication.


