Alpha Leaders
Innovation

As Davos & India Celebrated AI, Paris Sounded The Alarm On AI Safety

By Press Room · 28 February 2026 · 11 Mins Read

The most important AI governance meeting of 2026 was not in Davos. It was not in Mumbai. It happened in Paris, in a room most business leaders have never heard of, and what was said there should have landed on every board agenda going forward.

While executives networked in Switzerland and applauded India’s AI ambitions, more than 800 researchers from 65 countries gathered at UNESCO House for IASEAI’26, the second annual conference of the International Association for Safe and Ethical AI. They were not there to celebrate. They were there to name, precisely and on the record, what is already going wrong, and why the governance responses being proposed may not be adequate for the problem they are meant to solve.

What followed was the most honest three days in AI this year.

What Is IASEAI And Why Does It Matter?

IASEAI was born from the 2023 AI Safety Summit at Bletchley Park, England, when a steering committee that included many of the leading minds in AI, among them Stuart Russell, Geoffrey Hinton, Yoshua Bengio, Max Tegmark, and Kate Crawford, concluded that the world needed a permanent institution, not occasional summits, to pace AI development alongside capability growth. Mark Nitzberg, a researcher and AI governance architect who had spent years arguing that safety work required institutional infrastructure rather than conference declarations, became a co-founder and now serves as Interim Executive Director. IASEAI formally incorporated as a nonprofit in November 2024.

The choice of Nitzberg as the organization’s operational leader is itself a signal about what IASEAI is trying to be. He is not a celebrity researcher. He is a builder, someone focused on the unglamorous work of turning expert consensus into durable institutions. His background spans AI research, policy, and the organizational design questions that most safety conversations skip over entirely: who decides, who verifies, who is accountable when something goes wrong, and what mechanisms exist to enforce any of it.

IASEAI’s inaugural 2025 conference, held at OECD headquarters, produced a ten-point Call to Action urging binding safety standards and global cooperation. The move to UNESCO headquarters in 2026 signaled both growing institutional legitimacy and escalating urgency.

Three weeks before the conference, the Second International AI Safety Report, led by Turing Award winner Yoshua Bengio and authored by more than 100 experts from over 30 countries, confirmed the central paradox driving Nitzberg’s work: general-purpose AI can now solve graduate-level mathematics, write production-grade software, and design proteins. Yet the same models still hallucinate, and reasoning breaks down across multi-step processes. Powerful enough to transform industries. Unreliable enough to cause catastrophic failures. That paradox defined three days of debate.

What Are The Biggest AI Safety Risks In 2026?

The conference organized sessions around themes that read like a threat assessment: alignment and value learning, agentic safety, control and containment, AI in warfare, interpretability, and the future of work. The inclusion of “agentic safety” as a standalone track was the most significant signal. As AI systems evolve from chatbots into autonomous agents that browse the web, execute code, make purchases, and coordinate with other agents, the safety challenge changes fundamentally. It is no longer about filtering offensive outputs. It is about preventing systems that act in the world from acting in ways we did not intend and cannot reverse.

That shift is not hypothetical. In February 2026 alone, OpenAI recruited OpenClaw founder Peter Steinberger to accelerate its work on personal AI agents, Alibaba launched Qwen3.5 explicitly for the “agentic AI era,” and DeepSeek V4 arrived as the expected sequel to the January 2025 release that erased $600 billion from Nvidia’s market cap in a single day. The AI arms race is no longer between companies. It is between nations. And the agents being deployed grow more autonomous by the week.

What Did The IASEAI Researchers Actually Find?

Stuart Russell opened the conference by advancing his framework for “provably beneficial AI,” systems designed with built-in uncertainty about human objectives so they defer to human judgment rather than pursue fixed goals without constraint. His metaphor was stark: humanity’s current AI trajectory resembles “everyone in the world getting onto a brand-new kind of airplane that has never been tested before. It’s going to take off, and it’s never going to land.” The difference between 2025 and 2026, he noted, is that the plane is now accelerating.

Geoffrey Hinton compared the moment to early climate change disputes, where scientific consensus eventually emerged, but only after years of costly inaction. The cost of delay is not linear. It compounds alongside the capability curve.

The most commercially urgent finding came from Matija Franklin, a Google DeepMind researcher whose work on AI manipulation was incorporated into the EU AI Act. His paper on “Virtual Agent Economies” documents the emergence of a vast, spontaneous agent economy in which autonomous AI systems already transact, negotiate, and coordinate at scales and speeds beyond human oversight. No one designed it. No one governs it. And no major company has fully examined what it means when its AI agent makes a commitment its legal team would never have approved. His DeepMind co-author Iason Gabriel, named alongside Stuart Russell by TIME as one of the 100 most influential people in AI, extended that analysis to the manipulation and inequality risks that emerge when millions of AI assistants interact with each other on behalf of users, across systems no governance framework was built to address.

The most sobering session came from Zuzanna Wojciak of WITNESS, the human rights organization that defends the evidentiary value of authentic documentation. Her point cut through every technical debate in the room: deepfakes are not primarily a technology problem. They are an evidence problem. When perpetrators can dismiss authentic footage of human rights abuses as AI-generated, and when detection tools fail on non-facial content like conflict zones, the infrastructure of accountability itself is under attack. That argument reaches well beyond human rights documentation. It extends into courtrooms, boardrooms, and every organization that relies on verified information to make decisions.

Has The United States Abandoned AI Safety Leadership?

The question was not stated from the main stage at IASEAI’26. It did not need to be. The answer was visible in the empty seats.

The United States sent no meaningful delegation. While European institutions, Asian governments, and civil society organizations from more than 65 countries debated binding safety frameworks and whistleblower protections, Washington was largely absent. Conference participants who spoke privately described the US posture as a strategic choice, not a scheduling conflict.

That choice is legible in the policy record. The Biden administration’s 2023 executive order on AI safety established voluntary commitments from leading developers. The current administration revoked it in January 2025, replacing it with a framework explicitly prioritizing “American AI dominance” over safety coordination. A December 2025 executive order then proposed preempting state AI laws, creating what constitutional scholars describe as a deliberate vacuum: federal law too weak to constrain the industry, state law too fragmented to fill the gap, and international frameworks dismissed as constraints on competitiveness.

Representatives from allied nations were notably direct in off-record conversations: they are building safety frameworks without the United States, and they expect those frameworks to become the de facto global standard by sheer market weight, regardless of what Washington does. Nitzberg, who has engaged with policymakers across multiple governments in his work building IASEAI, has argued consistently that governance gaps created by one major power do not stay empty. They get filled by whoever shows up.

When The Governance System Held. Once. By Luck.

The sharpest evidence of where US AI policy actually stands came from Anthropic. CEO Dario Amodei recently disclosed that the Department of War had demanded the removal of two specific safeguards from Claude as a condition of continued government contracts: the capability to enable mass domestic surveillance and the capability to power fully autonomous weapons without human oversight. Anthropic refused. President Trump has now banned Anthropic from government systems, and the Pentagon plans to designate Anthropic a “supply chain risk,” a label previously reserved for adversary nations, while simultaneously calling Claude essential to national security.

The governance system held once, for one company, under unusual circumstances. Anthropic could refuse because it had the financial runway, the public profile, and the founding mission to absorb the political cost. Most AI companies operating on government contracts have none of those things. The question that went largely unasked in press coverage: how many companies received similar demands and complied, quietly, because refusing was not a viable option?

This is the pattern that US absence from IASEAI makes visible. The country that built the most capable AI systems in the world has chosen to govern them primarily through pressure and procurement rather than frameworks and accountability. That is not a governance system. That is luck. And it is the kind of luck that does not repeat reliably across an entire industry.

The Job Cuts That Are Actually An AI Safety Story

The same week IASEAI concluded, Block announced the layoff of 4,000 employees, nearly half its workforce. Jack Dorsey’s shareholder letter was direct: AI automation over headcount. Block’s stock rose 24%. Dorsey predicted most companies would follow within a year.

This is not a jobs story. It is an AI safety story the safety community has been slow to claim. When AI systematically decouples capital from labor, employment-linked tax revenues contract, consumer demand erodes, and social stability degrades faster than any safety net was designed to respond. Unlike previous technological disruptions, the speed and concentration of AI displacement is outpacing the historical pattern of new work creation. The question for board directors is not whether their industry reaches the Block conclusion. It is what happens to their customer base when their customers’ industries get there first.

What Is The Best AI Governance Framework Available Right Now?

The most actionable idea from IASEAI came from Gillian Hadfield, Bloomberg Distinguished Professor of AI Alignment and Governance at Johns Hopkins, and the team at Fathom led by Bri Treece. Their Independent Oversight Marketplace for AI, built around Independent Verification Organizations, is the most practical governance framework currently in circulation. Expert-led IVOs verify AI safety standards. State governments authorize the marketplace. Companies that earn certification gain a credible trust signal. The framework moves at the pace of innovation rather than legislation.

It has one structural flaw that honest advocates, including Nitzberg, have been willing to name directly. Voluntary certification creates adverse selection. The organizations most eager to seek verification are, almost by definition, not the organizations most likely to be the problem. Without embedding IVO certification into procurement requirements, liability exposure, or insurance pricing, the framework risks becoming a trust signal for organizations that never needed the signal in the first place.

The path forward is straightforward but requires saying it plainly: voluntary governance is the bridge to mandatory governance. Companies that build toward IVO certification now will have structural advantages when the mandate arrives. Boards that treat AI safety as a compliance cost today are making the same mistake organizations made when they treated cybersecurity as an IT expense in 2010. Customers, regulators, and talent are all asking the same question: can we trust your AI?

The Alpha Institute for AI Governance is one IVO built specifically for the boardroom, helping directors assess and verify AI governance maturity at the organizational level. As autonomous agents begin transacting on behalf of companies in ways no current legal framework anticipated, independent verification is a fiduciary question, not a reputational one.

Four Questions Every Board Must Answer Before The Next Meeting

As one IASEAI participant put it during the final workshop: “We are building the plane, flying it, and writing the safety manual simultaneously. The question is whether we finish the manual before the turbulence gets worse.”

The turbulence is already here. Four questions cut to what matters.

Can your board define “safe AI” in technical terms rather than compliance terms, in a single sentence it wrote itself? If not, you are governing a system you have not defined.

Where are autonomous AI agents making or influencing decisions on behalf of your organization right now, without human review before those decisions have consequences? Not in theory. Right now.

Which of your AI vendors would have complied with the Pentagon’s demand if they lacked Anthropic’s profile and resolve? Do you know your vendors’ safety commitments well enough to answer that?

When the Block workforce thesis reaches your industry, what happens to the customers of every competitor that makes the same decision? Have you modeled the demand destruction on the other side of your cost savings?

The organizations that navigate what comes next will not be the ones that moved fastest or the ones that moved most cautiously. They will be the ones that knew precisely what they were building, what it was capable of doing without them, and what they were responsible for when it did.

The manual is not finished. The plane is already in the air. The only question that matters now is who is writing the next page.
