Alpha Leaders
Innovation

NIST AI Guidelines Misplace Responsibility For Managing Risks

By Press Room · 5 September 2024 · 4 Mins Read

Policymakers are scrambling to keep pace with technological advancements in artificial intelligence. The recent release of draft guidelines from the U.S. AI Safety Institute, a newly created office within the National Institute of Standards and Technology (NIST), is the latest example of government struggling to keep up. As with so many policies emerging from President Biden’s 2023 Executive Order on AI, the government cure may be worse than the AI disease.

NIST is a well-respected agency known for setting standards across a variety of industries. In its document, “Managing misuse risks in dual-use foundation models,” the agency has proposed a set of seven objectives for managing AI misuse risks. These range from anticipating potential misuse to ensuring transparency in risk management practices. While technically non-binding, NIST guidelines can find their way into binding legislation. For instance, California’s SB 1047 AI legislation references NIST standards, and other states are likely to follow suit.

This is problematic because the proposed guidelines have significant shortcomings that should be addressed before the document is finalized. A primary concern is their narrow focus on the initial developers of foundation models, seemingly overlooking the roles of downstream developers, deployers, and users in managing risks.

This approach places an enormous burden on model developers to anticipate and possibly mitigate every conceivable risk. The guidelines themselves acknowledge the difficulty of this task in the “challenges” section.

The proposed risk measurement framework asks developers to create detailed threat profiles for different actors, estimate the scale and frequency of potential misuse, and assess impacts. These are tasks that even national security agencies struggle to do effectively. This level of analysis for each model iteration could significantly slow down AI development and deployment.
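To make the scale of that ask concrete, consider a toy sketch of the kind of expected-harm roll-up such a framework implies. The actor profiles, field names, and every number below are hypothetical illustrations for this article, not values or terminology from the NIST draft:

```python
from dataclasses import dataclass


@dataclass
class ThreatProfile:
    """Hypothetical profile of one class of misuse actor.

    The draft asks developers to estimate figures like these for each
    actor type, for each model iteration; the values here are invented.
    """
    actor: str
    attempts_per_year: float  # estimated frequency of misuse attempts
    success_rate: float       # estimated probability an attempt causes harm
    impact_score: float       # estimated severity on an arbitrary 0-10 scale


def expected_annual_harm(profiles):
    # Naive expected-value roll-up: frequency x probability x severity,
    # summed over all modeled actor classes.
    return sum(p.attempts_per_year * p.success_rate * p.impact_score
               for p in profiles)


profiles = [
    ThreatProfile("lone hobbyist", attempts_per_year=1000,
                  success_rate=0.001, impact_score=2.0),
    ThreatProfile("criminal group", attempts_per_year=50,
                  success_rate=0.05, impact_score=7.0),
]
print(expected_annual_harm(profiles))  # 2.0 + 17.5 = 19.5
```

The arithmetic is trivial; the hard part is that every input is guesswork. Each estimate compounds the uncertainty of the others, and the draft expects this analysis to be redone for each model release, which is precisely the burden at issue.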

The danger is that these risk analyses will become a lever that regulators use to impose an overly cautious approach to AI development and innovation. We’ve seen similar precautionary logic embedded in environmental policy, such as the National Environmental Policy Act, which has often hindered economic growth and progress.

The guidelines seem to overlook the distributed nature of risk management in AI ecosystems. Different risks are best addressed by different actors at various stages of the AI lifecycle. Some risks can be mitigated by model developers, others by end-users or intermediary companies integrating AI into their products. In some cases, ex-post legal liability regimes might provide the most effective incentives for responsible AI use.

Another critical issue is the potential impact on open-source AI development. The proposed guidelines may be particularly challenging for open-source projects to implement, disadvantaging them compared to closed-source models. This raises broader questions about the relative risks and benefits of open versus closed AI development.

In the context of a hypothetical superintelligent AI, open-source models might indeed create unique and deeply concerning risks. However, at current technology levels, the benefits of open-source AI—including transparency, collaborative improvement, and democratized access—are substantial. Furthermore, an open-source approach to AI development could conceivably lead to more resilient and adaptable systems in the long run, even with superintelligent models, as systems constantly evolve to address new threats. But this will need to be studied in greater detail.

While NIST’s effort to provide guidelines for safe AI development is commendable, the current draft needs refinement. A more balanced approach would consider the roles and responsibilities of various actors throughout the AI value chain. It should provide flexible guidance that can be adapted to different contexts and types of AI systems, rather than a one-size-fits-all approach focused exclusively on initial developers.

NIST should craft guidelines that recognize the diverse players in the AI landscape, from garage startups to tech giants, from end-users to intermediaries. By acknowledging the distributed nature of risk management in AI ecosystems, NIST can create a framework that better addresses safety because it assigns responsibility to those best positioned to manage risks. This revised approach would better reflect the reality of AI development and deployment, where risks and responsibilities are shared across a network of developers, users, and intermediaries.

Ultimately, effective AI governance requires a nuanced understanding of the technology’s lifecycle and the diverse stakeholders involved in its creation and use. NIST’s current approach to risk management lacks this understanding, but with some additional effort, a course correction could be achieved.

Tags: AI, AI Safety Institute, Artificial Intelligence, Biden, Commerce Department, Foundation Models, Guidance, National Institute of Standards and Technology, Risk