Policymakers are scrambling to keep pace with technological advancements in artificial intelligence. The recent release of draft guidelines from the U.S. AI Safety Institute, a newly created office within the National Institute of Standards and Technology (NIST), is the latest example of government struggling to keep up. As with so many policies emerging from President Biden’s 2023 Executive Order on AI, the government cure may be worse than the AI disease.
NIST is a well-respected agency known for setting standards across a variety of industries. In its draft document, “Managing Misuse Risk for Dual-Use Foundation Models,” the agency proposes seven objectives for managing AI misuse risks, ranging from anticipating potential misuse to ensuring transparency in risk management practices. While technically non-binding, NIST guidelines can find their way into binding legislation: California’s SB 1047 AI legislation, for instance, references NIST standards, and other states are likely to follow suit.
This is problematic because the proposed guidelines have significant shortcomings that should be addressed before the document is finalized. A primary concern is the guidelines’ narrow focus on the initial developers of foundation models, which seemingly overlooks the roles of downstream developers, deployers, and users in managing risks.
This approach places an enormous burden on model developers, who are expected to anticipate, and where possible mitigate, every conceivable risk. The guidelines themselves acknowledge the difficulty of this task in their “challenges” section.
The proposed risk measurement framework asks developers to create detailed threat profiles for different categories of actors, estimate the scale and frequency of potential misuse, and assess the resulting impacts. These are tasks that even national security agencies struggle to perform effectively. Requiring this level of analysis for each model iteration could significantly slow AI development and deployment.
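To make the scale of that documentation burden concrete, consider a minimal, purely hypothetical sketch of what a single threat-profile entry might look like if captured as structured data. The field names below are our own illustration, not a schema drawn from the NIST draft.

```python
# Hypothetical illustration only; NIST's draft does not prescribe this schema.
from dataclasses import dataclass, field

@dataclass
class ThreatProfile:
    actor: str                    # e.g., "low-skill fraudster", "state-backed group"
    misuse_scenario: str          # what the actor might attempt with the model
    estimated_frequency: str      # developer's guess: "rare", "occasional", "frequent"
    estimated_scale: str          # how many people or systems could be affected
    impact_assessment: str        # severity of harm if the misuse succeeds
    mitigations: list[str] = field(default_factory=list)

# A single model release could require dozens of entries like this one, and every
# new iteration or fine-tune would reopen the analysis.
example = ThreatProfile(
    actor="low-skill fraudster",
    misuse_scenario="generating phishing emails at scale",
    estimated_frequency="frequent",
    estimated_scale="thousands of targets per campaign",
    impact_assessment="moderate financial harm spread across many victims",
    mitigations=["usage policies", "abuse monitoring by downstream deployers"],
)
```

Even this toy example shows how speculative the inputs are; multiplying it across actors, scenarios, and model versions is the workload the draft quietly assumes developers can bear.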
The danger is that these risk analyses will become a lever regulators use to impose an overly cautious approach to AI development and innovation. We have seen similar precautionary logic embedded in environmental policy, most notably the National Environmental Policy Act, which has often hindered economic growth and progress.
The guidelines seem to overlook the distributed nature of risk management in AI ecosystems. Different risks are best addressed by different actors at various stages of the AI lifecycle. Some risks can be mitigated by model developers, others by end-users or intermediary companies integrating AI into their products. In some cases, ex-post legal liability regimes might provide the most effective incentives for responsible AI use.
Another critical issue is the potential impact on open-source AI development. The proposed guidelines may be particularly challenging for open-source projects to implement, disadvantaging them compared to closed-source models. This raises broader questions about the relative risks and benefits of open versus closed AI development.
In the context of a hypothetical superintelligent AI, open-source models might indeed create unique and deeply concerning risks. At current technology levels, however, the benefits of open-source AI, including transparency, collaborative improvement, and democratized access, are substantial. Furthermore, an open-source approach could conceivably yield more resilient and adaptable systems in the long run, even with superintelligent models, as openly developed systems constantly evolve to address new threats. But this question needs to be studied in greater detail.
While NIST’s effort to provide guidelines for safe AI development is commendable, the current draft needs refinement. A more balanced approach would consider the roles and responsibilities of the various actors throughout the AI value chain and provide flexible guidance that can be adapted to different contexts and types of AI systems, rather than imposing a one-size-fits-all approach focused exclusively on initial developers.
NIST should craft guidelines that recognize the diverse players in the AI landscape, from garage startups to tech giants and from end users to intermediaries. By acknowledging that risk management is distributed across this ecosystem, NIST can create a framework that improves safety by assigning responsibility to those best positioned to manage each risk. Such an approach would better reflect the reality of AI development and deployment, where risks and responsibilities are shared across a network of developers, users, and intermediaries.
Ultimately, effective AI governance requires a nuanced understanding of the technology’s lifecycle and the diverse stakeholders involved in its creation and use. NIST’s current approach to risk management lacks this understanding, but with some additional effort, a course correction could be achieved.