โ† Writing

Open Weights, Power, and Catastrophic Risk

Open-weight models solve one problem while worsening another. They reduce concentration of power in a handful of companies and governments, but they also increase the chance that dangerous capabilities spread to actors who should never have them, including in CBRN-relevant domains.

The right policy response is not "open everything" or "close everything." It is to build institutions that preserve access and competition where socially valuable, while making catastrophic misuse substantially harder.

Why demand for open weights may increase

As training gets more expensive, fewer organizations can fund frontier runs. At the same time, more companies are building products that depend on model access. If frontier models become selectively available, or are withdrawn from broad API access, pressure rises to secure durable access to open models.

Mythos-style gatekeeping dynamics make this pressure sharper. Once firms realize a small number of providers can unilaterally restrict access, they look for a fallback they can host, modify, and rely on.

The two strongest arguments for open weights

First, open weights can check corporate power. A small oligopoly of labs can set terms, prices, and access conditions for increasingly general-purpose systems. Open models create a fallback path for developers and researchers who would otherwise be structurally dependent on a few API vendors.

Second, open weights can check government power. Model providers are subject to domestic political and regulatory pressure. If states fear foreign providers could restrict or disable access for geopolitical reasons, sovereign compute investments become more attractive. Open weights are appealing in that environment because access cannot be revoked with one policy or one terms-of-service update.

Why open weights still matter for safety

Open release can enable independent safety research when closed labs impose legal or technical barriers on external evaluation. External scrutiny is often where hidden failure modes are surfaced, and some near-frontier open releases have already enabled high-value independent assessments.

In that sense, open weights are not only a market structure issue. They are also part of the accountability ecosystem.

The hard downside: catastrophic misuse risk

Recent capability signals make this risk concrete. Advanced systems have shown very high performance on technical biological knowledge tasks, and internal evaluations at leading labs have reported substantial human performance uplift in sensitive areas.

With closed models, safeguards like classifier layers, policy enforcement, and centralized logging can still block or detect a large fraction of dangerous behavior, even if imperfectly. With open weights, that control boundary disappears after download. Safety layers can be removed, local deployments can run without oversight, and models can be fine-tuned for exactly the capabilities society most needs to constrain.

A practical policy frame: ABC

Governments should treat this as an institutional design challenge. A workable starting point is an ABC framework:

  1. Access controls: tiered access for high-risk model classes, identity verification for sensitive capabilities, and clear restrictions on frontier-weight distribution where misuse potential is unusually high.
  2. Baseline accountability: standardized evaluations, incident reporting, provenance and watermarking norms where feasible, and liability structures that reward responsible release and penalize reckless deployment.
  3. Capacity building: public-interest compute, secure research environments, and funding mechanisms for open safety research so openness does not become synonymous with zero governance.

The goal is to preserve legitimate benefits of open ecosystems while making high-consequence misuse materially harder.

The economic constraint people skip

Open weights without a durable funding model for frontier research may simply lock in closed-lab capability leadership. A consortium model can work only if it solves training-scale financing, governance, and IP pooling problems that have historically broken similar efforts.

In other words, if we want open alternatives to remain credible at the frontier, governance design must include economics, not just licensing ideology.

Conclusion

Open weights are not purely good or purely bad. They are a structural hedge against concentrated power and a structural amplifier of misuse risk. Both are true at the same time.

The question is whether governments can build institutions fast enough to keep the upside while constraining the worst downside. If they can, open-weight ecosystems may remain compatible with safety. If they cannot, capability diffusion will outrun governance, and the costs will be paid by everyone.