The Deferral Trap: Compounding Risk and AI Adoption Governance
April 2026
By David Mussington, Ph.D., CISSP, DDN QTE,
ICIT Fellow, Co-Chair, ICIT FCEB Resilience Center
On April 7, 2026, Anthropic took an unusual step. It published a 244-page system card for
a model it had no intention of releasing to the public. Claude Mythos Preview —
described in internal company documents as the most capable model it had ever
developed — had demonstrated cybersecurity capabilities so advanced that Anthropic
concluded broad release was not responsible. Instead, it launched Project Glasswing: a
gated initiative providing access to a curated set of technology partners, with the
explicit goal of letting defenders get ahead of the model's offensive potential before
equivalent capabilities proliferated to actors less committed to responsible deployment.
Anthropic's frontier red team found that Mythos Preview was capable of identifying and
then exploiting zero-day vulnerabilities in every major operating system and every major
web browser — with many vulnerabilities ten or twenty years old, and with no human
involved in either discovery or exploitation after an initial prompt. In one documented
case, Mythos Preview fully autonomously identified and exploited a 17-year-old remote
code execution vulnerability in FreeBSD, triaged as CVE-2026-4747, that allows an
unauthenticated attacker anywhere on the internet to gain root on a machine running NFS.
anywhere on the internet. Critically, Anthropic did not explicitly train Mythos Preview to
have these capabilities. They emerged as a downstream consequence of general
improvements in code, reasoning, and autonomy — the same improvements that make
the model substantially more effective at patching vulnerabilities also make it
substantially more effective at exploiting them.
The Mythos moment is significant for reasons that extend beyond its immediate
cybersecurity implications. It establishes, with unusual clarity, that AI capability has
crossed a threshold in the offensive domain — and that the defensive and governance
infrastructure available to most institutions has not kept pace. That gap is not a
projected risk. It is a present condition.
This paper argues that the gap has a specific and underappreciated cause: the
systematic underweighting of deferral costs in institutional AI risk calculations.
Organizations and governance bodies that have treated AI skepticism as the default
responsible posture have implicitly assumed a stable baseline — one in which restraint
preserves optionality without accumulating cost. That assumption is wrong. The baseline
is not stable. And in an environment where adversarial AI capability is advancing on
timelines that do not pause for institutional deliberation, skepticism toward AI adoption
does not reduce risk. It compounds it.
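The compounding claim can be made concrete with a deliberately simple toy model. All parameters below (a fixed quarterly growth rate for adversarial capability, a flat defensive baseline during deferral) are illustrative assumptions for exposition, not figures from the paper: the point is only that when one side grows multiplicatively and the other is held constant, the exposure gap compounds rather than merely accumulates.

```python
# Toy sketch of the deferral argument. Hypothetical parameters:
# adversarial capability grows a fixed fraction per quarter, while a
# deferring institution's defensive capability stays flat at 1.0.

ADVERSARY_GROWTH = 0.15   # assumed 15% capability growth per quarter
QUARTERS = 8              # two years of institutional deferral

def exposure_gap(quarters: int, growth: float) -> float:
    """Ratio of adversarial to (static) defensive capability after deferral."""
    adversary = 1.0
    for _ in range(quarters):
        adversary *= 1.0 + growth   # multiplicative, not additive, growth
    return adversary                 # defense held constant at 1.0

for q in (1, 4, 8):
    print(q, round(exposure_gap(q, ADVERSARY_GROWTH), 2))
# 1 quarter  -> gap 1.15
# 4 quarters -> gap 1.75
# 8 quarters -> gap 3.06
```

Under these toy assumptions, two years of deferral roughly triples the gap, while the first quarter added only 15%: each deferred quarter is costlier than the last, which is the asymmetry the paper calls compounding risk.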
The argument proceeds in five sections. Section 1 examines the logical structure of the
inversion problem — why caution, in a deteriorating threat environment, is not cost-free.
Section 2 grounds that claim empirically, drawing on Volt Typhoon, Salt Typhoon, Iranian
ICS operations, and Mythos as four reference points that together characterize the
current baseline. Section 3 develops the compounding risk mechanism — the specific
ways in which deferral accumulates risk asymmetrically over time. Section 4 describes
what governed adoption looks like in practice, establishing that the choice between
capability and accountability is not forced. Section 5 draws out the policy implications
for institutions operating in the current environment.
VIEW AND DOWNLOAD THE WHITE PAPER
About ICIT
The Institute for Critical Infrastructure Technology (ICIT) is a nonprofit, nonpartisan, 501(c)(3) think tank with the mission of modernizing, securing, and making resilient critical infrastructure that provides for people’s foundational needs. ICIT takes no institutional positions on policy matters. Rather than advocate, ICIT is dedicated to being a resource for the organizations and communities that share our mission. By applying a people-centric lens to critical infrastructure research and decision making, our work ensures that modernization and security investments have a lasting, positive impact on society. Learn more at www.icitech.org.



