Operating at the Speed of the Adversary
April 2026
Author: Dr. Darren Death, ICIT Fellow
In my previous article, "AI Vulnerability Discovery and the Case for Systems Security Engineering", I argued that AI-driven vulnerability discovery turns secure system design into an engineering requirement. NIST SP 800-160 established that framework in 2016, and Revision 1 updated it in 2022. The framework has been available for years, but many organizations never operationalized systems security engineering in their delivery model because functionality, speed, and legacy architecture pressures dominated implementation decisions. AI-driven vulnerability discovery has changed the risk calculation. Systems that were built to work but not engineered to be secure now carry an active and growing liability.
That article addressed how systems should be built. This one addresses how organizations must operate from where they actually are, which for most is not a clean, well-architected starting point but rather decades of accumulated technical decisions that were never evaluated through a security engineering lens. The threat environment is not waiting for organizations to complete a multi-year architecture transformation. The operational response has to begin now, with the systems, the infrastructure, and the technical debt that currently exist.
This is the second in a series of articles examining the security implications of AI-driven vulnerability discovery. The first addressed the engineering foundation. This one addresses the operational reality.
The Operational Starting Point
Most organizations are not operating from the kind of engineered security architecture that 800-160 describes. They operate environments assembled over years or decades through implementation decisions made to deliver capability, not to preserve a coherent security architecture. Flat network segments remain because they were easier to deploy and maintain. Trust relationships between systems remain broader than they should be because they were never revisited after initial implementation. Internet-facing services persist because accessibility was treated as a default requirement rather than a security decision.
Organizations will need to adopt measures such as zero trust, VulnOps, and AI-augmented defense within environments that were not designed to support them. AI-driven vulnerability discovery and exploitation are advancing at a pace that will not wait for organizations to resolve their technical debt before that debt is used against them. The speed at which adversaries will operate against these environments is unlike anything the industry has faced, and organizations that delay operational improvements while planning ideal architectures will find that the threat arrived before the plan was complete.
Rethinking Internet Exposure
One of the most consequential and most overlooked operational decisions an organization makes is what it exposes to the internet. For much of the past two decades, the default trajectory has been toward greater connectivity. Remote access, API-driven integrations, publicly accessible management interfaces, and the general expectation that services should be reachable from anywhere have collectively expanded the internet-facing attack surface of most organizations well beyond what their mission requirements actually demand.
Cloud and SaaS adoption have accelerated this trend in ways that are not always visible to the organizations involved. Cloud platforms make it remarkably easy to publish services, and the shared responsibility model can create a false sense of security in which organizations assume the platform provider is handling protections that are in fact the customer's responsibility. Misconfigurations, overly permissive default settings, and storage or API endpoints left publicly accessible without deliberate intent are common findings across cloud environments.
Compounding this, most organizations have not resourced their security programs to adequately protect the internet-facing footprint they have accumulated. The attack surface has grown incrementally over years, but the staffing, tooling, and monitoring required to defend it have not kept pace.
In an environment where AI can enumerate an organization's external attack surface and probe it for exploitable flaws at machine speed, the cost of unnecessary internet exposure has changed materially. Every service that is accessible from the internet is a service that an adversary's AI can reach, analyze, and attempt to exploit without needing to first establish a foothold inside the network. The decision about what to expose is no longer a convenience or performance question. It is a security architecture decision with direct risk consequences.
Organizations should be evaluating their internet-facing footprint with significantly more rigor than most currently apply. This means asking, for each externally accessible service, whether internet exposure is required by the mission or whether it exists because it was the path of least resistance during deployment. Services that do not need to be internet-facing should not be. The more nuanced problem is services that organizations believe are adequately controlled but are in practice far more exposed than assumed. Consider enterprise collaboration platforms that are accessible from any device on any network, despite the organization having conditional access and zero trust capabilities available to restrict them, or environments where any endpoint, managed or unmanaged, can download the full contents of a file repository or upload to it without restriction, while the organization reports data loss prevention controls as being in place. These are missing controls that organizations have either not recognized as gaps or have reported as addressed when they are not effectively implemented.
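As a concrete illustration of that evaluation, the sketch below audits one common source of unintended exposure: cloud security groups that allow inbound traffic from anywhere. It is a minimal sketch, assuming an AWS environment with boto3 and configured credentials; the same pattern applies to any provider that exposes network configuration through an API.

```python
import boto3

# Sketch: flag security groups that permit inbound traffic from the
# open internet. Assumes AWS credentials are already configured.
ec2 = boto3.client("ec2")

def find_open_ingress():
    findings = []
    paginator = ec2.get_paginator("describe_security_groups")
    for page in paginator.paginate():
        for sg in page["SecurityGroups"]:
            for rule in sg.get("IpPermissions", []):
                open_v4 = any(r.get("CidrIp") == "0.0.0.0/0"
                              for r in rule.get("IpRanges", []))
                open_v6 = any(r.get("CidrIpv6") == "::/0"
                              for r in rule.get("Ipv6Ranges", []))
                if open_v4 or open_v6:
                    findings.append({
                        "group": sg["GroupId"],
                        "name": sg.get("GroupName", ""),
                        "protocol": rule.get("IpProtocol"),
                        "from_port": rule.get("FromPort"),
                        "to_port": rule.get("ToPort"),
                    })
    return findings

if __name__ == "__main__":
    for f in find_open_ingress():
        print(f"{f['group']} ({f['name']}): {f['protocol']} "
              f"ports {f['from_port']}-{f['to_port']} open to the internet")
```

Each flagged rule then becomes a forced decision: either the exposure is documented as a mission requirement with an associated risk assessment, or it is closed.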
This is not an argument against cloud adoption or against internet-connected services. It is an argument that the decision to expose a service to the internet should be a deliberate, documented security decision with an associated risk assessment, not an inherited default. In the current threat environment, every unnecessary point of internet exposure is an unnecessary point of AI-addressable attack surface.
Accelerating Zero Trust Adoption
Zero trust has been discussed extensively across the industry, and the concept is well understood in principle. The practical challenge is that most organizations have implemented it partially, inconsistently, or not at all. The gap between zero trust as a strategic direction and zero trust as an operational reality is significant in most environments, and the current threat landscape makes closing that gap substantially more urgent.
The core premise of zero trust, that no user, device, or network location should be implicitly trusted, directly addresses the conditions that AI-driven attacks exploit. When an adversary can discover and exploit a vulnerability in hours, the architecture's ability to limit what that exploit achieves determines whether the result is a contained event or a broader compromise. Flat networks with implicit trust allow lateral movement. Over-privileged service accounts provide escalation paths. Systems that authenticate at the perimeter but not between internal components allow an attacker who gains any foothold to traverse the environment with minimal additional effort.
For organizations with significant technical debt, implementing zero trust is not a single initiative. It is a sustained effort to systematically reduce implicit trust across the environment. The practical starting points are well established. Network segmentation limits the blast radius of any single compromise. Identity-based access controls, applied consistently to both human users and service accounts, replace location-based trust assumptions. Phishing-resistant multi-factor authentication reduces the effectiveness of credential-based attacks. Egress filtering constrains the ability of compromised systems to communicate with external command infrastructure. Privilege reduction across service accounts and administrative access limits what an attacker can do after initial access.
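To make the shift from location-based to identity-based trust concrete, here is a minimal sketch of an authorization check that never consults network location. The principals, resources, and posture signals are illustrative assumptions; a real deployment would delegate these decisions to an identity provider and a policy engine rather than application code.

```python
from dataclasses import dataclass

# Sketch: identity-based access decision that ignores network location.
# All principals, resources, and policy entries are illustrative.

@dataclass
class AccessRequest:
    principal: str              # human user or service account
    device_managed: bool        # posture signal from device management
    mfa_phishing_resistant: bool
    resource: str
    action: str

# Explicit allowlist of (principal, resource, action). Anything not
# listed is denied, for service accounts and humans alike.
POLICY = {
    ("svc-reporting", "warehouse.sales", "read"),
    ("alice", "warehouse.sales", "read"),
    ("alice", "warehouse.sales", "write"),
}

def authorize(req: AccessRequest) -> bool:
    # No implicit trust: the decision rests on identity, device posture,
    # and authentication strength, never on where the request came from.
    if not req.device_managed:
        return False
    if not req.mfa_phishing_resistant:
        return False
    return (req.principal, req.resource, req.action) in POLICY

print(authorize(AccessRequest("alice", True, True, "warehouse.sales", "write")))        # True
print(authorize(AccessRequest("alice", True, False, "warehouse.sales", "write")))       # False: weak MFA
print(authorize(AccessRequest("svc-reporting", True, True, "warehouse.sales", "write")))  # False: not granted
```

The design point is that the service account is held to the same explicit grant as the human user, which is exactly the implicit trust reduction the controls above describe.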
None of these measures are new. What has changed is the urgency. When vulnerability discovery and exploitation operate at machine speed, the architectural controls that were always advisable become the controls that determine whether the organization can contain an incident or whether it escalates beyond containment. Organizations that have been treating zero trust as a multi-year roadmap item need to evaluate which elements can be accelerated, because the threat timeline against which they are operating has compressed.
Establishing Vulnerability Operations
VulnOps was described by Heather Adkins, Gadi Evron, and Bruce Schneier in an October 2025 CSO essay as an operating model for continuous vulnerability discovery and response in an AI-accelerated environment. The concept is useful because it frames vulnerability discovery as a standing operational capability applied across proprietary code, third-party dependencies, infrastructure configurations, and acquired software, rather than as a periodic assessment activity.
AI systems have demonstrated the ability to accelerate vulnerability identification and exploit development to the point that organizations should assume adversaries will increasingly compress discovery-to-exploit timelines. If the organization is not applying that level of analysis to its own systems, adversaries will. The difference between finding issues proactively and responding after external exploitation attempts is the difference between planned remediation and incident response. Traditional vulnerability management processes were not designed for that pace.
Building a VulnOps capability requires several foundational elements. The organization must maintain a current and accurate inventory of its software environment, including all dependencies and third-party components. It must have the infrastructure to run AI-driven analysis against that environment on a continuous basis. It must have the triage discipline to evaluate findings against mission criticality, system reachability, and available containment options rather than treating every finding with equal urgency. And it must have the remediation authority and technical capability to act on findings before they become externally exploitable.
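The triage element in particular benefits from being made explicit. The sketch below shows one illustrative way to rank findings by reachability, observed exploitation, and mission criticality rather than severity score alone; the weights and fields are assumptions for the example, not a standard.

```python
from dataclasses import dataclass

# Sketch: contextual triage instead of treating every finding with
# equal urgency. Weights are illustrative assumptions.

@dataclass
class Finding:
    cve_id: str
    internet_reachable: bool   # from the attack surface inventory
    exploit_observed: bool     # known exploitation in the wild
    mission_critical: bool     # from the system inventory
    cvss_base: float           # 0.0 - 10.0

def triage_score(f: Finding) -> float:
    score = f.cvss_base
    if f.internet_reachable:
        score *= 2.0   # reachable by an adversary's AI with no foothold
    if f.exploit_observed:
        score *= 1.5
    if f.mission_critical:
        score *= 1.5
    return score

findings = [
    Finding("CVE-0000-0001", True, True, True, 7.5),
    Finding("CVE-0000-0002", False, False, True, 9.8),
    Finding("CVE-0000-0003", True, False, False, 6.1),
]

for f in sorted(findings, key=triage_score, reverse=True):
    print(f"{f.cve_id}: priority {triage_score(f):.1f}")
```

Note the result: the internet-reachable, actively exploited 7.5 outranks the internal 9.8, which is the point of triaging against reachability and mission context rather than base severity alone.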
For organizations that are not building significant amounts of custom software, VulnOps still applies. The function extends to evaluating the security posture of acquired software, vendor-provided platforms, and the configuration state of deployed infrastructure. Organizations that primarily consume rather than build software still operate a software environment that is subject to the same AI-driven discovery that affects custom code.
AI Agents as the Defensive Enabler
The operational changes described above (reducing attack surface, implementing zero trust, standing up VulnOps) are not achievable at the required speed through manual effort alone. The volume of systems to inventory, configurations to evaluate, code to analyze, patches to triage, and decisions to make exceeds what security teams can sustain at human pace against a threat environment that operates at machine pace.
AI agents, which are often used as coding tools but can serve many other purposes, are necessary to execute these operations at the required speed. The same AI capabilities that have created the offensive advantage can be directed inward for defensive purposes. Agents can perform security reviews of code and configurations, analyze dependencies for known and novel vulnerabilities, triage findings against organizational context, assist with remediation validation, accelerate incident investigation, and support governance activities including evidence collection and compliance documentation.
The critical point is that agent support for defensive operations is moving from optional experimentation into a practical requirement. Security teams that operate only at human speed will have difficulty sustaining the pace created by AI-driven vulnerability discovery and exploitation. Public demonstrations over the past year have shown compressed timelines for both vulnerability discovery and rapid escalation, which changes what security operations must be able to process and respond to in near real time.
Agent adoption also introduces its own security requirements. Agents operate with privilege, interact with sensitive systems, and introduce a new class of supply chain risk through their tooling, configurations, integrations, and LLM usage. Organizations deploying agents must define scope boundaries, apply access controls consistent with the principle of least privilege, audit agent activity, and evaluate the security of the agentic supply chain, including model providers, tool integrations, and configuration artifacts. Defending your agents is as necessary as deploying them.
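A minimal sketch of what those scope boundaries and audit requirements can look like in code follows. The tool names, limits, and audit sink are hypothetical; the pattern is an allowlisted, rate-limited, logged dispatch layer sitting between the agent and its tools.

```python
import json
import time

# Sketch: guardrail wrapper for an agent's tool calls. Tool names and
# the audit sink are illustrative assumptions.

ALLOWED_TOOLS = {
    "read_repo": {"max_calls": 200},
    "run_static_analysis": {"max_calls": 50},
    # Deliberately absent: deployment, deletion, credential access.
}

call_counts: dict[str, int] = {}

def audited_tool_call(agent_id: str, tool: str, args: dict):
    # Scope boundary: anything outside the allowlist is refused.
    if tool not in ALLOWED_TOOLS:
        raise PermissionError(f"tool '{tool}' is outside this agent's scope")
    # Least privilege in depth: even permitted tools are rate-limited.
    call_counts[tool] = call_counts.get(tool, 0) + 1
    if call_counts[tool] > ALLOWED_TOOLS[tool]["max_calls"]:
        raise PermissionError(f"rate limit exceeded for '{tool}'")
    # Append-only audit record written before execution, so the
    # attempt is logged even if the tool itself fails.
    record = {"ts": time.time(), "agent": agent_id, "tool": tool, "args": args}
    with open("agent_audit.log", "a") as log:
        log.write(json.dumps(record) + "\n")
    return dispatch(tool, args)

def dispatch(tool: str, args: dict):
    # Placeholder for the real tool implementations.
    return f"executed {tool}"

print(audited_tool_call("triage-agent", "read_repo", {"path": "src/"}))
try:
    audited_tool_call("triage-agent", "rotate_credentials", {})
except PermissionError as err:
    print(f"blocked: {err}")
```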
Conclusion
The previous article in this series established that the underlying problem is one of engineering: systems were built to work rather than engineered to be secure. This article addresses the operational reality that most organizations cannot wait for that engineering problem to be resolved before they act. They must defend the environments they have, with the technical debt they carry, against adversaries that are operating at a speed and scale that the industry has not previously encountered.
The operational response requires deliberate decisions about what remains exposed to the internet, acceleration of zero trust implementation, establishment of vulnerability operations, and use of AI where the security workload already exceeds human pace. Managing vulnerabilities is not new. What has changed is the time available to find, validate, and remediate them before they are used.
Organizations that built their environments over decades without security architecture as a governing constraint are not going to re-engineer those environments overnight. They can, however, begin immediate work to reduce unnecessary exposure, limit lateral movement, remove implicit trust, improve the speed of vulnerability discovery, and use AI to increase defensive throughput. Those changes will not eliminate the underlying architecture problem, but they can reduce the likelihood that existing weaknesses turn into full compromise.
There is more to address. How organizations authorize systems, validate controls with real technical evidence, integrate security into development pipelines, and maintain assurance continuously in production are all dimensions of this challenge that require dedicated attention. This series will continue with a focused examination of these topics and others as organizations work to address the challenges ahead.
This publication is written in the author’s personal capacity. Any views or opinions expressed are solely those of the author.
Dr. Darren Death
Darren Death is an ICIT fellow. He has extensive experience in leading enterprise efforts to secure information systems, protect privacy, and govern the responsible use of AI in alignment with federal mandates and mission-driven priorities.
About ICIT
The Institute for Critical Infrastructure Technology (ICIT) is a nonprofit, nonpartisan, 501(c)(3) think tank with the mission of modernizing, securing, and making resilient critical infrastructure that provides for people’s foundational needs. ICIT takes no institutional positions on policy matters. Rather than advocate, ICIT is dedicated to being a resource for the organizations and communities that share our mission. By applying a people-centric lens to critical infrastructure research and decision making, our work ensures that modernization and security investments have a lasting, positive impact on society. Learn more at www.icitech.org.