Executing the RMF as an Engineering Discipline, not a Paperwork Exercise
This op-ed was originally published in SC Media.
October 27, 2025
Author: Dr. Darren Death, ICIT Fellow
The Risk Management Framework is intended to align engineering, operations, and governance with measurable control performance. In practice, some agencies have adapted it into an administrative process focused on documentation rather than active demonstration of protection.
The structure of RMF remains unchanged; however, its implementation has shifted toward treating each step as a documentation checkpoint rather than a technical validation. The framework manages risk across the system lifecycle by linking security requirements to architectural design, implementation practices, and operational telemetry.
When applied as designed, RMF supports ongoing authorization based on engineering evidence instead of periodic compliance reporting.
Agencies often regard RMF as an adjunct to engineering processes rather than integrating it as a foundational component. Categorization becomes an administrative classification exercise. Control selection often degrades into checklist generation detached from system design constraints. Assessments document controls but may overlook their performance under operational load.
This approach prevents the integration of shifted-left security and operational telemetry that would give engineering, operations, and authorization officials a shared, continuous understanding of the system’s real security posture. RMF does not prescribe this. The framework was designed to operate within engineering and operational processes, but organizational culture shifted it into a separate compliance function over time.
Restoring RMF to engineering practice
NIST SP 800-37 requires RMF tasks to be executed concurrently with SDLC processes, integrating security and privacy into system development rather than treating them as separate activities. Each development phase should generate security evidence as part of standard engineering output. Requirements analysis must define both functional behavior and security properties with equal specificity and testability.
System design translates those requirements into architectural mechanisms that implement protection through technical controls rather than administrative procedures. Implementation integrates security functionality into code, infrastructure configurations, and data flows using the same development and testing rigor applied to mission features. Verification extends these same automated testing practices into the operational baseline so that the validation logic used in development becomes the foundation for continuous monitoring in production.
This continuity ensures that control performance measured during testing is continuously verified through operational telemetry once systems are deployed, maintaining traceability between engineering evidence and live system assurance.
When RMF operates within engineering practice, categorization drives architecture decisions from the beginning. A system categorized as high impact for confidentiality requires architectural isolation, encryption at rest and in transit, and access controls that enforce least privilege through technical mechanisms rather than policy documents.
These requirements shape infrastructure design, network segmentation, identity architecture, and data protection strategies before implementation begins. Control selection becomes a technical specification exercise where each selected control maps to specific architectural components, configuration baselines, or code implementations that can be tested and verified.
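The control-to-component mapping described above can itself be expressed as data, so that controls with no implementing mechanism are detectable automatically rather than discovered during assessment. A minimal sketch in Python; the control IDs and component names are illustrative, not a prescribed schema:

```python
# Hypothetical traceability map: each selected control points to the
# architectural components, configuration baselines, or code modules
# that implement it.
CONTROL_MAP = {
    "AC-6":  ["iam-role-policies", "db-row-level-security"],  # least privilege
    "SC-28": ["kms-disk-encryption"],                         # data at rest
    "SC-8":  ["tls-termination-config"],                      # data in transit
    "AU-12": [],                                              # not yet mapped
}

def unimplemented_controls(control_map):
    """Return controls with no implementing component -- checklist
    entries detached from the design, the failure mode described above."""
    return sorted(cid for cid, comps in control_map.items() if not comps)

print(unimplemented_controls(CONTROL_MAP))  # ['AU-12']
```

A check like this can run in the same pipeline that validates the rest of the design, turning control selection into a testable specification.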
Assessments operate as continuous validation embedded in build and deployment workflows. Security functionality undergoes the same automated testing rigor applied to performance and availability. Code analysis tools verify that implemented logic, interfaces, and data handling conform to defined security policies and development standards.
Configuration management systems validate that deployed infrastructure matches approved baselines. Container and artifact scanning validate that all packaged components, dependencies, and images are free of known vulnerabilities and conform to approved security baselines before deployment. These validation steps generate evidence that security controls function as specified under operational conditions. When assessments operate this way, authorization decisions rest on verified system behavior and operational telemetry rather than documentation assertions about intended behavior.
This integration of automated testing and operational telemetry ensures that the validation performed during development directly supports ongoing authorization and continuous monitoring once systems are in production.
Engineering security into system architecture
NIST SP 800-160 defines systems security engineering as the application of security principles and technical methods within standard systems engineering activities. Security is established as a property of the system itself, achieved through disciplined engineering practices applied consistently across the system’s lifecycle. A secure system demonstrates protection through measurable operational behavior. Controls are implemented through architectural mechanisms that enforce system integrity, manage access, protect data in storage and transit, maintain configuration consistency, and generate telemetry necessary for analysis and investigation.
Security must be defined within the architecture baseline and revalidated with every change that alters system behavior or configuration. Systems that depend on shared infrastructure must inherit both protection mechanisms and the evidence those mechanisms generate. Authentication services, centralized logging platforms, and enterprise configuration management should be implemented once at the infrastructure layer and consumed by applications through secured interfaces.
This inheritance model reduces implementation complexity, ensures consistent control application across systems, and concentrates telemetry generation in platforms designed for continuous monitoring. By aligning the verification logic used during development with operational telemetry, these enterprise services extend automated validation into runtime environments, maintaining control assurance without duplication of testing effort.
Within RMF, control inheritance allows authorization decisions to scale across systems that rely on common enterprise services. When shared infrastructure enforces controls that have verified performance and security functionality and are subject to continuous monitoring, dependent systems inherit those safeguards as part of their authorization boundary.
The authorization boundary includes both the system and the inherited services it depends on. This approach converts shared infrastructure into an assurance platform where control verification occurs once and supports multiple authorization decisions. Architecture diagrams must document these inheritance relationships explicitly, showing which controls are implemented locally and which are consumed from enterprise services.
This shared telemetry-driven model ensures that each authorization decision reflects the combined evidence from both system and enterprise layers.
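The inheritance relationships above reduce to a simple computation: the effective control set at the authorization boundary is the union of locally implemented controls and those consumed from enterprise services. A minimal sketch, with hypothetical service names and control IDs:

```python
# Hypothetical enterprise services and the controls each one provides
# to every system that consumes it through secured interfaces.
ENTERPRISE_SERVICES = {
    "identity-platform": {"IA-2", "IA-5"},
    "central-logging":   {"AU-2", "AU-6"},
}

def effective_controls(local, inherited_from):
    """Union of locally implemented controls and controls inherited
    from each enterprise service inside the authorization boundary."""
    controls = set(local)
    for service in inherited_from:
        controls |= ENTERPRISE_SERVICES.get(service, set())
    return controls

boundary = effective_controls(
    local={"SC-8", "SC-28"},
    inherited_from=["identity-platform", "central-logging"],
)
print(sorted(boundary))
```

Keeping this mapping in machine-readable form makes the "implemented locally vs. consumed from enterprise services" distinction auditable alongside the architecture diagrams.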
When the foundational controls are engineered into the architecture and validated through shared infrastructure, future development can focus on business and mission functionality rather than rebuilding or revalidating the security foundation. This separation of concerns enables modular system evolution, where components can be updated or replaced without disrupting inherited protections or compromising authorization continuity.
Each module operates within a defined security envelope established by the enterprise layer, allowing innovation and change to proceed within controlled boundaries while maintaining the integrity of the overall system architecture.
Operational telemetry as evidence
RMF Step 6 defines continuous monitoring as the ongoing assessment of control effectiveness through automated data collection and analysis. In a unified DevSecOps and SecOps model, telemetry is not generated solely by security operations but by the systems and pipelines that create, deploy, and manage those controls. The system’s normal operation generates telemetry that shows whether its functions and safeguards perform as designed. That telemetry should drive authorization decisions directly rather than being summarized into periodic compliance reports. When agencies recreate this information manually through documentation exercises, they introduce latency, interpretation errors, and gaps between what systems do and what governance sees.
Establishing a telemetry foundation through shared infrastructure does not eliminate the need for continued engineering within individual systems. While enterprise monitoring provides visibility into baseline operations, each system must still generate telemetry that reflects its unique business and mission logic. Security-relevant data such as control outcomes, transaction integrity, and application behavior often require custom instrumentation. Without this engineering discipline, authorization decisions risk being based on partial information, where the necessary telemetry is not collected, leading to gaps in visibility and uncertainty about whether the system is operating as intended.
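The custom instrumentation described above can be as simple as emitting a structured record each time a system-specific control executes. A minimal sketch, assuming a hypothetical event schema; field names are illustrative:

```python
import json
import time

def control_event(control_id, outcome, detail=""):
    """Emit a structured, machine-readable record of a control outcome --
    the kind of system-specific instrumentation that enterprise baseline
    monitoring cannot provide on its own."""
    event = {
        "timestamp": time.time(),
        "control": control_id,
        "outcome": outcome,   # e.g. "pass", "fail", "degraded"
        "detail": detail,
    }
    return json.dumps(event)

# Example: an application-level integrity check reporting its result.
print(control_event("SI-7", "pass", "transaction ledger hash verified"))
```

Events in this form can flow into the same centralized logging platform that serves continuous monitoring, so authorization decisions draw on complete rather than partial information.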
When telemetry indicates a control degradation, the system’s authorization status should update automatically to reflect that operational state. A degraded state signals that remediation is required but does not necessarily mean the system must be taken offline.
The authorization decision depends on the control's function, the severity of degradation, and the risk that degradation introduces. Once the underlying issue is resolved and telemetry confirms restoration, authorization returns to normal.
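The degradation-to-status logic above can be sketched as a small function. The state names and severity thresholds here are illustrative assumptions, not values prescribed by RMF:

```python
def authorization_status(degradations):
    """Map the current set of control degradations to an authorization
    state. Thresholds and state names are illustrative only."""
    if any(d["severity"] == "critical" for d in degradations):
        return "suspended"   # risk owner must intervene before operation continues
    if degradations:
        return "degraded"    # remediation required; system stays online
    return "authorized"

# A moderate degradation flags remediation without taking the system offline.
print(authorization_status([{"control": "AU-2", "severity": "moderate"}]))

# Once telemetry confirms restoration, status returns to normal automatically.
print(authorization_status([]))
```

In practice this evaluation would run continuously against live telemetry, so the status shown to the authorizing official always reflects the current operational state.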
This model eliminates the lag between control failure and governance awareness. The authorizing official sees the same data that operators and engineers rely on for daily decision-making.
When a critical control fails, governance becomes aware simultaneously with operations, enabling coordinated response and continuous understanding of real security posture.
Shifting security fully left in development
Continuous authorization starts within development pipelines, where security is verified before any code, configuration, or container advances to production. RMF Steps 4 and 5, assessment and authorization, must operate as automated gates within build and deployment workflows.
Each pipeline execution evaluates code against secure coding standards, scans dependencies for known vulnerabilities, validates container images against approved baselines, and tests configurations against defined policies. This embeds verification early, ensuring that every artifact entering production already meets defined security expectations.
The automated test results produced here become the same evidence stream that later supports runtime monitoring, maintaining a single assurance thread from development through operations.
These checks enforce thresholds defined during authorization. A policy that prohibits critical vulnerabilities in production environments becomes an automated rule that halts pipeline execution when a critical vulnerability is detected.
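Such a rule can be sketched in a few lines. This is an illustrative gate, not a specific scanner's API; the finding structure is a hypothetical simplification of typical scan output:

```python
def gate_on_criticals(scan_findings):
    """Hypothetical pipeline gate: return False (block promotion) when
    any finding carries the severity the authorization policy prohibits."""
    blocking = [f for f in scan_findings if f["severity"] == "critical"]
    for f in blocking:
        print(f"BLOCKED: {f['id']} in {f['component']}")
    return not blocking

findings = [
    {"id": "CVE-2024-0001", "severity": "critical", "component": "libfoo"},
    {"id": "CVE-2024-0002", "severity": "low",      "component": "libbar"},
]
promote = gate_on_criticals(findings)  # False -> pipeline halts here
```

Wired into the build workflow, a False return becomes a nonzero exit code that stops the pipeline, making the authorization policy self-enforcing.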
Configuration drift limits are enforced through automated comparisons between deployed configurations and approved baselines. When a system fails these conditions, the pipeline prevents promotion to the next environment, blocking unsafe states from reaching production.
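The drift comparison can be sketched as a diff between the deployed configuration and the approved baseline; the setting names and values below are illustrative:

```python
def drift(deployed, baseline):
    """Return settings whose deployed value differs from the approved
    baseline, as (deployed_value, expected_value) pairs."""
    return {
        key: (deployed.get(key), expected)
        for key, expected in baseline.items()
        if deployed.get(key) != expected
    }

baseline = {"ssh_root_login": "no",  "tls_min_version": "1.2", "audit_log": "on"}
deployed = {"ssh_root_login": "yes", "tls_min_version": "1.2", "audit_log": "on"}

delta = drift(deployed, baseline)
print(delta)  # a non-empty delta blocks promotion to the next environment
```

An empty delta means the deployed state matches the approved baseline; anything else is evidence of drift that must be remediated before promotion.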
This approach converts authorization into a continuous engineering event rather than a milestone that occurs once before initial deployment. Each successful build represents a verified control state where automated tests confirmed that security requirements are satisfied.
Each failed test represents a risk condition that must be resolved before the system can advance. Once the system reaches production, the same validation logic continues through runtime monitoring.
Operational telemetry confirms that controls tested during development remain effective under production workloads. This telemetry-based validation continues into runtime through the same automation and monitoring frameworks used during development.
The thresholds established in pipelines remain active after deployment, maintaining authorization assurance as the system operates. This unified approach connects development, operations, and governance through shared, real-time evidence of control performance.
Governance through operational data
When authorization decisions are based on operational telemetry, governance becomes a continuous function of system operation rather than a periodic audit event. The same systems that measure performance, capacity, and availability also measure control effectiveness. Dashboards present a unified view of system health that includes both mission functionality and security posture. Decision-makers interpret data through a single operational context rather than translating between disconnected technical and compliance reporting chains.
This alignment closes the gap between compliance and operations, where separate teams often relied on different data and reached conflicting views of system state. Operational teams observe vulnerabilities and configuration changes through live monitoring, while compliance teams have traditionally depended on documentation or scheduled assessments that quickly become outdated. Authorization officials working from these static summaries lack visibility into current conditions. When governance, engineering, and operations rely on the same telemetry, they share a unified understanding of security posture and risk, ensuring that oversight reflects how the system actually performs.
Governance decisions gain precision and timeliness when they rely on the same operational evidence that drives engineering and operations. Dashboards display recent vulnerability scan results, patch compliance status, configuration consistency, and access activity across environments. They also present system-specific metrics defined during the design process, allowing decision-makers to evaluate how each system performs against its intended operational and security objectives. Live control status, incident metrics, and system health indicators show how safeguards function in production. Oversight decisions are grounded in current operational data rather than assumptions drawn from incomplete or outdated reports.
Organizational transformation
Reaching this operational state requires change across program management, engineering, operations, and governance. RMF does not exist as a separate compliance activity; it is a framework for aligning these functions through shared evidence and continuous validation. The framework must be applied by engineers who implement controls, developers who write secure code, operators who maintain systems in production, and program managers who allocate resources and make schedule decisions. Shared telemetry and shifted-left integration provide the technical foundation that unites these roles around measurable system protection.
Program and project managers must allocate time and resources for security implementation with the same discipline applied to functional requirements. Security testing cannot be optional work that teams defer when schedules compress. Architecture reviews must include security analysis from initiation so that protection mechanisms shape design decisions before implementation begins. When security work is treated as overhead that can be cut to meet delivery schedules, systems reach production without verified controls and authorization becomes a paperwork exercise disconnected from operational data.
Engineering teams must integrate security professionals who participate in architecture design, requirement analysis, and implementation planning. Security cannot be a separate function that reviews completed work and identifies deficiencies. It must be embedded in design discussions where architectural decisions are made, in code reviews where implementation patterns are established, and in testing where validation confirms that controls function correctly under operational conditions.
Operations teams manage control performance as part of normal system health, tracking it alongside availability, performance, and capacity indicators. Telemetry showing control degradation is handled through the same processes that address performance or resource issues, ensuring consistent visibility and timely response. Security monitoring functions as an integrated element of operational assurance, maintaining resilience through continuous awareness of system behavior.
Authorization status reflects the current state of the system and changes as the system evolves. It is not a milestone achieved at deployment and maintained through documentation reviews, but a continuous state verified through live operational data. Executive dashboards should display this status alongside system health and performance indicators, allowing leadership to see how current control effectiveness, vulnerability posture, and configuration stability affect overall risk. Governance decisions then draw from the same real-time information used by engineers and operators, ensuring that oversight reflects current operational reality rather than historical reports.
This transformation requires cultural change where security is recognized as an engineering discipline rather than compliance overhead. Everyone has a role in protecting the system, but ultimate accountability rests with executives and risk owners. The security requirements are their requirements: they define the conditions under which the organization operates safely and effectively. Leadership must set the tone by treating security performance as a measure of mission success, ensuring that engineering, operations, and governance work toward the same protection outcomes. Teams must understand that security requirements define expected system behavior that must be implemented, tested, and sustained through continuous measurement.
Conclusion
The objective is to use RMF as it was intended, as an engineering framework that links security requirements to system behavior and verifies those relationships continuously. The framework already defines a complete model for continuous authorization. It must be executed as part of engineering and operations rather than as a separate compliance process.
Security is designed into architecture, verified through automated testing before deployment, and validated continuously through operational telemetry once systems reach production. Authorization is maintained through evidence that demonstrates control effectiveness under actual operating conditions rather than administrative reviews of static documentation.
When RMF is applied as designed and integrated into system design, build automation, deployment pipelines, and continuous monitoring, it supports systems that stay authorized through verified operational performance and an understanding of real security posture.
Darren Death, ICIT Fellow, serves as the Chief Information Security Officer, Chief Privacy Officer, and Deputy Chief Artificial Intelligence Officer at the Export-Import Bank of the United States. In this role, he leads enterprise efforts to secure information systems, protect privacy, and govern the responsible use of AI in alignment with federal mandates and mission-driven priorities.
About ICIT
The Institute for Critical Infrastructure Technology (ICIT) is a nonprofit, nonpartisan, 501(c)(3) think tank with the mission of modernizing, securing, and making resilient critical infrastructure that provides for people’s foundational needs. ICIT takes no institutional positions on policy matters. Rather than advocate, ICIT is dedicated to being a resource for the organizations and communities that share our mission. By applying a people-centric lens to critical infrastructure research and decision-making, our work ensures that modernization and security investments have a lasting, positive impact on society.
Learn more at www.icitech.org/.
ICIT CONTACTS:
Parham Eftekhari
Founder and Chairman
Cory Simpson
Chief Executive Officer