Level 4 – Optimized

At the Optimized level, offensive security evolves into a sustained program that goes beyond identifying vulnerabilities to measuring resilience, validating detection and response, and informing business risk decisions. Because organizations vary in resources and maturity, Level 4 is divided into two sub-levels:

  • 4A (Foundationally Optimized): Represents the minimum practices that demonstrate optimization is underway, with regular red and purple team exercises and initial resilience metrics.

  • 4B (Fully Optimized): Represents full maturity at this stage, where adversary simulations are ongoing, threat intelligence is embedded into scenarios, and results directly inform enterprise risk management.

Level 4A: Foundationally Optimized

Outcomes

  • Annual red and purple team exercises are conducted to validate security controls and test the resilience of SOC/IR functions.

  • Threat-informed adversary scenarios are incorporated, simulating realistic attack paths.

  • Introductory tabletop exercises begin to validate coordination, escalation, and communication processes in response to simulated incidents.

  • Initial resilience metrics (MTTD, MTTR, detection coverage) are collected for critical systems (a minimal calculation sketch follows this list).

  • Findings are remediated and retested to confirm closure.

  • Results are reviewed by security leadership and used to inform remediation priorities.
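
Illustrative only: one minimal way the initial MTTD/MTTR figures could be derived from exercise records, assuming each record captures attack-start, detection, and containment timestamps. The field names and sample values are hypothetical, not a prescribed schema:

```python
# Minimal sketch: derive MTTD and MTTR (in minutes) from exercise records.
# Field names and sample values are illustrative, not a standard schema.
from datetime import datetime
from statistics import mean

FMT = "%Y-%m-%dT%H:%M:%S"

incidents = [
    {"start": "2025-03-04T09:00:00", "detected": "2025-03-04T09:42:00",
     "contained": "2025-03-04T13:10:00"},
    {"start": "2025-03-05T14:30:00", "detected": "2025-03-05T15:05:00",
     "contained": "2025-03-05T17:20:00"},
]

def minutes(a, b):
    """Elapsed minutes between two timestamps in FMT."""
    return (datetime.strptime(b, FMT) - datetime.strptime(a, FMT)).total_seconds() / 60

mttd = mean(minutes(i["start"], i["detected"]) for i in incidents)
mttr = mean(minutes(i["detected"], i["contained"]) for i in incidents)
print(f"MTTD: {mttd:.0f} min  MTTR: {mttr:.0f} min")
```

Detection coverage can be tracked in the same spirit, as the share of simulated techniques that produced an alert.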

Actions

  • Develop a documented red/purple team testing plan with defined scope, objectives, and cadence.

  • Partner with external specialists to lead exercises while internal teams participate and learn.

  • Measure and document how quickly attacks are detected and contained.

  • Facilitate tabletop exercises at least annually to evaluate IR coordination and validate communication between SOC, IT, and management teams.

  • Produce after-action reports and hold lessons-learned workshops to ensure improvements are applied.

  • Share results with security leadership and incorporate outcomes into security program planning.

Sustainment Criteria

  • Annual red/purple team exercises occur consistently, with evidence of lessons learned and remediation validated.

  • At least one tabletop exercise is completed annually and reviewed for effectiveness in communication and decision-making.

  • At least one resilience metric is collected for each critical asset class (e.g., endpoints, cloud apps, perimeter).

  • Test results are archived, and year-over-year comparisons demonstrate progress.

  • Security leadership formally reviews results within 30 days of each exercise.

Operational Practices

  • Governance: Security leadership oversees the testing program, ensuring results feed into risk and remediation planning.

  • People: SOC and IR staff are actively engaged in exercises alongside external partners, building internal capability.

  • Process: Post-exercise debriefs and remediation tracking are mandatory steps following each engagement and tabletop exercise.

  • Technology: Basic adversary simulation tools, attack emulation frameworks, and monitoring systems are leveraged during exercises (a minimal harness sketch follows this list).
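
As a rough sketch of the "basic tooling" idea above: benign, pre-approved commands tagged with real MITRE ATT&CK technique IDs, with start times recorded so SOC alert timestamps can later be matched against them. The test cases and logging format are assumptions for illustration, not any specific framework's API:

```python
# Illustrative emulation harness: runs benign, pre-approved commands tagged
# with MITRE ATT&CK technique IDs and records start times so detection
# latency can be measured afterwards. Commands here are harmless examples.
import subprocess
from datetime import datetime, timezone

TEST_CASES = [
    ("T1033", ["whoami"]),    # System Owner/User Discovery
    ("T1082", ["hostname"]),  # System Information Discovery
]

def run_case(technique_id, argv):
    started = datetime.now(timezone.utc).isoformat()
    result = subprocess.run(argv, capture_output=True, text=True)
    return {"technique": technique_id, "started": started,
            "exit_code": result.returncode}

if __name__ == "__main__":
    for technique, command in TEST_CASES:
        # Feed these start times into MTTD measurement against SOC alerts.
        print(run_case(technique, command))
```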

Level 4B: Fully Optimized

Outcomes

  • Red and purple team exercises are performed quarterly or in response to significant changes in the business or IT environment.

  • Adversary simulations are designed and executed using live threat intelligence to mirror realistic attacker behaviors.

  • Tabletop exercises evolve into structured cross-functional simulations conducted alongside red/purple team activities.

  • Resilience metrics (MTTD, MTTR, detection coverage, recurring exposure rates) are collected, trended over time, and compared against defined targets (see the trending sketch after this list).

  • Testing results are incorporated into enterprise risk dashboards reviewed by executives and risk committees.

  • Offensive security outcomes are consistently linked to measurable improvements in detection engineering, SOC performance, and incident response readiness.
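
A minimal sketch of the trending comparison described above, assuming MTTD (in minutes) has been collected per quarter and a target has been defined; the quarterly values are invented for illustration:

```python
# Illustrative trend check: compare quarterly MTTD (minutes) against a
# defined target and report direction of travel. Values are examples only.
TARGET_MTTD_MIN = 60

quarterly_mttd = {
    "2024Q1": 95,
    "2024Q2": 71,
    "2024Q3": 58,
    "2024Q4": 49,
}

for quarter, value in quarterly_mttd.items():
    status = "meets target" if value <= TARGET_MTTD_MIN else "above target"
    print(f"{quarter}: {value} min ({status})")

values = list(quarterly_mttd.values())
trend = "improving" if values[-1] < values[0] else "flat or worsening"
print(f"Year-over-year trend: {trend}")
```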

Actions

  • Establish a quarterly (or event-driven) testing cadence that includes red/purple team exercises and advanced adversary simulations.

  • Integrate live threat intelligence into scenario design, ensuring testing reflects emerging attacker tactics.

  • Conduct tabletop exercises twice a year to test IR procedures, executive escalation paths, and decision-making under simulated stress.

  • Translate simulation findings into concrete detection engineering content (e.g., SIEM rules, EDR analytics, log correlation enhancements), as sketched after this list.

  • Share results with executive stakeholders through risk dashboards, executive briefings, or steering committees.

  • Maintain formalized lessons-learned workshops and assign clear ownership for closing identified detection and response gaps.
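
As one concrete shape the detection-engineering hand-off can take, the sketch below turns a purple-team finding into a starting-point rule in the open Sigma format (emitted as YAML via PyYAML). The finding, field values, and severity level are illustrative and would need tuning against real telemetry before deployment:

```python
# Hedged sketch: convert a purple-team finding (a technique that went
# undetected) into a draft detection rule in the open Sigma format.
# Requires PyYAML (pip install pyyaml). Rule content is illustrative.
import uuid
import yaml

finding = {
    "technique": "T1059.001",             # ATT&CK: PowerShell
    "observable": "encoded PowerShell command line seen in the exercise",
}

rule = {
    "title": "Encoded PowerShell Command Line (from purple-team finding)",
    "id": str(uuid.uuid4()),
    "status": "experimental",
    "tags": [f"attack.{finding['technique'].lower()}"],
    "logsource": {"category": "process_creation", "product": "windows"},
    "detection": {
        "selection": {
            "Image|endswith": "\\powershell.exe",
            "CommandLine|contains": "-enc",
        },
        "condition": "selection",
    },
    "level": "medium",
}

print(yaml.safe_dump(rule, sort_keys=False))
```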

Sustainment Criteria

  • Red/purple team exercises and adversary simulations occur at least quarterly or following significant business/technology changes.

  • Tabletop results are documented and correlated with live test findings to identify systemic or process-level gaps.

  • Multiple resilience metrics, along with remediation cycle times, are consistently collected, benchmarked, and trended to demonstrate progress.

  • Results are integrated into enterprise risk dashboards and reviewed by executive leadership at least quarterly.

  • Evidence exists that testing has directly influenced improvements in SOC operations, IR playbooks, or risk prioritization.

Operational Practices

  • Governance: Offensive security results are tied into enterprise risk registers, with formal oversight from executives or a security steering committee.

  • People: Internal red/purple team capabilities exist, supplemented by trusted external partners for scale or specialized expertise.

  • Process: Findings from adversary simulations and tabletops directly drive SOC tuning, IR enhancements, and risk reporting.

  • Technology: Dedicated adversary simulation platforms, threat intelligence feeds, and telemetry integrations are operational, enabling both manual and automated scenario execution.