# detection-engineering-maturity-matrix

An updated version of this matrix can be found at [detectionengineering.io](https://detectionengineering.io).

- Article: https://kyle-bailey.medium.com/detection-engineering-maturity-matrix-f4f3181a5cc7
- SANS Blue Team Summit talk: https://www.youtube.com/watch?v=Dxccs8UDu6w&list=PLs4eo9Tja8biPeb2Wmf2H6-1US5zFjIxW&index=11

The matrix describes three maturity levels, Defined, Managed, and Optimized, across four categories: People, Processes, Technology, and Detection.

|  | Defined | Managed | Optimized |
| --- | --- | --- | --- |
| **People** | • Ad-hoc team building/managing detection (e.g., as a part-time IR task)<br>• Leadership has a basic understanding of detection processes and challenges, but limited resources or competing priorities may exist<br>• SMEs in none or very few detection domains (e.g., network, host) | • Dedicated individuals performing detection work full time<br>• Leadership advocates for detection, though the size of the team and the resources needed may not be fully understood<br>• SMEs on some tools and log sources; informally defined domain ownership | • Dedicated team with defined SMEs for all detection domains (host, network, application, cloud, etc.)<br>• Leadership advocates for involvement in detection processes across the org, as well as for the necessary tools, licensing, and staffing |
| **Processes** | • Detection strategy and workflow are not well documented or defined<br>• Detection quality depends greatly on the understanding of the individual performing the work<br>• No backlog or prioritization of known gaps<br>• Little or no active maintenance or monitoring of existing detections<br>• Little to no detection-related metrics | • Detection strategy and workflow are defined and followed<br>• Approval and handoff processes are loosely defined<br>• Work is prioritized in an ad-hoc way with little to no input from threat intel or others<br>• Maintenance and monitoring are performed but are ad-hoc and generally reactive<br>• Some metrics exist for categories such as fidelity, MTTD, and automated resolutions (see the MTTD sketch below the matrix) | • Detection strategy is continuously iterated on<br>• Defined review and approval processes exist for new and updated detections, and IR is given final approval authority<br>• Work is prioritized with input from threat intel and technology SMEs<br>• Maintenance and monitoring are continuous, and most issues are identified proactively<br>• KPIs are well defined and include applicable MITRE ATT&CK coverage per environment (e.g., Win, Mac, Corp, Prod) |
| **Technology** | • Visibility is inconsistent, and some sources critical for custom detection may be missing<br>• Timeliness of log sources is not tracked<br>• Little to no detection-as-code principles are followed<br>• No alerts are continuously tested to ensure they are functional | • Most critical log sources are available in the SIEM; some log-health alerting exists<br>• Most log sources are timely (< 5-10 min)<br>• Some detection-as-code principles are followed<br>• A few alerts are continuously tested; telemetry from other sources (SIEM errors, log health) is alerted on | • Detection defines critical log sources and ensures they are present in the SIEM; log health is tracked and alerted on (see the log-health sketch below the matrix)<br>• Detection-as-code is ingrained in the team: version control, review and approval, and static, dynamic, and continuous testing are baked into the deployment pipeline (see the pipeline-check sketch below the matrix)<br>• Almost all detection logic is continuously tested in an automated way |
| **Detection** | • Most detection creation is reactive to incidents or near misses<br>• Detection is tied loosely to MITRE ATT&CK, but there is no formal tracking<br>• Threats are emulated by the detection engineers themselves or using historical data; no active red/purple teaming occurs<br>• Detection is primarily indicator focused; few behavioral TTP detections exist<br>• All detection logic is treated as equal in priority<br>• All alerts must be manually reviewed and handled by the IR team | • Detection creation is more proactive and loosely prioritized on threat intel (known, likely threats)<br>• The MITRE ATT&CK TTPs a detection use case covers are documented, but aggregation of coverage may be manual<br>• Reactive purple team exercises occur, potentially loosely driven by threat intel<br>• More behavior-based detections exist; new detection is mostly TTP focused (where possible)<br>• Some high-fidelity, high-impact detection logic runs in near real time, and its priority is communicated to IR<br>• The team has some capability to send alerts to end users for resolution | • Detection creation is proactively prioritized based on known and active threats to the org as identified by threat intel, with risk input from other teams (e.g., security engineering, architecture, risk)<br>• All use cases are documented with ATT&CK technique IDs (TIDs), and this data can be programmatically retrieved to calculate metrics (see the coverage sketch below the matrix)<br>• Purple team exercises are run continuously to validate and improve detection capabilities<br>• Primarily focused on behavioral/TTP detection logic; ML-based detection is applied where applicable<br>• High-fidelity, high-impact detection logic runs in near real time, and its priority is effectively presented to the IR team<br>• All alerts where the end user has context are directed to them; new alerts are routinely evaluated for potential automated or end-user resolution |
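The matrix leaves metric definitions to each team. As one illustration of the Managed-level metrics item, MTTD (mean time to detect) can be computed as the average gap between when malicious activity occurred and when it was detected. This is a minimal sketch with stand-in incident records; in practice the two timestamps would be exported from your case tracker.

```python
from datetime import datetime, timedelta

# Stand-in incident records; in practice, export these from your case tracker.
incidents = [
    {"occurred": datetime(2023, 5, 1, 9, 0),   "detected": datetime(2023, 5, 1, 9, 12)},
    {"occurred": datetime(2023, 5, 3, 14, 30), "detected": datetime(2023, 5, 3, 15, 45)},
    {"occurred": datetime(2023, 5, 7, 22, 5),  "detected": datetime(2023, 5, 8, 1, 5)},
]

def mean_time_to_detect(incidents) -> timedelta:
    """MTTD = average of (detected - occurred) across incidents."""
    gaps = [i["detected"] - i["occurred"] for i in incidents]
    return sum(gaps, timedelta()) / len(gaps)

print("MTTD:", mean_time_to_detect(incidents))
```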
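Log-source timeliness and health (the "timely (< 5-10 min)" and "tracked and alerted on" items) can likewise be monitored by comparing each source's newest event timestamp against a lag threshold. The source names, thresholds, and `latest_event_time` helper below are hypothetical stand-ins; a real check would query your SIEM's search API.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical per-source lag thresholds, loosely matching the "< 5-10 min"
# timeliness target; tune these to your environment.
LAG_THRESHOLDS = {
    "windows_security": timedelta(minutes=5),
    "cloudtrail":       timedelta(minutes=10),
    "edr_telemetry":    timedelta(minutes=5),
}

def latest_event_time(source: str) -> datetime:
    """Stand-in: in practice, query the SIEM for the newest event per source."""
    sample = {
        "windows_security": datetime.now(timezone.utc) - timedelta(minutes=2),
        "cloudtrail":       datetime.now(timezone.utc) - timedelta(minutes=42),
        "edr_telemetry":    datetime.now(timezone.utc) - timedelta(minutes=1),
    }
    return sample[source]

def stale_sources() -> list[str]:
    """Return sources whose newest event is older than their lag threshold."""
    now = datetime.now(timezone.utc)
    return [
        f"{src}: lagging {now - latest_event_time(src)} (threshold {thr})"
        for src, thr in LAG_THRESHOLDS.items()
        if now - latest_event_time(src) > thr
    ]

for problem in stale_sources():
    print("LOG HEALTH ALERT:", problem)  # route to your alerting pipeline
```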
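The matrix does not prescribe what the "static testing" stage of a detection-as-code pipeline looks like. One minimal sketch, assuming a hypothetical JSON rule schema with `id`, `title`, `query`, `attack_techniques`, and `test_cases` fields, is a CI gate that fails a merge when a rule lacks the metadata that later stages (continuous testing, coverage metrics) depend on.

```python
"""CI sketch: validate detection rule metadata before merge.
The rule schema and the rules/ directory layout are hypothetical."""
import json
import re
import sys
from pathlib import Path

REQUIRED_FIELDS = {"id", "title", "query", "attack_techniques", "test_cases"}
TID_PATTERN = re.compile(r"^T\d{4}(\.\d{3})?$")  # e.g. T1059 or T1059.001

def validate_rule(path: Path) -> list[str]:
    """Return a list of human-readable problems found in one rule file."""
    errors = []
    rule = json.loads(path.read_text())
    for field in REQUIRED_FIELDS - rule.keys():
        errors.append(f"{path.name}: missing required field '{field}'")
    for tid in rule.get("attack_techniques", []):
        if not TID_PATTERN.match(tid):
            errors.append(f"{path.name}: '{tid}' is not a valid ATT&CK technique ID")
    if not rule.get("test_cases"):
        errors.append(f"{path.name}: no test cases; continuous testing cannot cover it")
    return errors

if __name__ == "__main__":
    all_errors = [e for p in Path("rules").glob("*.json") for e in validate_rule(p)]
    for e in all_errors:
        print("FAIL:", e)
    sys.exit(1 if all_errors else 0)  # non-zero exit blocks the merge in CI
```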
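Finally, once every use case carries ATT&CK technique IDs, per-environment coverage can be aggregated programmatically, which is what makes the Optimized-level KPI calculable rather than manual. This sketch reuses the hypothetical schema above, with stand-in rules inlined.

```python
"""Sketch: aggregate ATT&CK coverage per environment from rule metadata."""
from collections import defaultdict

# Stand-in for rules loaded from the repo; each declares the techniques it
# covers and the environments it runs in. Names are illustrative only.
rules = [
    {"id": "win-encoded-ps",  "attack_techniques": ["T1059.001"], "environments": ["win-corp"]},
    {"id": "aws-new-iam-key", "attack_techniques": ["T1098"],     "environments": ["prod-cloud"]},
    {"id": "macos-launchd",   "attack_techniques": ["T1543.004"], "environments": ["mac-corp"]},
]

def coverage_by_environment(rules):
    """Map each environment to the set of ATT&CK techniques covered there."""
    coverage = defaultdict(set)
    for rule in rules:
        for env in rule["environments"]:
            coverage[env].update(rule["attack_techniques"])
    return coverage

for env, techniques in sorted(coverage_by_environment(rules).items()):
    print(f"{env}: {len(techniques)} technique(s) covered -> {sorted(techniques)}")
```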