Expanding coverage and other metrics in summary.yml #257
Conversation
contentctl/objects/abstract_security_content_objects/detection_abstract.py
```python
if len(self.tests) == 0:
    return False
return True
```
Notable change. I think this is fine, as we are enforcing the existence of at least one test where it matters (at model build, for all but a handful of edge cases).
Yeah, I agree. I will close this out, but this is where I think we COULD do with things like defining different Classes for each `type:` or `status:`. That's a huge rework, though, and I think this is a good change for what we have.
The comment above this statement:

```python
# If no tests are defined, we consider it a success for the detection (this detection was
# skipped for testing). Note that the existence of at least one test is enforced by Pydantic
# validation already, with a few specific exceptions
```

also gives good insight into the logic.
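To make the rule concrete, here is a minimal sketch of that behavior. This is a hypothetical, simplified stand-in (a plain dataclass, not contentctl's actual Pydantic model), and the `all_tests_successful` helper is an illustrative name, not the real method:

```python
from dataclasses import dataclass, field

@dataclass
class Detection:
    """Hypothetical, simplified stand-in for contentctl's Pydantic model."""
    name: str
    status: str = "production"  # production / experimental / deprecated
    tests: list = field(default_factory=list)

    def __post_init__(self):
        # The rule discussed above: at least one test is required, except
        # for non-production content (the "handful of edge cases").
        if self.status == "production" and not self.tests:
            raise ValueError(
                f"{self.name}: production detections require at least one test"
            )

    def all_tests_successful(self) -> bool:
        # No tests defined => the detection was skipped for testing, which
        # counts as a success per the comment quoted above.
        if len(self.tests) == 0:
            return True
        return all(t["success"] for t in self.tests)
```

So a deprecated detection with no tests validates cleanly and reports success, while a production detection with an empty test list fails at construction.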
Can you elaborate a little on what you mean by different Classes?
I mean that a Production Detection, an Experimental Detection, and a Deprecated Detection all have different validations/requirements.
Similarly, TTP, Anomaly, Hunting, and Correlation may have different requirements and behaviors.
Having all of these requirements, validations, and behaviors expressed via complicated IF/ELSE logic is not as clean as making Detection an abstract class (and making Production/Experimental/Deprecated and/or TTP/Anomaly/Hunting/Correlation non-abstract classes that define behavior specific to their types).
Let me know if we can mark this as RESOLVED.
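A minimal sketch of that refactor idea, assuming per-status subclasses (class names here are illustrative, not from the contentctl codebase):

```python
from abc import ABC, abstractmethod

class Detection(ABC):
    """Hypothetical abstract base: each production_status subclass owns its
    own validation, instead of sharing branching IF/ELSE logic."""

    def __init__(self, name: str, tests: list):
        self.name = name
        self.tests = tests
        self.validate()

    @abstractmethod
    def validate(self) -> None:
        """Subclasses enforce their own requirements here."""

class ProductionDetection(Detection):
    def validate(self) -> None:
        # Production content must define at least one test.
        if not self.tests:
            raise ValueError(f"{self.name}: production detections need a test")

class DeprecatedDetection(Detection):
    def validate(self) -> None:
        # Deprecated content is exempt from the test requirement.
        pass
```

With this shape, adding an Experimental (or TTP/Anomaly/Hunting/Correlation) variant means adding a subclass with its own `validate`, rather than another branch in shared logic.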
Ahhhh, I see. I like that idea a lot! And I think it makes a lot of sense to compartmentalize validation requirements like that.
Leaving this unresolved intentionally to look back to later (not blocking this PR)
These changes look great, but I owe a follow-up PR with suggestions. As part of the tests, I ran a "test" of 100 deprecated searches and 1 production search. Here is a subset of the summary.yml file, which looks great and now captures much more (and more useful) information. Note the presence of a production search, a deprecated search with a test (which is not RUN because it is status: deprecated), and a deprecated search with NO tests:
```yaml
summary:
  mode: selected
  enable_integration_testing: false
  success: true
  total_detections: 101
  total_tested_detections: 1
  total_pass: 1
  total_fail: 0
  total_skipped: 100
  total_untested: 0
  total_production: 1
  total_experimental: 0
  total_deprecated: 100
  total_manual: 0
  total_other_skips: 0
  success_rate: 100.0%
tested_detections:
- name: O365 ZAP Activity Detection
  type: Anomaly
  production_status: production
  status: pass
  source_category: cloud
  data_source:
  - O365 Universal Audit Log
  search: '`o365_management_activity` Workload=SecurityComplianceCenter Operation=AlertEntityGenerated
    Name="*messages containing malicious*" | fromjson Data | stats count min(_time)
    as firstTime max(_time) as lastTime values(zu) as url values(zfn) as file_name
    values(ms) as subject values(ttr) as result values(tsd) as src_user by AlertId,trc,Operation,Name
    | rename Name as signature, AlertId as signature_id, trc as user | eval action
    = CASE(match(result,"Success"), "blocked", true(),"allowed"), url = split(url,";")
    | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)` | `o365_zap_activity_detection_filter`'
  file_path: detections/cloud/o365_zap_activity_detection.yml
  manual_test: null
  success: true
  tests:
  - name: True Positive Test
    test_type: unit
    success: true
    message: TEST PASSED
    exception: null
    status: pass
    duration: 7.94
    wait_duration: null
    resultCount: '3'
    runDuration: '2.082'
  - name: True Positive Test
    test_type: integration
    success: true
    message: 'TEST SKIPPED: Skipping all integration tests'
    exception: null
    status: skip
    duration: 0
    wait_duration: null
    resultCount: null
    runDuration: null
skipped_detections:
- name: ASL AWS CreateAccessKey
  type: Hunting
  production_status: deprecated
  status: skip
  source_category: deprecated
  data_source: []
  search: '`amazon_security_lake` api.operation=CreateAccessKey http_request.user_agent!=console.amazonaws.com
    api.response.error=null | rename unmapped{}.key as unmapped_key , unmapped{}.value
    as unmapped_value | eval keyjoin=mvzip(unmapped_key,unmapped_value) | mvexpand
    keyjoin | rex field=keyjoin "^(?<key>[^,]+),(?<value>.*)$" | eval {key} = value
    | search responseElements.accessKey.userName = * | rename identity.user.name as
    identity_user_name, responseElements.accessKey.userName as responseElements_accessKey_userName
    | eval match=if(identity_user_name=responseElements_accessKey_userName,1,0) |
    search match=0 | rename identity_user_name as identity.user.name , responseElements_accessKey_userName
    as responseElements.accessKey.userName | stats count min(_time) as firstTime max(_time)
    as lastTime by responseElements.accessKey.userName api.operation api.service.name
    identity.user.account_uid identity.user.credential_uid identity.user.name identity.user.type
    identity.user.uid identity.user.uuid http_request.user_agent src_endpoint.ip |
    `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)` |`asl_aws_createaccesskey_filter`'
  file_path: detections/deprecated/asl_aws_createaccesskey.yml
  manual_test: null
  success: true
  tests:
  - name: True Positive Test
    test_type: unit
    success: true
    message: 'TEST SKIPPED: Detection is non-production (deprecated)'
    exception: null
    status: skip
    duration: 0
    wait_duration: null
    resultCount: null
    runDuration: null
  - name: True Positive Test
    test_type: integration
    success: true
    message: 'TEST SKIPPED: Skipping all integration tests'
    exception: null
    status: skip
    duration: 0
    wait_duration: null
    resultCount: null
    runDuration: null
- name: Windows hosts file modification
  type: TTP
  production_status: deprecated
  status: skip
  source_category: deprecated
  data_source:
  - Sysmon EventID 11
  search: '| tstats `security_content_summariesonly` count min(_time) as firstTime
    max(_time) as lastTime FROM datamodel=Endpoint.Filesystem by Filesystem.file_name
    Filesystem.file_path Filesystem.dest | `security_content_ctime(lastTime)` | `security_content_ctime(firstTime)`
    | search Filesystem.file_name=hosts AND Filesystem.file_path=*Windows\\System32\\*
    | `drop_dm_object_name(Filesystem)` | `windows_hosts_file_modification_filter`'
  file_path: detections/deprecated/windows_hosts_file_modification.yml
  manual_test: null
  success: true
  tests: []
```
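As a sanity check on how I read the totals (this relationship between the fields is my inference from the excerpt above, not documented behavior), the counts in this run are internally consistent:

```python
# Counts copied from the summary.yml excerpt above.
summary = {
    "total_detections": 101,
    "total_tested_detections": 1,
    "total_pass": 1,
    "total_fail": 0,
    "total_skipped": 100,
    "total_untested": 0,
}

# Assumed invariants: tested = pass + fail, and tested + skipped + untested
# accounts for every detection.
assert summary["total_tested_detections"] == summary["total_pass"] + summary["total_fail"]
assert (summary["total_tested_detections"] + summary["total_skipped"]
        + summary["total_untested"]) == summary["total_detections"]

# success_rate appears to be pass / tested, so the 100 skipped deprecated
# searches do not drag it down.
rate = 100.0 * summary["total_pass"] / summary["total_tested_detections"]
print(f"{rate:.1f}%")  # prints 100.0%
```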
recommendations
filtering logic to the original location. Added logic to throw an error and description for why baselines throw an error right now.
to test a production search with Baseline(s) that is not marked as manual_test
contentctl/actions/detection_testing/views/DetectionTestingView.py
We should add a validation on
Responses to Comments
…ontentctl into feature/coverage-report
Bump version in prep for release
Final test went well; 2 attack data download errors, but everything else looks good!
Context
Code changes
ManualTest
Testing
develop
of security_content
Sample
summary.yml