
[FEATURE IDEA] User data store for IP lists, etc. #275

Open
mattdurant opened this issue Jul 27, 2024 · 8 comments
Labels
enhancement New feature or request priority:medium Medium priority ticket

Comments

@mattdurant
Contributor

Is your feature request related to a problem? Please describe.
We have the database sink action, but it is intended for sinking to external DBs. A way for users to store data locally in Tracecat, then query it from other actions, would be really handy. For example: schedule one workflow once a day to pull the latest TOR exit nodes, and have another workflow for incidents/events compare an IP against that list without re-pulling the list on every run.

Other good use cases for having this capability:

  • Pulling the ASN database file from MaxMind and doing lookups where your event data does not supply ASN
  • MaxMind GeoIP database for doing geolocation locally
  • TOR entry/exit node lookup
  • Known VPN node lookup
  • Storing the results of a lookup from one of the API integrations (such as a list of very attacked users, or group members from Okta, AD, etc.) so you can compare against it without incurring a lookup every time that workflow runs.
  • "Bad" domain or IP lists pushed to other systems for web filtering, firewalls, etc.

Describe the solution you'd like
Two actions:

  • INSERT/UPDATE/DELETE to a "user data" table in the local tracecat DB
  • SELECT from a "user data" table in the local tracecat DB
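As a rough illustration only (the table schema and function names below are hypothetical, not Tracecat's actual API), the two proposed actions could be thin wrappers over a local table, sketched here against SQLite:

```python
import sqlite3

# Hypothetical sketch: the two proposed actions as thin wrappers over a
# local "user_data" table. Schema and names are illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE IF NOT EXISTS user_data "
    "(list_name TEXT, key TEXT, value TEXT, PRIMARY KEY (list_name, key))"
)

def upsert_user_data(list_name: str, key: str, value: str) -> None:
    """INSERT/UPDATE action: write one entry into a named list."""
    conn.execute(
        "INSERT INTO user_data (list_name, key, value) VALUES (?, ?, ?) "
        "ON CONFLICT(list_name, key) DO UPDATE SET value = excluded.value",
        (list_name, key, value),
    )

def select_user_data(list_name: str, key: str):
    """SELECT action: look one key up in a named list (None if absent)."""
    row = conn.execute(
        "SELECT value FROM user_data WHERE list_name = ? AND key = ?",
        (list_name, key),
    ).fetchone()
    return row[0] if row else None

# One scheduled workflow populates the list...
upsert_user_data("tor_exit_nodes", "203.0.113.9", "seen 2024-07-27")
# ...and any other workflow queries it without re-downloading the source.
print(select_user_data("tor_exit_nodes", "203.0.113.9"))
```

The key property is that the write and the read happen in different workflows, so the expensive download only occurs on the scheduled ingest.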

Describe alternatives you've considered
Other alternatives:

  • An action to download data to disk and reference it from there, but then the list would have to be loaded or grepped on each lookup, which would be inefficient.
  • Do the download every time an incoming webhook triggers a workflow that needs the data. Very inefficient, and it could hit rate limits for accessing those files.


@topher-lo
Contributor

We have two concepts in the roadmap for August:

  • Events: https://www.tines.com/docs/events/ (ETC: pre-DEFCON)
  • Reference tables: tables of data you can upload, maintain, and use as mapping tables across your workflows. The use case was for IP white/black listing, but they're also great for keeping track of IoCs (ETC: post-DEFCON)

Would you have a mission-critical use-case for this?

@topher-lo topher-lo added the enhancement New feature or request label Jul 27, 2024
@mattdurant
Contributor Author

The reference table feature sounds like what I'm looking for, as long as the upload and maintenance could happen from within a workflow. For example, with a resource like https://iptoasn.com/ I would want a workflow running once a week to ingest it into the table, and then other workflows could use an action to look up an IP in that table, etc.
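The iptoasn.com lookup side reduces to a binary search over ingested (range_start, range_end, ASN) rows once a weekly workflow has pulled them in; a hypothetical Python sketch, with made-up sample rows:

```python
import bisect
import ipaddress

# Hypothetical sketch: iptoasn.com publishes (range_start, range_end, ASN)
# rows. After a weekly ingest, a lookup action only needs a binary search
# over the sorted range starts. These sample rows are made up.
rows = [
    ("1.0.0.0", "1.0.0.255", 13335),
    ("8.8.8.0", "8.8.8.255", 15169),
    ("9.9.9.0", "9.9.9.255", 19281),
]
starts, ends, asns = [], [], []
for start, end, asn in sorted(rows, key=lambda r: int(ipaddress.ip_address(r[0]))):
    starts.append(int(ipaddress.ip_address(start)))
    ends.append(int(ipaddress.ip_address(end)))
    asns.append(asn)

def lookup_asn(ip: str):
    """Return the ASN whose range covers `ip`, or None if no range matches."""
    n = int(ipaddress.ip_address(ip))
    i = bisect.bisect_right(starts, n) - 1  # last range starting at or before ip
    if i >= 0 and starts[i] <= n <= ends[i]:
        return asns[i]
    return None

print(lookup_asn("8.8.8.8"))      # -> 15169
print(lookup_asn("203.0.113.1"))  # -> None
```

With the ranges already in a local table, each enrichment lookup is an O(log n) search instead of a fresh download.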

The use cases I had in mind involve checking incoming alerts against those lists to determine what I want to do with them. The other use cases fall into the enrichment category (looking up ASN or geolocation when the alert data does not contain it).

Mission-critical? Depends on how you determine that. A pretty standard activity for a SOC analyst is to enrich an IP by determining the location and whether it's a known Tor exit node, VPN service, etc. ASN lookups are handy for blocking in some services like Okta where you can block an entire ASN, not just individual IPs.

If we build some integrations for firewall management, I can see the lists being good for maintaining a whitelist of IPs to never block, or firing off extra workflows if you get an alert for an IP on a monitoring list. Extra workflow in this case would be to your EDR to run a script or gather artifacts, etc. Maybe I'm not blocking it outright, but I want to know when it happens to hang other actions off of it.

@topher-lo
Contributor

topher-lo commented Jul 27, 2024

Clarification: by "mission critical", would this be a blocking feature for evaluating/using Tracecat this month and in August? We definitely want to add it (and we plan to build it in August and release before September), as it's something other folks have requested privately as well.

@topher-lo
Contributor

topher-lo commented Jul 27, 2024

> If we build some integrations for firewall management, I can see the lists being good for maintaining a whitelist of IPs to never block, or firing off extra workflows if you get an alert for an IP on a monitoring list. Extra workflow in this case would be to your EDR to run a script or gather artifacts, etc. Maybe I'm not blocking it outright, but I want to know when it happens to hang other actions off of it.

💯 it's slow, wasteful, and flaky to have to pull IoC lists from your TI source on every workflow run. Reference tables are a really nice feature. They are also necessary for building out more "AI-enabled" features (e.g. associating cases and detections with MITRE ATT&CK labels; the labels would be stored as reference tables).

@topher-lo topher-lo added the priority:medium Medium priority ticket label Jul 27, 2024
@mattdurant
Contributor Author

> Clarification: by "mission critical", would this be a blocking feature for evaluating/using Tracecat this month and in August? We definitely want to add it (and we plan to build it in August and release before September), as it's something other folks have requested privately as well.

Not mission-critical by that definition for me. These features would definitely enable use cases we have, but they're not blocking for me.

@mattdurant
Contributor Author

Just wanted to throw a few more use cases on the pile for this one:

  • Syncing group memberships in bulk on a set schedule, so they don't have to be queried from the API or source system every time you want to check a user's membership
  • Maintaining access lists for webhooks. For example, Teams outgoing webhooks include info on the user who sent the message, but Teams itself only lets you scope outgoing webhooks to an entire Team, not a single channel, so a ChatOps workflow could confirm access by comparing user IDs against a list (itself synced in once per day from a group, etc.)
  • Exclusion logic for a webhook: maintaining a list of users, alert types, etc. that you want to send down a different workflow path than your standard one (maybe 1-2 users trigger a lot of false positives on an alert, but for 99% of your users it would be a true positive, so the workflow still has value)
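The exclusion-logic case above is essentially a set-membership check that drives workflow routing; a minimal hypothetical sketch (path names and list entries are made up):

```python
# Hypothetical sketch of the exclusion-logic use case: a reference list of
# noisy users routes their alerts down an alternate path, while everyone
# else takes the standard one. All names and entries are illustrative.
exclusion_list = {"svc-scanner", "jdoe"}  # synced daily from a group, etc.

def route_alert(alert: dict) -> str:
    """Pick a workflow path by checking the alert's user against the list."""
    if alert.get("user") in exclusion_list:
        return "review-queue"   # likely false positive: triage later
    return "standard-response"  # normal handling for the other 99% of users

print(route_alert({"user": "jdoe"}))    # -> review-queue
print(route_alert({"user": "asmith"}))  # -> standard-response
```

The value of keeping the list in a local table is that it can be maintained by one scheduled workflow while every webhook-triggered workflow reads it for free.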

@topher-lo
Contributor

topher-lo commented Aug 1, 2024

Huge. Overall thinking: two-way sync embedded into your SOAR would be extremely useful.

Over time, we could build in two-way sync across different tooling (e.g. your SentinelOne cases), so you don't need to add 2-3 extra actions just for the "update" step post triage and investigation.

@mattdurant
Contributor Author

> Huge. Overall thinking: two-way sync embedded into your SOAR would be extremely useful.
>
> Over time, we could build in two-way sync across different tooling (e.g. your SentinelOne cases), so you don't need to add 2-3 extra actions just for the "update" step post triage and investigation.

Yes, I had a SQL/ODBC integration on my list for getting data in/out of other systems, but having the platform itself sync these user tables with other sources would be awesome, without having to sprinkle those actions into every workflow.
