Right now the nvd_override_candidates_from_cve5 script doesn't provide a nice way for multiple people to enrich data. It streams CVEs serially, outputting them in groups of MAX_BATCH_SIZE.
We need a way for multiple people to work on this data without trampling on each other's work.
I think we could split the output per assigner and date into smaller groups (maybe something like 5 CVEs each) to generate smaller, reviewable draft PRs,
so we'd end up with a bunch of PRs like apache/{created date}/batch1, apache/{created date}/batch2, github_m/{created date}/batch1, etc.
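Roughly what I'm picturing is something like the sketch below. This is just a minimal sketch, not what the script does today; the `assigner` field name and the shape of each candidate record are assumptions standing in for whatever the script already extracts from the CVE 5 data.

```python
from collections import defaultdict
from datetime import date

BATCH_SIZE = 5  # small enough that each draft PR stays reviewable


def group_candidates(candidates):
    """Group override candidates by assigner, then chunk into small batches.

    `candidates` is assumed to be a list of dicts that each carry an
    "assigner" short name (e.g. "apache", "github_m"); that field name is
    a placeholder, not necessarily what the script uses today.
    """
    by_assigner = defaultdict(list)
    for candidate in candidates:
        by_assigner[candidate["assigner"]].append(candidate)

    run_date = date.today().isoformat()
    batches = {}
    for assigner, items in sorted(by_assigner.items()):
        for start in range(0, len(items), BATCH_SIZE):
            batch_no = start // BATCH_SIZE + 1
            # e.g. "apache/2024-05-01/batch1" -- usable as both the output
            # directory and the branch name backing the draft PR
            key = f"{assigner}/{run_date}/batch{batch_no}"
            batches[key] = items[start : start + BATCH_SIZE]
    return batches
```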
The idea would be to pick a PR to start reviewing, assign it to yourself, make adjustments to any of the 5 entries that need adjusting, delete the _notes section, and submit the PR for review.
There are probably some complexities to handle for the next run of the automation if all of the PRs from the previous run haven't been merged or deleted.
The downside is that it slows down my current workflow, which is to blast through the whole queue for a day at once (there are usually fewer than 100 with the current tuning). I guess we could scope the created PRs to the assigners I'm not looking at yet; then there wouldn't be any collisions, and we'd hopefully start to get coverage around the missing areas?
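Scoping a run like that could be as simple as the filter below. This is only a sketch under the same assumptions as above; the `EXCLUDED_ASSIGNERS` values are hypothetical examples, and the `assigner` field name is a placeholder.

```python
# Assigners someone is already working through by hand; the batching run
# skips these so the generated PRs never collide with in-progress work.
EXCLUDED_ASSIGNERS = {"mitre", "redhat"}  # hypothetical example values


def scoped_candidates(candidates, excluded=EXCLUDED_ASSIGNERS):
    """Drop candidates whose assigner is already being reviewed manually."""
    return [c for c in candidates if c["assigner"] not in excluded]
```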
Of course, eventually no one will need to contribute to the nvd overrides repo directly, but rather to the currently non-existent one with the new annotation format; in the meantime, though, something like the above might work.