[24.2] Fix various job concurrency limit issues #19824
Merged
+295
−30
I've added an additional check in job_wrapper.enqueue that only updates jobs below the limit. This should be multi-process / multi-thread safe.
The queries are essentially the same as those done in JobHandler.__check_user_jobs, JobHandler.__check_destination_jobs, etc., but now it's all in a single UPDATE statement. I suppose performance might be a concern; however, we still run through the (cached) checks before we decide to queue the job, so I think the added cost is likely minimal. By integrating the limit check into the query itself, it should become very unlikely that jobs can bypass limits in a multi-handler scenario.
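A minimal sketch of the idea, using a hypothetical table layout and sqlite3 rather than Galaxy's actual schema or SQLAlchemy code: the per-user limit check is a correlated subquery inside the UPDATE, so the count and the state transition happen atomically and two handlers cannot both pass a stale cached check and over-queue.

```python
import sqlite3

# Hypothetical schema for illustration only (not Galaxy's real job table).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE job (id INTEGER PRIMARY KEY, user_id INTEGER, state TEXT)")
conn.executemany(
    "INSERT INTO job (user_id, state) VALUES (?, ?)",
    [(1, "queued"), (1, "queued"), (1, "new"), (2, "new")],
)

USER_JOB_LIMIT = 2  # assumed limit for the sketch


def enqueue(conn, job_id, user_id, limit=USER_JOB_LIMIT):
    """Move a job to 'queued' only while the user is below the limit.

    Because the limit check is embedded in the UPDATE's WHERE clause,
    the database evaluates count-and-transition as one statement.
    """
    cur = conn.execute(
        """
        UPDATE job SET state = 'queued'
        WHERE id = ? AND state = 'new'
          AND (SELECT COUNT(*) FROM job
               WHERE user_id = ? AND state IN ('queued', 'running')) < ?
        """,
        (job_id, user_id, limit),
    )
    conn.commit()
    return cur.rowcount == 1  # True only if the job was actually queued


# User 1 already has 2 queued jobs, so job 3 is rejected; user 2 is queued.
print(enqueue(conn, 3, 1))  # False
print(enqueue(conn, 4, 2))  # True
```

The same pattern works for the per-destination count by adding a destination predicate to the subquery; the key point is that the check and the write share one statement instead of a read-then-write pair.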
c088f9c fixes a bug where a resubmitted job would cause the cached user_job_count_per_destination / user_job_count values to start at 0.