Split workflow executions in half if request body size is too big #481
Conversation
Codecov Report
Attention: Additional details and impacted files

@@ Coverage Diff @@
## release/1.15.0 #481 +/- ##
====================================================
- Coverage 56.75% 56.33% -0.42%
- Complexity 300 301 +1
====================================================
Files 31 31
Lines 1947 1965 +18
Branches 165 168 +3
====================================================
+ Hits 1105 1107 +2
- Misses 762 778 +16
Partials 80 80
Flags with carried forward coverage won't be shown. ☔ View full report in Codecov by Sentry.
Before merging this PR, I will also update the dockstore version to the newest webservice RC version.
    numberOfExecutionsSubmitted.addAndGet(workflowMetricRecords.size());
} catch (ApiException e) {
    logSkippedExecutions(sourceUrlTrsInfo.sourceUrl(), workflowMetricRecords, String.format("Could not submit execution metrics to Dockstore for workflow %s: %s", sourceUrlTrsInfo, e.getMessage()), skippedExecutionsCsvPrinter, false);
    if (e.getCode() == HttpStatus.SC_REQUEST_TOO_LONG) {
        int partitionSize = IntMath.divide(workflowExecutionsToSubmit.size(), 2, RoundingMode.UP);
Is it ever possible that there could be a single execution that's too large to be submitted? If yes, some code to end the recursion would be good.
Oh good point, it's a possibility. I'll change it so the execution is logged and skipped since there's not much that can be done by the aggregator if it's one big execution. At that point, we'll have to increase our request body size limit.
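For illustration only, a minimal sketch of that guard, reusing the names visible in the diff excerpt above (this is a hypothetical sketch, not the actual change):

if (e.getCode() == HttpStatus.SC_REQUEST_TOO_LONG && workflowExecutionsToSubmit.size() == 1) {
    // Hypothetical sketch: a single execution that still triggers 413 cannot be split
    // further, so record it with the existing skip logger and move on.
    logSkippedExecutions(sourceUrlTrsInfo.sourceUrl(), workflowMetricRecords,
            "Single execution exceeds the request body size limit", skippedExecutionsCsvPrinter, false);
    return;
}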
I was thinking you were going to check the request size before submitting the request, but that would be tricky because the nginx request size limit is presumably the compressed size -- so you'd have to JSONify, stringify, compress for each request.
So this approach makes sense!
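For context, a pre-submission check along those lines would look roughly like the sketch below. It is illustrative only, assumes (as noted above) that the nginx limit applies to the compressed payload, and would have to be run for every candidate request body, which is the cost being described:

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPOutputStream;

// Illustrative only: serialize the request body to JSON elsewhere, then gzip it here
// to estimate the size the proxy would see before deciding whether to submit.
static int compressedSizeInBytes(String requestBodyJson) throws IOException {
    ByteArrayOutputStream buffer = new ByteArrayOutputStream();
    try (GZIPOutputStream gzip = new GZIPOutputStream(buffer)) {
        gzip.write(requestBodyJson.getBytes(StandardCharsets.UTF_8));
    }
    return buffer.size(); // compare against the configured request body size limit
}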
I was going to do something similar to that, but Denis suggested this approach, which is much easier 😁
Quality Gate failed
Failed conditions: 0.0% Coverage on New Code (required ≥ 80%)
* Split workflow executions in half if request body size is too big (#481) https://ucsc-cgl.atlassian.net/browse/SEAB-6183
* Update artifacts
* Reset version
* Aggregate execution metrics by status (#482) https://ucsc-cgl.atlassian.net/browse/SEAB-6199
* Aggregate execution metrics by status
* Abstract out ExecutionsRequestBodyAggregator and fix tests
* Update artifacts
* Reset version
Description
This PR handles the 413 "request body size too large" error by halving the number of executions to submit and re-attempting, repeating until the submission succeeds or a different error occurs.
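As a rough illustration of this strategy (including the termination case discussed in the review thread above), here is a minimal sketch; Execution, ApiException, submitBatch, and logAndSkip are hypothetical stand-ins, not the actual aggregator code:

import java.math.RoundingMode;
import java.util.List;
import com.google.common.math.IntMath;
import org.apache.http.HttpStatus;

// Hypothetical sketch of the halving-and-retry strategy described above.
void submitWithSplitting(List<Execution> executions) throws ApiException {
    try {
        submitBatch(executions); // one POST containing the whole batch
    } catch (ApiException e) {
        if (e.getCode() != HttpStatus.SC_REQUEST_TOO_LONG) {
            throw e; // a different error: stop retrying by splitting
        }
        if (executions.size() == 1) {
            // A single execution that still exceeds the limit cannot be split further,
            // so it is logged and skipped; only raising the limit would let it through.
            logAndSkip(executions.get(0));
            return;
        }
        // 413 on two or more executions: split in half (rounding up) and retry each half.
        int half = IntMath.divide(executions.size(), 2, RoundingMode.UP);
        submitWithSplitting(executions.subList(0, half));
        submitWithSplitting(executions.subList(half, executions.size()));
    }
}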
Review Instructions
There should be no 413 Request entity too large errors when ingesting Terra metrics.
Issue
SEAB-6183
Security
If there are any concerns that require extra attention from the security team, highlight them here.
Please make sure that you've checked the following before submitting your pull request. Thanks!
* Run mvn clean install in the project that you have modified (until https://ucsc-cgl.atlassian.net/browse/SEAB-5300 adds multi-module support properly)