Simplify build-and-run-batch-run
design
#16
@jeancochrane Can you take a look at this once you're back and drop a quick time estimate for these changes in this issue? Would also like your thoughts on whether these changes seem reasonable.
As part of this issue, I think we should also rethink the comps compute environment -- perhaps we should separate the comps step from the pipeline and provision a separate set of resources for it, since it takes a long time to execute (and so is expensive) and could potentially have different resource requirements from the main modeling pipeline.
The current `build-and-run-batch-job` Action for submitting Batch jobs to AWS has a few issues:

- It polls the Batch job from a running Actions job (see the proposal to use GitHub webhooks to push job state, #13).
- Manually triggered runs (`workflow_dispatch`) never run the cleanup job, since a PR doesn't get closed to trigger it.

After a bunch of research, I propose the following changes:
Switch to Deployments
Rather than polling the Batch job with a running Actions job, we should take advantage of GitHub Deployments. This would keep the current strategy for submitting jobs, but would offload status reporting to the Batch job itself.
This would involve adding an entrypoint script to all jobs which would auth with GitHub (via a GitHub App JWT), then update the deployment status as the Batch job runs.
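As a sketch of what that entrypoint could do once it has exchanged the App JWT for an installation token, here is a minimal status updater using only the stdlib and the GitHub deployment statuses endpoint. The function and variable names are hypothetical; error handling and the JWT exchange itself are omitted:

```python
import json
import urllib.request

GITHUB_API = "https://api.github.com"

# States accepted by the GitHub deployment statuses API.
VALID_STATES = {"queued", "pending", "in_progress", "success", "failure", "error", "inactive"}


def build_status_payload(state: str, description: str = "") -> dict:
    """Build the JSON body for POST /repos/{repo}/deployments/{id}/statuses.

    GitHub truncates descriptions at 140 characters, so we do the same.
    """
    if state not in VALID_STATES:
        raise ValueError(f"invalid deployment state: {state}")
    return {"state": state, "description": description[:140]}


def update_deployment_status(token: str, repo: str, deployment_id: int,
                             state: str, description: str = "") -> dict:
    """POST a status update for a deployment; `token` is an installation
    access token obtained from the App JWT exchange (hypothetical caller)."""
    url = f"{GITHUB_API}/repos/{repo}/deployments/{deployment_id}/statuses"
    req = urllib.request.Request(
        url,
        data=json.dumps(build_status_payload(state, description)).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.github+json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

The entrypoint would call `update_deployment_status` with `in_progress` before starting the model run, then `success` or `failure` on exit.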
This completely obviates the need for the polling logic and solves the 6-hour Actions job timeout problem. It also allows us to...
Use different deploy environments to manage job types
Instead of controlling job resources by editing the workflow YAML, we can set up different deploy environments, each with an associated job size. This has many advantages:
I envision this as basically a dropdown with the following environments:
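A `workflow_dispatch` input could drive that dropdown; a rough sketch follows, where the environment names are placeholders rather than the final set:

```yaml
on:
  workflow_dispatch:
    inputs:
      deploy_env:
        description: "Deploy environment (controls Batch job size)"
        type: choice
        # Placeholder names -- the real set would match the four
        # permanent job queues/compute environments.
        options:
          - small
          - medium
          - large
          - comps
```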
Switch to static job queue and compute envs
We currently instantiate the job queue and compute environment for each PR. However, this seems unnecessarily complicated and error-prone. Instead, I propose we create 4 permanent job queues/compute environments, one for each of the environments above.
This way, the only thing we need to Terraform for each workflow is the job definition, based on the built container and deploy env. We can further simplify the cleanup step to run after receiving a deployment status update from AWS. It would then only need to delete the job definition, since the other resources are static.
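Under this scheme, the per-workflow Terraform shrinks to roughly the following sketch; the variable names, naming scheme, and resource sizing here are all hypothetical:

```hcl
# The queue and compute environment are permanent, so we only look them up.
data "aws_batch_job_queue" "this" {
  name = var.deploy_env # one of the four static environments
}

# The job definition is the only resource created (and later cleaned up)
# per workflow run.
resource "aws_batch_job_definition" "model_run" {
  name = "model-run-${var.pr_number}" # hypothetical naming scheme
  type = "container"

  container_properties = jsonencode({
    image = var.container_image
    resourceRequirements = [
      { type = "VCPU", value = "4" },      # would be sized per deploy env
      { type = "MEMORY", value = "8192" },
    ]
  })
}
```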