How does CI/CD fit into all of this? #59
Indeed, infrastructure CI/CD is an area of active development to identify the right approach. Our latest thinking on this is summarized in https://gruntwork.io/guides/automations/how-to-configure-a-production-grade-ci-cd-setup-for-apps-and-infrastructure-code/, which has some answers to your questions (but not all). The key idea here is that module releases should be completely independent of actual infrastructure releases, and you should be able to test and validate your infrastructure modules without deploying to your live environments. In that world, the flow would be more like:
You can then configure automation pipelines for releasing this code depending on your tolerance. For example, a common workflow would be:
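As a sketch of the "release modules independently" idea above, a CI job could tag each merged module change with a new version, which live environments then pin to explicitly. A minimal Python sketch of the version-bump step (the `bump_tag` helper is hypothetical, not part of any Gruntwork tooling):

```python
import re

def bump_tag(latest_tag: str, level: str) -> str:
    """Compute the next semantic-version tag for a module release.

    E.g. bump_tag("v0.3.1", "minor") -> "v0.4.0".
    """
    m = re.fullmatch(r"v(\d+)\.(\d+)\.(\d+)", latest_tag)
    if not m:
        raise ValueError(f"not a semver tag: {latest_tag}")
    major, minor, patch = (int(g) for g in m.groups())
    if level == "major":
        major, minor, patch = major + 1, 0, 0
    elif level == "minor":
        minor, patch = minor + 1, 0
    elif level == "patch":
        patch += 1
    else:
        raise ValueError(f"unknown bump level: {level}")
    return f"v{major}.{minor}.{patch}"
```

Live environments would then reference the module at an explicit tag (e.g. `?ref=v0.4.0` in a Terragrunt `source` URL), so deploying infrastructure and releasing modules stay decoupled.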
@yorinasub17 Thanks for your answer. However, our experience is that our modules very often have a lot of (expensive) dependencies. For example, we use Azure user-managed identities to handle passwordless authentication, but they need a special role assignment scoped to our Kubernetes cluster (AKS) to do their job and be allowed to talk with our applications. So if we wanted to test this (otherwise very simple) module in isolation, we would need to spin up and configure an entire AKS cluster just for the test and tear it down afterwards. Considering pipeline agents, and the fact that multiple teams actively work on all the modules, it seems very hard to follow the setup you described.
Yes as mentioned above, infrastructure CI/CD is an area of active development and research. You are not going to find a single solution that works for all use cases. It all comes down to what tradeoffs you are willing to make based on your level of tolerance for risk. Given that, for the situation you described, you have a few options:
This setup actually works much better for larger teams, since module testing can happen in isolation (because the tests don't conflict). The more you share resources, the more contention you will have across teams. The advantage of always deploying new resources (even if expensive) is the confidence that teams can work in isolation, as a "unit", without worrying about test contention and conflicts polluting their environments while developing the modules.
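The "always deploy fresh resources" pattern described above usually hinges on giving every test run a unique namespace, similar to the random IDs Terratest generates. A minimal Python sketch of the idea (function name and prefix are illustrative):

```python
import uuid

def unique_test_id(prefix: str = "terratest") -> str:
    """Generate a unique identifier so each test run gets its own resources.

    Each run deploys into its own namespace / resource group, so concurrent
    runs from different teams never collide with each other.
    """
    return f"{prefix}-{uuid.uuid4().hex[:8]}"

# Two concurrent test runs get distinct, non-conflicting resource names.
name_a = unique_test_id()
name_b = unique_test_id()
```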
I think this is still an open problem today.
Given that modules are stored in another repo, integrating all of this with Jenkins would look like: `terraform plan` runs on the PR. We can suppose that in step 3, `terragrunt plan` is run under `/non-prod/us-east-1/stage`, because that's where we deploy stuff under development. So we know the environment, but not the module: is it mysql or webserver-cluster? Furthermore, if we want to deploy stuff in `/non-prod/us-east-1/qa`, how are we going to do that via Jenkins? Should I have a separate job just to run the plan/apply commands? The same issue arises: how does Jenkins know which module to plan/apply?
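One common way to answer "how does Jenkins know which module to plan?" is to derive the target directories from the files the PR touches: any changed path under an environment tree identifies a module directory to run `terragrunt plan` in. A minimal Python sketch, assuming the `/non-prod/us-east-1/stage/<module>/terragrunt.hcl` layout from the question (the helper name is hypothetical):

```python
def dirs_to_plan(changed_files, env_root="non-prod/us-east-1/stage"):
    """Map files changed in a PR to the terragrunt directories to plan.

    Assumes each module deployed in an environment lives in its own
    subdirectory containing a terragrunt.hcl (e.g. .../stage/mysql/).
    """
    dirs = set()
    for path in changed_files:
        if path.startswith(env_root + "/"):
            # The first path component after the environment root
            # is the module directory (e.g. "mysql").
            module = path[len(env_root) + 1:].split("/", 1)[0]
            dirs.add(f"{env_root}/{module}")
    return sorted(dirs)
```

A Jenkins job could feed it the output of `git diff --name-only` and then run `terragrunt plan` in each returned directory, which avoids maintaining one hard-coded job per module.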