A long time ago, in a data center far, far away, an ancient group of powerful beings known as sysadmins used to deploy infrastructure manually. Every server, every route table, every database configuration, and every load balancer was created and managed by hand. It was a dark and fearful age (the configuration drift age): fear of downtime, fear of accidental misconfiguration, fear of slow and fragile deployments, and fear of what would happen if the sysadmins fell to the dark side (i.e., took a vacation). The good news is that thanks to the DevOps Rebel Alliance, there is now a better way to do things: Terraform.
The manual configuration used in the past led to configuration drift between servers and hardware devices; as a result, the number of bugs increased, developers shrugged and said "It works on my machine", and outages and downtime became more frequent.
The syntax of Terraform configuration is called HashiCorp Configuration Language (HCL).
It is meant to strike a balance between being human-readable and editable and being machine-friendly (Terraform also reads JSON).
VPC -> Virtual Private Cloud
terraform init -> command to install the required provider plugins used to run the IaC
terraform plan ->
The plan command lets you see what Terraform will do before unleashing it onto the world.
Resources with a plus (+) sign are going to be created, resources with a minus (-) sign are going to be deleted, and resources with a tilde (~) sign are going to be modified.
terraform apply
Command used to execute the actions specified in the configuration files.
I’m on pages 52 -> 54 of 152
Terraform up and running
ami = "ami-2757f631"
ami = "ami-b374d5a5"
The file used for this example is: main.0.tf
terraform destroy -> Command used to destroy or remove the instance specified in the main.tf file.
Pluralsight - MyChannels -> Company
ingress -> rules for traffic coming into the instance
egress -> rules for traffic heading out from the instance
The lifecycle parameter is an example of a meta-parameter, a parameter that exists on just about every resource in Terraform.
You can add a lifecycle block to any resource to configure how that resource should be created, updated, or destroyed.
One of the available lifecycle settings is create_before_destroy, which, if set to true, tells Terraform to always create a replacement resource before destroying the original resource. For example, if you set create_before_destroy to true on an instance, Terraform will first create the new instance, wait for it to come up, and only then remove the old EC2 instance.
If you set this parameter to true on a specific resource, you also have to set it to true on every resource it depends on.
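As a minimal sketch (the resource name and AMI reuse values from elsewhere in these notes), a launch configuration that is replaced before the old one is destroyed:
resource "aws_launch_configuration" "example" {
  image_id      = "ami-2757f631"
  instance_type = "t2.micro"

  # Create the replacement before destroying the original, so the ASG
  # that references this launch configuration is never left without one.
  lifecycle {
    create_before_destroy = true
  }
}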
ASG -> AutoScalingGroup
The ELB has a trick: it can periodically check the health of your EC2 instances and, if an instance is unhealthy, it will automatically stop routing traffic to it.
To enable this feature we should add the health_check block.
To allow the health check requests we need to modify the ELB’s security group to allow outbound requests, as sketched below.
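A sketch of both pieces, assuming the var.server_port variable and resource names from the running example (the availability zone, ports, and thresholds are assumptions):
resource "aws_elb" "example" {
  name               = "terraform-example-elb"
  availability_zones = ["us-east-1a"]  # assumed AZ for brevity
  security_groups    = ["${aws_security_group.elb.id}"]

  listener {
    lb_port           = 80
    lb_protocol       = "http"
    instance_port     = "${var.server_port}"
    instance_protocol = "http"
  }

  # Ping each instance over HTTP; after 2 failed checks in a row the
  # ELB stops routing traffic to that instance.
  health_check {
    healthy_threshold   = 2
    unhealthy_threshold = 2
    timeout             = 3
    interval            = 30
    target              = "HTTP:${var.server_port}/"
  }
}

resource "aws_security_group" "elb" {
  name = "terraform-example-elb"

  # Inbound HTTP from anywhere.
  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # Outbound rule so the ELB can send the health check requests.
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}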
I’m on page 68 (70 of 152)
+ aws_autoscaling_group.example
+ aws_elb.example
+ aws_launch_configuration.example
+ aws_security_group.elb
+ aws_security_group.instance
Terraform loads all the files with the .tf extension, and only one configuration block is allowed per provider.
Terraform will not recurse into any sub-directories. Each file is loaded in alphabetical order, and the contents of each configuration file are appended into one configuration. Terraform also has an override file construct. Override files are merged rather than appended.
Terraform constructs a DAG (Directed Acyclic Graph) of that configuration. (This technique is similar to what Puppet uses.)
terraform apply -var-file="secret.tfvars" (command used to load a file with variables)
Define outputs -> used to show values such as IP addresses to the user after the infrastructure creation finishes.
For example, we could also query an output parameter after the infrastructure has been created:
terraform output ip
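The corresponding output block could look like this (the resource name aws_instance.example is an assumption):
output "ip" {
  # Shown after apply, and retrievable later with: terraform output ip
  value = "${aws_instance.example.public_ip}"
}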
/*
A Multiline
comment
*/
# Single line comments (With hash sign in front of the text)
Values are assigned with the syntax key = value (whitespace doesn’t matter).
Strings are in double quotes.
Variables should be referenced using the var prefix, e.g. ${var.myVariable}.
Attributes of other resources:
"${aws_instance.web_server.id}"
Will interpolate the ID attribute from the aws_instance resource named web_server.
The most important things that we are going to configure with Terraform are the resources.
Resources are the most important component of the infrastructure.
A resource can be a physical server, a virtual machine, a container, an email provider, a DNS record, or a database provider.
The combination of a name and type must be unique.
The lifecycle attribute has 3 options:
-create_before_destroy (bool): this flag is used to ensure the replacement of a resource is created before the original instance is destroyed. (As an example, this can be used to create a new DNS record before deleting the old one.)
-prevent_destroy (bool): this flag adds extra protection against the destruction of a resource. When it is set to true, any plan that would destroy the resource returns an error message.
-ignore_changes (list of strings): customizes how diffs are evaluated for resources, allowing individual attributes to be ignored through changes. As an example, it can be used to ignore dynamic changes made to the resource by external sources.
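A sketch of the third option (the resource and the ignored attribute are illustrative assumptions):
resource "aws_instance" "example" {
  ami           = "ami-2757f631"
  instance_type = "t2.micro"

  tags {
    Name = "managed-partly-outside-terraform"
  }

  lifecycle {
    # External tooling may rewrite tags; don't treat that as drift.
    ignore_changes = ["tags"]
  }
}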
When a resource depends on a module, everything in that module must be created before the resource is created.
When you run Terraform commands, Terraform looks for files with the .tf extension in the current directory where you run it from. It doesn’t take files from subdirectories. Be careful: Terraform will load all files with the .tf extension if you run it without arguments.
plan -> shows us what changes Terraform will make to our infrastructure
apply -> applies changes to our infrastructure
destroy -> destroys the infrastructure built with Terraform
terraform init -> this should be the first command used; it installs any required provider.
terraform plan
terraform apply
terraform show -> used to show the status of the infrastructure
terraform state list -> used to list the resources tracked in the state in a human-readable format (the state itself is saved as JSON).
terraform state show
terraform validate -> command used to validate the syntax of the Terraform configuration files and report any errors.
terraform fmt -> command used to format the configuration files neatly.
terraform plan -target aws_launch_configuration.example -> plans changes only for the targeted resource (and its dependencies).
terraform plan -destroy -out base-destroy-`date +'%s'` -> saves a plan that will destroy all resources as base-destroy-<epochtime>.
terraform graph -> command used to show the dependency graph of the Terraform infrastructure
terraform get -> command used to download and update the modules referenced in the configuration
terraform graph > myFile.dot -> then we could use the Graphviz app to view the Terraform graph (or the online web version - http://webgraphviz.com/); this tool allows us to create an SVG image from the structure.
terraform destroy -target=aws_instance.my_Instance
terraform destroy -> used to delete all the infrastructure created.
The terraform apply command, when you execute it, reads your templates and tries to create the infrastructure exactly as you defined it in your templates. We will go deeper into how exactly Terraform processes templates in a following chapter.
When the terraform apply execution finishes, it shows a summary indicating the number of resources that were added, changed, and destroyed, and also the outputs, if you have any.
Providers are what you use to configure access to the service you create resources for.
Currently Terraform has more than 40 providers. This impressive list includes not only the major cloud providers such as AWS, Google Cloud, and Microsoft Azure, but also smaller ones like Fastly (CDN - Content Delivery Network), Heroku, CloudStack, etc.
Show images of the free Amazon services. (But this is not Terraform.)
AWS is not free to use, but luckily we can use the Free Tier that is provided for free, with certain limitations, during one year for new accounts. For example, we can use a single EC2 instance for 750 hours a month for 12 months for free, as long as it is of the t2.micro type.
AMI (Amazon Machine Image): the source image from which an instance is created.
You can create your own, use the ones provided by AWS, or select one from the community at the AWS Marketplace.
Security Group (SG): like a firewall. You can attach multiple SGs to an instance and define inbound and outbound rules. It allows you to configure access not only for IP ranges, but also for other security groups.
“Using the root account’s access keys is considered a bad practice when you work with AWS. You should create a new user and assign permissions with IAM.”
At the following link we can read about best practices for AWS:
https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html
There are three ways to create infrastructure on AWS:
the manual way through the Web UI, using the AWS CLI from a bash console (scripts), and, last, using Terraform templates (HCL - HashiCorp Configuration Language).
To save and recover the credentials in Terraform we can use static parameters, environment variables, or a credentials file.
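A sketch of the three options for the AWS provider (keys, paths, and the profile name are placeholders; remember that only one configuration block per provider is allowed, so pick one approach):
provider "aws" {
  region = "us-east-1"

  # Option 1: static parameters (avoid committing real keys).
  # access_key = "PLACEHOLDER_ACCESS_KEY"
  # secret_key = "PLACEHOLDER_SECRET_KEY"

  # Option 3: a shared credentials file with a named profile.
  # shared_credentials_file = "/home/user/.aws/credentials"
  # profile                 = "myprofile"
}

# Option 2: environment variables, picked up automatically:
#   export AWS_ACCESS_KEY_ID=...
#   export AWS_SECRET_ACCESS_KEY=...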
Resources are components of your infrastructure. A resource can be something as complex as a virtual server or something as simple as a DNS record. Each resource belongs to a provider, and the resource type is prefixed with the provider name.
E.g.
resource "provider-name_resource-type" "resource-name" {
  parameter-name = parameter-value
}
resource "aws_security_group" "instance" {
name = "terraform-example-instance"
ingress {
from_port = "${var.server_port}"
to_port = "${var.server_port}"
protocol = "tcp"
cidr_blocks = [“0.0.0.0/0”]
}
}
The combination of resource-type and resource name must be unique in your template.
An Elastic IP address should always be attached to an active instance, because the cost of an Elastic IP without an instance associated to it is higher than usual.
The basic object that we create is an instance; this object needs at least two required parameters: ami and instance_type.
The tags parameter is a map of tags for the instance.
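A minimal sketch (the AMI is the us-east-1 image used earlier in these notes):
resource "aws_instance" "example" {
  ami           = "ami-2757f631"  # required
  instance_type = "t2.micro"      # required

  tags {
    Name = "terraform-example"  # optional map of tags
  }
}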
Working with state in terraform
+ If you are curious about what the terraform apply command output shows, we should learn how Terraform state works.
+ The state file (terraform.tfstate) saves all the resources you’ve created. Terraform obtains all the information possible about each resource and writes it to the state file. This makes the state file so important that you never want to lose it after you create your environment. Losing the state file means losing control of your environment through Terraform. If you lose this file for any reason, you need to remove all the created resources manually, which can be a tedious task if you have a huge infrastructure.
IaC -> Infrastructure as Code.
Terraform is focused on IaC (infrastructure as code); meanwhile, tools like Puppet, Chef, and Ansible are excellent at managing applications and services, but they are focused on software configuration rather than on building infrastructure components.
This assumes that you are using the "us-east-1" region and your Amazon account has a default VPC (Virtual Private Cloud).
If you are not in the us-east-1 region, then replace the AMI with the appropriate Ubuntu AMI for your region.
aws_eip -> The aws_eip type manages Amazon Elastic IP addresses.
Terraform also has the concept of tainting and untainting resources. Tainting a resource marks a single resource to be destroyed and recreated on the next apply. It doesn’t change the resource itself, but rather the current state of the resource. Untainting reverses the marking.
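For example (the resource address is an assumption):
terraform taint aws_instance.example -> the instance will be destroyed and recreated on the next apply
terraform untaint aws_instance.example -> removes the taint mark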
The file can be called anything. We’ve just named it variables.tf for convenience and identification. Remember all files that end in .tf will be loaded by Terraform.
Let’s create a few variables in this file now.
Variables can come in a number of types:
• Strings — String syntax. Can also be Booleans: true or false.
• Maps — An associative array or hash-style syntax.
• Lists — An array syntax.
Let’s take a look at some string variables first.
We could also consider the meta-parameters, such as count, a kind of variable.
Terraform variables are created with a variable block. They have a name and an optional type, default, and description.
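A sketch with one variable of each type (names and defaults are made up for illustration; the AMI-to-region mapping is not authoritative):
variable "server_port" {
  description = "The port the server will use for HTTP requests"
  type        = "string"
  default     = "8080"
}

variable "amis" {
  description = "AMI per region (illustrative IDs)"
  type        = "map"
  default = {
    "us-east-1" = "ami-2757f631"
    "us-west-2" = "ami-b374d5a5"
  }
}

variable "availability_zones" {
  type    = "list"
  default = ["us-east-1a", "us-east-1b"]
}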
terraform apply
Error: provider.aws: multiple configurations present; only one configuration is allowed per provider.
provider definition img -> Terraform book page 44 of 331
resource definition page 46 of 331 Terraform book
It is always recommended to execute plan before apply, to avoid breaking your infrastructure.
The terraform apply command prompts for a confirmation before building the infrastructure; you can skip it by adding the -auto-approve flag. (The plan output behaves similarly; handle it carefully because it contains your secrets in plain text.)
terraform plan -target aws_type.resource_name (command used to plan only one resource)
Terraform will also parse any environment variables that are prefixed with TF_VAR_.
For example, if Terraform finds an environment variable called TF_VAR_access_code=xyz,
it will use the value of the environment variable as the string value of the access_code variable.
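A sketch (the variable name matches the example above):
# In the shell, before running terraform:
#   export TF_VAR_access_code=xyz

variable "access_code" {
  type = "string"
  # No default: Terraform reads TF_VAR_access_code from the environment.
}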
prevent_destroy (S3 resource)
prevent_destroy is the second lifecycle setting you’ve seen (the first was create_before_destroy). When you set prevent_destroy to true on a resource, any attempt to delete that resource (e.g., by running terraform destroy) will cause Terraform to exit with an error. This is a good way to prevent accidental deletion of an important resource, such as this S3 bucket, which will store all of your Terraform state. Of course, if you really mean to delete it, you can just comment that setting out.
aws_s3_bucket.terraform_state: aws_s3_bucket.terraform_state: the plan would destroy this resource, but it currently has lifecycle.prevent_destroy set to true. To avoid this error and continue with the plan, either disable lifecycle.prevent_destroy or adjust the scope of the plan using the -target flag.
lifecycle {
  prevent_destroy = true  # this option throws the previous error message
}
Performing blue-green deployments with Terraform (create two parallel environments, the new one and the old one, switch to the new one, and then remove the old one).
The idea behind blue-green deployment is that instead of updating existing instances of an application, you create a complete, brand-new production environment side by side with the existing one. Then, if it looks good, you switch the traffic to this new environment. If nothing breaks, you delete the old one. The new environment is called green, while the existing one is blue. As you might have guessed, the idea goes hand in hand with the Immutable Infrastructure concept and extends it beyond a single server to complete clusters of machines.
We could use terrahelp encrypt to encrypt the state file.
RDS (relational database service)
STORING MODULES REMOTELY
Remember the source attribute of the module?
module "mighty_trousers" {
source = "./modules/application"
Well, it turns out that it doesn't have to be a path to a local directory. In fact, there are multiple
supported sources for modules:
GitHub
BitBucket
Generic Git and Mercurial repositories
HTTP URLs
S3 buckets
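A sketch of a remote source (the GitHub path is a placeholder, not a real repository):
module "mighty_trousers" {
  # GitHub shorthand; the repo is fetched by `terraform get`.
  # A specific tag or branch can be pinned with ?ref=, e.g.
  # source = "github.com/example-org/terraform-application-module?ref=v0.1.0"
  source = "github.com/example-org/terraform-application-module"
}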
AWS Auto Scaling groups (ASG) allow you to adjust your infrastructure to the load. They can automatically grow as your usage increases and shrink back to a certain number of machines when the traffic spike is gone. With an ASG, you don't create instances by hand: you only need to specify a launch configuration, consider it a template from which instances will be created. In addition, an ASG allows configuring scaling based on metrics from CloudWatch or the Simple Queue Service (SQS). We won't use this feature though, as we are looking only for a blue-green deployment implementation.
An Auto Scaling group can have an ELB in front of it, so the ELB balances all the traffic to the instances in this group.
If we want to implement blue-green deployment, we have to use it.
It is also a good thing if we want to achieve Immutable Infrastructure: now we don't have any choice other than to replace complete machines. We could even remove the key pair attribute from the launch configuration to ensure that instances are only replaced and never updated.
terraform graph | dot -Tpng > graph.png
The create_before_destroy boolean parameter allows us to tell Terraform to first create the new resource and then destroy the previous one in case of recreation.
The prevent_destroy parameter, also boolean, marks a resource as indestructible and can save you some nerves. One example of a resource that can benefit from this option is an Elastic IP: a dedicated IP address inside AWS that you can attach to an EC2 instance.
There is another advantage of graphs inside Terraform: they allow nodes to be processed in parallel if they don't depend on each other. By default, up to 10 graph nodes can be processed in parallel. You can specify the -parallelism flag for the apply, plan, and destroy commands, but it's rather an advanced operation, and in most cases you don't need it.
1.- Shared storage for state files: to be able to use Terraform to update your infrastructure, each of your team members needs access to the same Terraform state files. That means you need to store those files in a shared location.
2.- Locking state files: as soon as data is shared, you run into a new problem: locking. Without locking, if two team members are running Terraform at the same time, you may run into race conditions as multiple Terraform processes make concurrent updates to the state files, leading to conflicts, data loss, and state file corruption.
3.- Isolating state files: when making changes to your infrastructure, it’s a best practice to isolate different environments. For example, when making a change in the staging environment, you want to be sure that you’re not going to accidentally break production. But how can you isolate your changes if all of your infrastructure is defined in the same Terraform state file?
4.- Secrets: all data in Terraform state files is stored in plaintext. This is a problem because certain Terraform resources need to store sensitive data. For example, if you use the aws_db_instance resource to create a database, Terraform will store the username and password for the database in a state file with no encryption whatsoever. Storing plaintext secrets anywhere is a bad idea, including version control. This is an open issue in the Terraform community, and we only have partial solutions available at this point, as discussed below.
We typically recommend Amazon S3 for the following reasons:
It’s a managed service, so you don’t have to deploy and manage extra infrastructure to use it.
It’s designed for 99.999999999% durability and 99.99% availability, which effectively means it’ll never lose your data or go down.
It supports encryption, which reduces worries about storing sensitive data in state files. Anyone on your team who has access to that S3 bucket will be able to see the state files in an unencrypted form, which is not ideal, but at least it’s encrypted at rest in S3 and in transit thanks to SSL.
It supports versioning, so every revision is stored, and you can always roll back to an older version if something goes wrong.
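Putting that together, a sketch of an S3 backend block (bucket, key, and region reuse the values from the terraform init example later in these notes):
terraform {
  backend "s3" {
    bucket  = "terraform-test-jose"
    key     = "terraform.tfstate"
    region  = "us-east-1"
    encrypt = true  # encrypt the state at rest in S3
  }
}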
TERRAFORM has support for secrets with VAULT. Read about it.
Unconfiguring a Backend
If you no longer want to use any backend, you can simply remove the configuration from the file. Terraform will detect this like any other change and prompt you to reinitialize.
Terraform does not have for-loops or other traditional procedural logic built into the language, so this syntax will not work. However, almost every Terraform resource has a meta-parameter you can use called count. This parameter defines how many copies of the resource to create. Therefore, you can create three IAM users as follows:
resource "aws_iam_user" "example" {
  count = 3
  name  = "my-user-${count.index}"
}
When you use the splat character, you get back a list, so you need to wrap the output variable with brackets:
output "all_arns" {
  value = ["${aws_iam_user.example.*.arn}"]  # brackets because the splat returns a list
}
Sets of counted resources using splat (count splat ??)
Terraform doesn’t support if-statements, so this code won’t work. However, you can accomplish the same thing by using the count parameter and taking advantage of two properties:
1. In Terraform, if you set a variable to a boolean true (that is, the word true without any quotes around it), it will be coerced into a 1, and if you set it to a boolean false, it will be coerced into a 0.
2. If you set count to 1 on a resource, you get one copy of that resource; if you set count to 0, that resource is not created at all.
Putting these two ideas together:
count = "${var.enable_autoscaling}" (Terraform Up and Running, page 134 of 205 (114))
count = "${var.enable_autoscaling ? 1 : 0}"
count = "${var.environment == "production" ? 4 : 2}"
cidr = "${var.region != "us-east-1" ? "172.16.0.0/12" : "172.18.0.0/12"}"
It is also possible to emulate an if/else statement using count in combination, e.g. count = "${1 - var.enable_autoscaling}" on the "else" branch resource.
count has limitations: for example, it is not possible to use dynamic values, e.g. values coming from a database.
CloudWatch alarm that goes off when CPU utilization in the cluster is over 90% during a 5-minute period:
resource "aws_cloudwatch_metric_alarm" "high_cpu_utilization" {
alarm_name = "${var.cluster_name}-high-cpu-utilization"
namespace = "AWS/EC2"
metric_name = "CPUUtilization"
dimensions = {
AutoScalingGroupName = "${aws_autoscaling_group.example.name}"
}
comparison_operator = "GreaterThanThreshold"
evaluation_periods = 1
period = 300
statistic = "Average"
threshold = 90
unit = "Percent"
}
This works fine for a CPU utilization alarm, but what if you wanted to add another alarm that goes off when CPU credits are low?
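A sketch of such an alarm (the threshold and statistic are assumptions; the CPUCreditBalance metric only exists on burstable instance types like t2.micro):
resource "aws_cloudwatch_metric_alarm" "low_cpu_credit_balance" {
  alarm_name  = "${var.cluster_name}-low-cpu-credit-balance"
  namespace   = "AWS/EC2"
  metric_name = "CPUCreditBalance"

  dimensions = {
    AutoScalingGroupName = "${aws_autoscaling_group.example.name}"
  }

  # Goes off when the minimum credit balance in a 5-minute period
  # drops below 10 credits.
  comparison_operator = "LessThanThreshold"
  evaluation_periods  = 1
  period              = 300
  statistic           = "Minimum"
  threshold           = 10
  unit                = "Count"
}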
SEE TERRAFORM GOTCHAS
Terraform configurations are just a language that makes API calls to a provider (in this case, to AWS).
Note that the region is configurable; you need to take care when choosing an AMI ID because AMI IDs are different in each region, even for the same image.
To avoid this we could use the aws_ami data source search filter option:
data "aws_ami" "ubuntu" {
most_recent = true
owners = ["099720109477"] # Canonical
filter {
name = "virtualization-type"
values = ["hvm"]
}
filter {
name = "architecture"
values = ["x86_64"]
}
filter {
name = "image-type"
values = ["machine"]
}
filter {
name = "name"
values = ["ubuntu/images/hvm-ssd/ubuntu-xenial-16.04-amd64-server-*"]
}
}
It is also possible to use filtering to select objects created elsewhere (e.g., with a different script), for example a fixed public IP address.
Example:
data "aws_vpc" "environment" {
filter {
name = "tag:Name"
values = "${var.environment}"
}
}
Interpolation syntax -> "${var.myVar}"
The interpolation syntax allows you to reference variables and attributes of resources, call functions, etc.
You can perform simple math operations in interpolations, such as ${count.index + 1}, and you can also use conditionals.
You can use the splat syntax to get a list of all the attributes: ${data.aws_subnet.example.*.cidr_block}
resource "aws_instance" "web" {
subnet = "${var.env == "production" ? var.prod_subnet : var.dev_subnet}"
}
The supported operators are:
Equality: == and !=
Numerical comparison: >, <, >=, <=
Boolean logic: &&, ||, unary !
A common use case for conditionals is to enable/disable a resource by conditionally setting the count:
resource "aws_instance" "vpn" {
count = "${var.something ? 1 : 0}"
}
The supported built-in functions can be seen at:
https://www.terraform.io/docs/configuration/interpolation.html#abs-float-
Terragrunt is a tool that we could use to manage the state files.
We are going to use Test Kitchen as the tool for infrastructure tests.
InSpec + Test Kitchen
RECOMMENDATIONS:
The Golden Rule of Terraform:
The master branch of the live repository should be a 1:1 representation of what’s actually deployed in production.
1.- Always use separate state files for every environment.
2.- Tag all your resources and have a kill script.
3.- If you have problems, check for duplicated objects in Amazon.
4.- You should have a folder structure to store all the Terraform scripts (including symlinks for the parameters configuration).
5.- If you can’t spin up a full copy of your infra and test it, you don’t actually have “infrastructure as code”.
If you go with the more-environments option, you may find that managing a large number of environments by hand can become tedious and error prone. Once you’re at this stage, you may want to start using a deployment pipeline.
To create infrastructure tests we are going to use InSpec (this tool is also used to create Chef tests).
Search and add a template folder structure as an example for terraform scripts.
Examples terraform up and running page: 185 of 205 (165)
Command used to configure the backend so the state files are uploaded into an Amazon S3 bucket:
terraform init -backend-config="bucket=terraform-test-jose" -backend-config="key=terraform.tfstate" -backend-config="region=us-east-1" -backend-config="encrypt=true"
The Terraform book, page 166 of 331, has an interesting example (how to execute a script remotely).
I’m on page 96 (226) of 331 of the Terraform book.
Page 233 - Chapter: Building a multi-environment architecture.
Page 72 of 205: image of the load balancer and autoscaling example.
Page 66 of 205 -> there are examples of variables for Visio.
I’m on page 103 of 205 of Terraform Up and Running.
See chapter three (page 102 of 231 in the PDF) of the book Getting Started with Terraform.
Watch chapter 6 -> looks good. How to perform changes without interrupting the service.
I’m on page 161 of 231 of Getting Started with Terraform.
JIC:
https://www.terraform.io/docs/configuration/resources.html
Good tutorial about Terraform (the examples work):
https://blog.gruntwork.io/an-introduction-to-terraform-f17df9c6d180
Getting Started with Terraform contains more advanced info about how to merge state files when adding new features.
Collaborative infrastructure -> how to manage templates and collaborate among many people.
Testing infrastructure.
Chapter 5 from the book - Getting started with terraform.
Are we going to use Terraform with some other tools like Puppet, Chef, etc.?
We could use the following tools with Terraform:
Chef, Ansible, and Puppet: For configuration management
Inspec and TestKitchen: For testing
Terragrunt and Terraforming: As a helper for Terraform operations
Git, git-crypt, GitLab, and GitLab CI: For teamwork
S3 and Consul: For storage
Bash and Ruby: For scripting
terraform state rm module.foo (command used to remove a loaded module from the state); sometimes it is better to remove the state files.
/**********************
Error: Error applying plan:
1 error(s) occurred:
* aws_s3_bucket.terraform_state (destroy): 1 error(s) occurred:
* aws_s3_bucket.terraform_state: Error deleting S3 Bucket: BucketNotEmpty: The bucket you tried to delete is not empty. You must delete all versions in the bucket.
status code: 409, request id: C1CE21A371D3B8DA, host id: z9uzvd/U6opqbfvu2li9RRzp+cq3LGSHRvrhNA5kCCFYukMTc/F9jHAXyhANdCpLbU+NqZSZ+8g= "terraform-test-jose"
Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.
Solved by updating the prevent_destroy parameter to false (and emptying the bucket, since S3 refuses to delete a non-empty versioned bucket).
*********************/
If for some reason there are errors with the tfstate bucket file:
Download the file and rename it to terraform.tfstate (because it was downloaded as terraform.json).
Remove the .terraform configuration folder.
Comment out the bucket state configuration in the Terraform script.
Change the prevent_destroy parameter to false.
Execute the command terraform init.
Execute the command terraform destroy.
Every time we change the S3 state configuration we need to remove the hidden .terraform configuration folder and execute the terraform init command again.
https://www.terraform.io/intro/getting-started/variables.html
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
inspec init profile example_profile
inspec exec example_profile
inspec exec sample_test.rb
inspec exec /Users/jose.choque/Documents/GitHub/terraform-training/basic.instance.good/specs/base_spec.rb -t ssh://[email protected] -i /Users/jose.choque/AWS-KeyPairs/JoseTest_KeyPair.pem
inspec shell -t ssh://[email protected] -i /Users/jose.choque/AWS-KeyPairs/JoseTest_KeyPair.pem
*******************************************
Use Multiple Types of Automated Tests (page 173 of 205, Terraform Up and Running)
There are several different types of automated tests you may write for your Terraform code, including unit tests, integration tests, and smoke tests. Most teams should use a combination of all three types of tests, as each type can help prevent different types of bugs.
Unit tests
Unit tests verify the functionality of a single, small unit of code. The definition of unit varies, but in a general-purpose programming language, it’s typically a single function or class. The equivalent in Terraform is to test a single module. For example, if you had a module that deployed a database, then you may want to add tests that run each time someone modifies this module to verify that you can successfully run terraform apply on it, that the database boots successfully after running terraform apply, that you can communicate with the database and store data in it, and so on.
Integration tests
Integration tests verify that multiple units work together correctly. In a general-purpose programming language, you might test that several functions or classes work together correctly. The equivalent in Terraform is to test that several modules work together. For example, let’s say you have code in your “live” repo that combines one module that creates a database, another module that creates a cluster of web servers, and a third module that deploys a load balancer. You may want to add integration tests to this repo that run after every commit to verify that terraform apply completes without errors, that the database, web server cluster, and load balancer all boot correctly, that you can talk to the web servers via the load balancer, and that the data that comes back is coming from the database.
Smoke tests
Smoke tests run as part of the deployment process, rather than after each commit. You typically have a set of smoke tests that run each time you deploy to staging and production that do a sanity check that the code is working as expected. For example, when an app is booting, the app might run a quick smoke test to ensure it can talk to the database and that it is able to receive HTTP requests. If either of these checks fails, the app can abort the entire deployment before it causes any problems.
Always run plan before apply. -> terraform plan -out=example.plan
Always test Terraform changes in staging before production.
Testing in staging is especially important because Terraform does not roll back changes in the case of errors. If you run terraform apply and something goes wrong, you have to fix it yourself. This is easier and less stressful to do if you catch the error in staging rather than production.
This workflow should work for most use cases, but there are three caveats to be aware of that can significantly affect the workflow:
• Some types of Terraform changes can be automated
• Some types of Terraform changes can cause conflicts
• Larger teams may need to use a deployment pipeline
Test Kitchen works by creating the infrastructure we want to test, connecting to it, and running a series of tests to validate the right infrastructure has been built. The integration with Terraform comes via an add-on called kitchen-terraform. The kitchen-terraform add-on is installed via a gem. We’re going to need to install some prerequisites to get Test Kitchen up and running.
Prerequisites
The biggest prerequisite is that Test Kitchen requires SSH access to any hosts upon which you want to run tests. It currently doesn’t support using a bastion or bounce host to get this connectivity. This means that unless you have SSH access to your hosts, Test Kitchen will not be able to run tests on hosts in private subnets or without public IP addresses.
This is a big limitation right now. There’s an issue open on GitHub to address using a bastion host to run tests, but nothing’s available yet.
Test Kitchen is written in Ruby and requires Ruby 2.3.1 or later installed.