diff --git a/runbooks/source/node-group-changes.html.md.erb b/runbooks/source/node-group-changes.html.md.erb
index e6e56e32..06bc18bf 100644
--- a/runbooks/source/node-group-changes.html.md.erb
+++ b/runbooks/source/node-group-changes.html.md.erb
@@ -1,7 +1,7 @@
 ---
 title: Handling Node Group and Instance Changes
 weight: 54
-last_reviewed_on: 2023-12-15
+last_reviewed_on: 2024-06-03
 review_in: 6 months
 ---
 
diff --git a/runbooks/source/recycle-all-nodes.html.md.erb b/runbooks/source/recycle-all-nodes.html.md.erb
index 6c8344e9..b850d59e 100644
--- a/runbooks/source/recycle-all-nodes.html.md.erb
+++ b/runbooks/source/recycle-all-nodes.html.md.erb
@@ -1,7 +1,7 @@
 ---
 title: Recycling all the nodes in a cluster
 weight: 255
-last_reviewed_on: 2024-04-10
+last_reviewed_on: 2024-06-03
 review_in: 6 months
 ---
 
@@ -15,6 +15,8 @@
 When a launch template is updated, this will cause all of the nodes to recycle.
 
 ## Recycling process
 
+Avoid letting Terraform apply the changes, because Terraform can start by deleting all of the current nodes and then recreating them, causing an outage for users. Instead, create a new node group through the AWS console; [instructions can be found here](https://runbooks.cloud-platform.service.justice.gov.uk/node-group-changes.html).
+
 AWS will handle the process of killing the nodes with the old launch configurations and launching new nodes with the latest launch configuration. AWS will initially spin up some extra nodes (for around a 60 node cluster it will spin up 8 extra) to provide spaces for pods as old nodes are cordoned, drained and deleted.
 
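
As an illustrative aside (not part of the diff above): the new paragraph recommends creating the replacement node group through the AWS console. A roughly equivalent AWS CLI sketch is shown below; the cluster name, node group name, subnets, node role ARN, launch template ID and scaling numbers are all placeholder values, not values taken from this PR or from any Cloud Platform cluster.

```bash
# Create a replacement node group alongside the existing one (placeholder values).
aws eks create-nodegroup \
  --cluster-name my-cluster \
  --nodegroup-name ng-2024-06-03 \
  --subnets subnet-aaaa1111 subnet-bbbb2222 subnet-cccc3333 \
  --node-role arn:aws:iam::123456789012:role/my-node-instance-role \
  --launch-template id=lt-0abc123def4567890,version=3 \
  --scaling-config minSize=1,maxSize=80,desiredSize=60

# Check that the new node group reports ACTIVE before cordoning and
# draining nodes in the old node group.
aws eks describe-nodegroup \
  --cluster-name my-cluster \
  --nodegroup-name ng-2024-06-03 \
  --query 'nodegroup.status'
```

Waiting for the new node group to become ACTIVE before touching the old one mirrors the intent of the added paragraph: capacity is added first, so draining the old nodes does not cause an outage for users.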