We've just added new hosts to our existing VMware cluster running ESXi 8.0. Migrating existing VMs from the old hosts to the new hosts and importing the TF resources back into state causes the existing disks to be labelled as 'deleted', and new disks are created on apply. The underlying datastores are the same.

Versions

Terraform: 1.7.5
vSphere Provider: 2.7.0

As a workaround we've had to ignore changes to disk.
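Concretely, that workaround looks something like the sketch below. The resource name and elided arguments are illustrative, not copied from the real config:

```hcl
# Workaround sketch: suppress drift on the disk block so apply stops
# planning a delete/recreate for disks that only moved hosts.
resource "vsphere_virtual_machine" "vm" {
  # ... existing VM configuration (name, resource_pool_id, disks, etc.) ...

  lifecycle {
    # Ignore every attribute of every disk block; remove this once the
    # underlying state/provider issue is resolved.
    ignore_changes = [disk]
  }
}
```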
I had that happen too; usually it means the disk attributes changed so much between states that the provider thinks they're all new. Whenever I have a VM move to a new vCenter/datacenter/cluster/etc., I do the following:
- update the Terraform files to reflect the VM's new config
- back up the tfstate (just in case), then either clear out or remove the offending resource from the tfstate (depending on whether you have one tfstate per resource or a massive tfstate for all resources)
- re-import the VM (terraform import 'module.xxx.vsphere_virtual_machine.vm[0]' '//NewDatacenter/vm/VMFolder(s)/xxx')
- re-apply the plan (it should only update the clone section and each disk's keep_on_remove attribute)
I would test this with a fresh local tfstate and a test VM to get comfortable with the process before touching live resources; a sketch of the full sequence follows below.
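Roughly, the whole sequence looks like this against a local state file. The module address and inventory path are the placeholders from the steps above, not real values:

```sh
# 1. Back up the current state, just in case (local backend assumed).
cp terraform.tfstate terraform.tfstate.backup

# 2. Remove the stale VM resource from state.
terraform state rm 'module.xxx.vsphere_virtual_machine.vm[0]'

# 3. Re-import it via its inventory path in the new datacenter.
terraform import 'module.xxx.vsphere_virtual_machine.vm[0]' \
  '//NewDatacenter/vm/VMFolder(s)/xxx'

# 4. Plan first and confirm that only the clone section and each disk's
#    keep_on_remove attribute change, then apply.
terraform plan
terraform apply
```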