Activate can be called several times #19
base: master
Conversation
Bare minimum for a plugin, no operation implemented. Signed-off-by: Yann Dirson <[email protected]>
SR can then be disposed of using "xe sr-forget" and "zpool destroy". Underlying zpool can be either separately created and given to SR.create, or a set of devices can be specified to create a new zpool on. Notes: - `SR.attach` will be necessary to complete `sr-create`: while from a calling user PoV things appear to have gone fine, SMlog reveals a call to `SR.attach`, which naturally failed as not yet implemented. Any attempt to `xe pbd-plug` naturally also fails. See xapi-project/xen-api#5532. - there is no plugin feature to shield from SR.* calls Originally-by: Matias Ezequiel Vara Larsen <[email protected]> Signed-off-by: Yann Dirson <[email protected]>
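Since the plugin currently shells out to the ZFS tools (see the FIXME about the official library further down), the two creation paths above can be sketched as command builders. This is an illustrative sketch only; the helper names (`make_zpool_create`, `make_zpool_destroy`) are not the plugin's actual API.

```python
# Hypothetical helpers building the `zpool` argv lines the SR lifecycle
# relies on; the real plugin spawns these commands as subprocesses.
ZPOOL_BIN = "zpool"

def make_zpool_create(pool_name, devices):
    """argv for creating a new zpool on a set of devices (SR.create path 2)."""
    return [ZPOOL_BIN, "create", pool_name] + list(devices)

def make_zpool_destroy(pool_name):
    """argv used after `xe sr-forget` to dispose of the underlying pool."""
    return [ZPOOL_BIN, "destroy", pool_name]
```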
This is the first mandatory method needed by `xe sr-plug` and `xe sr-destroy`. `xe sr-plug` still does not complete, as it makes an `SR.stat` call afterwards. Originally-by: Matias Ezequiel Vara Larsen <[email protected]> Signed-off-by: Yann Dirson <[email protected]>
This will be required by `SR.stat`, which is needed by `xe sr-plug`. Signed-off-by: Yann Dirson <[email protected]>
This allows the "PBD plug on create" to work and completes the `xe sr-create` implementation. Note that since we cannot unplug a PBD yet, the "xe sr-forget && zpool destroy" sequence we used until now cannot work any more until that part is done. This is also another mandatory method needed by "xe sr-destroy". Originally-by: Matias Ezequiel Vara Larsen <[email protected]> Signed-off-by: Yann Dirson <[email protected]>
This is also another mandatory method needed by `xe sr-destroy`. Originally-by: Matias Ezequiel Vara Larsen <[email protected]> Signed-off-by: Yann Dirson <[email protected]>
The essentials of the SR lifecycle are now done. Originally-by: Matias Ezequiel Vara Larsen <[email protected]> Signed-off-by: Yann Dirson <[email protected]>
Originally-by: Matias Ezequiel Vara Larsen <[email protected]> Signed-off-by: Yann Dirson <[email protected]>
This activates the default implementations from libcow (insertion in vdi_custom_keys table in SR metabase). `Volume.set` is necessary for `xe vdi-create`, and `Volume.unset` is provided for symmetry. Originally-by: Matias Ezequiel Vara Larsen <[email protected]> Signed-off-by: Yann Dirson <[email protected]>
Note that if Volume.set were not implemented yet (or indeed cannot complete and returns an error), we face xapi-project/xen-api#5530: if the Volume.set call fails (to add the vdi-type=user custom-key record), then the VDI object is not created (though, as this happens after Volume.create, the zvol and DB entries were indeed created, and their removal was not requested by SMAPI after the error). But since all the VDI information that counts is here, "xe sr-scan" will bring those VDIs to life (after implementation of SR.ls). Originally-by: Matias Ezequiel Vara Larsen <[email protected]> Signed-off-by: Yann Dirson <[email protected]>
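For illustration, the `vdi_custom_keys` table in the SR metabase that Volume.set/Volume.unset maintain could look like the sketch below. The schema and column names here are assumptions, not the actual libcow schema.

```python
# Minimal sketch (assumed schema) of a vdi_custom_keys table as the libcow
# default Volume.set/Volume.unset implementations might use it.
import sqlite3

def open_metabase():
    db = sqlite3.connect(":memory:")  # in-memory stand-in for the SR metabase
    db.execute("CREATE TABLE vdi_custom_keys (vdi TEXT, key TEXT, value TEXT,"
               " PRIMARY KEY (vdi, key))")
    return db

def volume_set(db, vdi, key, value):
    # upsert, so setting the same key twice just overwrites the value
    db.execute("INSERT OR REPLACE INTO vdi_custom_keys VALUES (?, ?, ?)",
               (vdi, key, value))

def volume_unset(db, vdi, key):
    db.execute("DELETE FROM vdi_custom_keys WHERE vdi = ? AND key = ?",
               (vdi, key))
```

With this shape, the `vdi-type=user` record mentioned above is just one row keyed by the VDI.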
Originally-by: Matias Ezequiel Vara Larsen <[email protected]> Signed-off-by: Yann Dirson <[email protected]>
This is necessary e.g. for `xe vdi-copy into-vdi-uuid=` Originally-by: Matias Ezequiel Vara Larsen <[email protected]> Signed-off-by: Yann Dirson <[email protected]>
This is needed for volume copy, for `Datapath.attach` (used by `xe vbd-plug`), and such. Originally-by: Matias Ezequiel Vara Larsen <[email protected]> Signed-off-by: Yann Dirson <[email protected]>
Originally-by: Matias Ezequiel Vara Larsen <[email protected]> Signed-off-by: Yann Dirson <[email protected]>
FIXME: it seems it is valid to reduce an offline volume's size without any special flag, causing data loss, whereas raising an online volume's size, which should not be an issue, is refused by default and does not seem possible to force with `online=true`. Originally-by: Guillaume Thouvenin <[email protected]> Signed-off-by: Yann Dirson <[email protected]>
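A safer guard for resize, matching the concern in the FIXME above, would refuse the data-losing shrink unless explicitly forced. This is a hypothetical sketch, not the plugin's Volume.resize implementation; only the `zfs set volsize=` command itself is real (the dataset name would be appended by the caller).

```python
# Illustrative resize guard: shrinking a zvol loses data, so require an
# explicit force flag for it, while growing is always allowed.
def check_resize(current_size, new_size, force=False):
    if new_size < current_size and not force:
        raise ValueError("shrinking a volume loses data; pass force=True")
    # caller appends the dataset name to this argv
    return ["zfs", "set", "volsize=%d" % new_size]
```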
…escription Originally-by: Matias Ezequiel Vara Larsen <[email protected]> Signed-off-by: Yann Dirson <[email protected]>
Snapshot creation Signed-off-by: Yann Dirson <[email protected]>
SR.ls for `xe sr-scan` Originally-by: Matias Ezequiel Vara Larsen <[email protected]> Signed-off-by: Yann Dirson <[email protected]>
Volume.stat, e.g. for `xe vdi-destroy` or `xe vdi-copy into-vdi-uuid=`. FIXME: `xe vdi-copy` then assumes we have a blockdev available for the snapshot, which is wrong - we'd have to either stream the data or create a clone volume.
"Clone a snapshot" only, especially needed for `xe snapshot-revert`. Originally-by: Matias Ezequiel Vara Larsen <[email protected]> Signed-off-by: Yann Dirson <[email protected]>
Allow destroying a VDI when it has been cloned, by first "zfs-promoting" each of those clones to remove all dependencies on it. Makes sure to log the pool state before and after promotions, to be able to understand any problem with this logic. Signed-off-by: Yann Dirson <[email protected]>
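The promotion dance described above can be sketched as follows, with command execution abstracted behind a callable so the logic is visible without ZFS. The function and parameter names are illustrative, not the plugin's actual code.

```python
# Sketch of "promote every dependent clone, then destroy the volume".
def destroy_with_promotions(volume, clones_of, run_cmd):
    """Destroy `volume` after removing all clone dependencies on it.

    clones_of: mapping from snapshot name to the list of clones based on it.
    run_cmd:   callable executing one argv (e.g. a subprocess wrapper).
    """
    for snapshot, clones in clones_of.items():
        for clone in clones:
            # `zfs promote` reverses the origin/clone dependency, so the
            # clone no longer depends on our volume's snapshot
            run_cmd(["zfs", "promote", clone])
    run_cmd(["zfs", "destroy", "-r", volume])
```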
Allow destruction of a snapshot with no dependency. Signed-off-by: Yann Dirson <[email protected]>
This is obviously not a long-term solution; we should transition to calling into the official ZFS Python library instead of spawning processes. Signed-off-by: Yann Dirson <[email protected]>
Signed-off-by: Yann Dirson <[email protected]>
When a snapshot with no child clones would block volume destruction, create such a child clone. FIXME: nothing will ever attempt to destroy such child clones, some sort of GC is required.
This gives us a single log where all VDIs are registered, to get the UUID / volume_id mapping for quicker debugging.
Just returning if:

```python
if vdi.active_on:
    raise util.create_storage_error(
        "SR_BACKEND_FAILURE_24",
        ["VDIInUse", "The VDI is currently in use"])
return
```
Shouldn't we still raise an error if the VDI is active on a different domain than the requested one?
not sure if it is possible. It looks scary, but before this I didn't think that we could plug two VBDs for the same VDI... @Wescoeur what should we do? Ideas?
This will allow management layers to present a suitable UI for the cases they see fit without having to modify the interface. Signed-off-by: Yann Dirson <[email protected]>
As it is idempotent, we just need to return if the VDI is already active. Signed-off-by: Guillaume <[email protected]>
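The idempotent activation this PR aims for can be sketched as below. The `VDI` class is a stand-in for the plugin's VDI record; only the `active_on` attribute mirrors the snippet quoted in the review thread. Whether activation on a *different* domain should instead raise is the open question discussed above.

```python
# Sketch of an idempotent Volume activation: calling it again is a no-op.
class VDI:
    def __init__(self, active_on=None):
        self.active_on = active_on  # domain the VDI is active on, or None

def activate(vdi, domain):
    if vdi.active_on:
        return  # already active: activation is idempotent, nothing to do
    vdi.active_on = domain
```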
Force-pushed from a202e04 to b61a02d
Force-pushed from 59c85fa to 7fce50b