Activate can be called several times #19

Open
gthvn1 wants to merge 34 commits into master from gtn-make-activate-idempotent
Conversation

@gthvn1 (Contributor) commented on Mar 26, 2024

As activation is idempotent, we just need to return if the VDI is already active.

ydirson added 26 commits March 25, 2024 15:14
Bare minimum for a plugin, no operation implemented.

Signed-off-by: Yann Dirson <[email protected]>
SR can then be disposed of using "xe sr-forget" and "zpool destroy".

Underlying zpool can be either separately created and given to
SR.create, or a set of devices can be specified to create a new zpool
on.

Notes:
- `SR.attach` will be necessary to complete `sr-create`: while from a
  calling user PoV things appear to have gone fine, SMlog reveals a
  call to `SR.attach`, which naturally failed as not yet implemented.
  Any attempt to `xe pbd-plug` naturally also fails.  See
  xapi-project/xen-api#5532.
- there is no plugin feature to shield from SR.* calls

Originally-by: Matias Ezequiel Vara Larsen <[email protected]>
Signed-off-by: Yann Dirson <[email protected]>
This is the first mandatory method needed by `xe sr-plug` and
`xe sr-destroy`.

`xe sr-plug` still does not complete, as it makes an `SR.stat` call
afterwards.

Originally-by: Matias Ezequiel Vara Larsen <[email protected]>
Signed-off-by: Yann Dirson <[email protected]>
This will be required by `SR.stat`, which is needed by `xe sr-plug`.

Signed-off-by: Yann Dirson <[email protected]>
This allows the "PDB plug on create" to work and completes the `xe
sr-create` implementation.

Note that since we cannot unplug a PBD yet the "xe sr-forget && zpool
destroy" we used up till now cannot work any more until that part is
done.

This is also another mandatory method needed by "xe sr-destroy".

Originally-by: Matias Ezequiel Vara Larsen <[email protected]>
Signed-off-by: Yann Dirson <[email protected]>
This is also another mandatory method needed by `xe sr-destroy`.

Originally-by: Matias Ezequiel Vara Larsen <[email protected]>
Signed-off-by: Yann Dirson <[email protected]>
The essentials of the SR lifecycle are now done.

Originally-by: Matias Ezequiel Vara Larsen <[email protected]>
Signed-off-by: Yann Dirson <[email protected]>
Originally-by: Matias Ezequiel Vara Larsen <[email protected]>
Signed-off-by: Yann Dirson <[email protected]>
This activates the default implementations from libcow (insertion in
vdi_custom_keys table in SR metabase).  `Volume.set` is necessary for
`xe vdi-create`, and `Volume.unset` is provided for symmetry.

Originally-by: Matias Ezequiel Vara Larsen <[email protected]>
Signed-off-by: Yann Dirson <[email protected]>
Note that if Volume.set were not implemented yet (or indeed cannot
complete and returns an error), we face xapi-project/xen-api#5530: if
the Volume.set call fails (to add the vdi-type=user custom-key
record), then the VDI object is not created (though, as this happens
after Volume.create, the zvol and DB entries were indeed created, and
their removal was not requested by SMAPI after the error).  But since
all the VDI information that counts is there, `xe sr-scan` will bring
those VDIs back to life (after implementation of SR.ls).

Originally-by: Matias Ezequiel Vara Larsen <[email protected]>
Signed-off-by: Yann Dirson <[email protected]>
Originally-by: Matias Ezequiel Vara Larsen <[email protected]>
Signed-off-by: Yann Dirson <[email protected]>
This is necessary e.g. for `xe vdi-copy into-vdi-uuid=`

Originally-by: Matias Ezequiel Vara Larsen <[email protected]>
Signed-off-by: Yann Dirson <[email protected]>
This is needed for volume copy, `Datapath.attach` for `xe vbd-plug`, and such.

Originally-by: Matias Ezequiel Vara Larsen <[email protected]>
Signed-off-by: Yann Dirson <[email protected]>
Originally-by: Matias Ezequiel Vara Larsen <[email protected]>
Signed-off-by: Yann Dirson <[email protected]>
FIXME: it seems valid to reduce an offline volume's size without any
special flag, causing data loss, whereas raising the size of an online
volume, which should not be an issue, is refused by default and does
not seem possible to force with `online=true`.

Originally-by: Guillaume Thouvenin <[email protected]>
Signed-off-by: Yann Dirson <[email protected]>
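Not part of the PR, but to make the FIXME above concrete: a minimal sketch of how a resize path driven through the `zfs` CLI could guard against accidental shrinking while allowing growth. The helper names and the `force` flag are assumptions, not this plugin's actual API.

```python
# Hypothetical sketch: refuse to shrink a zvol unless explicitly forced,
# while allowing it to grow.  Helper names are illustrative only.
import subprocess


def zvol_size(zvol):
    """Return the current volsize of a zvol, in bytes."""
    out = subprocess.check_output(
        ["zfs", "get", "-Hp", "-o", "value", "volsize", zvol], text=True)
    return int(out.strip())


def resize_zvol(zvol, new_size, force=False):
    """Resize a zvol, refusing to shrink it unless force=True."""
    current = zvol_size(zvol)
    if new_size < current and not force:
        raise ValueError(
            "refusing to shrink %s from %d to %d bytes (data loss)"
            % (zvol, current, new_size))
    subprocess.check_call(["zfs", "set", "volsize=%d" % new_size, zvol])
```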
…escription

Originally-by: Matias Ezequiel Vara Larsen <[email protected]>
Signed-off-by: Yann Dirson <[email protected]>
Snapshot creation

Signed-off-by: Yann Dirson <[email protected]>
SR.ls for `xe sr-scan`

Originally-by: Matias Ezequiel Vara Larsen <[email protected]>
Signed-off-by: Yann Dirson <[email protected]>
Volume.stat, e.g. for `xe vdi-destroy` or `xe vdi-copy into-vdi-uuid=`

FIXME: `xe vdi-copy` then assumes we have a blockdev available for the
snapshot, which is wrong - we'd have to either stream the data or create
a clone volume.
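As an illustration of the "create a clone volume" option mentioned in the FIXME (not code from this PR): a sketch of exposing a snapshot through a temporary clone, since a zvol snapshot has no block device of its own by default (snapdev=hidden). Names and paths are assumptions.

```python
# Hypothetical sketch: give a copy operation a readable block device for a
# snapshot by cloning it into a temporary zvol, then destroying the clone
# once the copy is finished.
import subprocess


def expose_snapshot(snapshot, tmp_clone):
    """Clone a zvol snapshot (e.g. 'tank/sr/vol1@snap1') into a temporary
    zvol and return the path of its block device node."""
    subprocess.check_call(["zfs", "clone", snapshot, tmp_clone])
    return "/dev/zvol/%s" % tmp_clone


def release_snapshot(tmp_clone):
    """Drop the temporary clone once the copy is done."""
    subprocess.check_call(["zfs", "destroy", tmp_clone])
```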
"Clone a snapshot" only, especially needed for `xe snapshot-revert`.

Originally-by: Matias Ezequiel Vara Larsen <[email protected]>
Signed-off-by: Yann Dirson <[email protected]>
Allow destroying a VDI when it has been cloned, by first "zfs-promoting"
each of those clones to remove all dependencies on it.

Makes sure to log the pool state before and after promotions, to be able
to understand any problem with this logic.

Signed-off-by: Yann Dirson <[email protected]>
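For illustration only, a minimal sketch of the promote-then-destroy idea described in this commit, assuming a subprocess-based wrapper around the `zfs` CLI (the function names are hypothetical, not the plugin's real helpers):

```python
# Hypothetical sketch: promote every clone depending on a volume's
# snapshots so the volume itself can then be destroyed.
import subprocess


def clones_of(zvol):
    """List the clones depending on any snapshot of the given zvol."""
    out = subprocess.check_output(
        ["zfs", "get", "-Hp", "-r", "-t", "snapshot",
         "-o", "value", "clones", zvol], text=True)
    clones = []
    for line in out.splitlines():
        if line and line != "-":
            clones.extend(line.split(","))
    return clones


def promote_clones_and_destroy(zvol):
    for clone in clones_of(zvol):
        # After promotion the clone owns the formerly shared snapshots,
        # removing its dependency on the volume we want to destroy.
        subprocess.check_call(["zfs", "promote", clone])
    subprocess.check_call(["zfs", "destroy", "-r", zvol])
```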
Allow destruction of a snapshot with no dependency.

Signed-off-by: Yann Dirson <[email protected]>
This is obviously not a long-term solution; we should transition to
calling into the official ZFS Python library instead of spawning processes.

Signed-off-by: Yann Dirson <[email protected]>
When a snapshot with no child clones would block volume destruction,
create such a child clone.

FIXME: nothing will ever attempt to destroy such child clones, some sort
of GC is required.
This gives us a single log where all VDIs are registered, to get the
UUID / volume_id mapping for quicker debugging.
@gthvn1 requested review from ydirson and Wescoeur on March 26, 2024 10:52
@gthvn1 self-assigned this on Mar 26, 2024
@gthvn1 (Contributor, Author) commented on Mar 26, 2024

Just returning when `vdi.active_on` is set, so that a VDI can be activated twice, is not enough, because there is a refcounter on the meta.pickle file; this is the IntelliCache in the datapath. So at the very least we need to update the refcounter, but it is not just a plain counter, so I'm trying to understand what we can and cannot do in `datapath.py:activate`.
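To make this concrete, here is a minimal sketch (not the actual `datapath.py` code) of what a refcount-aware, idempotent activate could look like; the pickle-backed `load_meta`/`save_meta` helpers are a toy stand-in for however the plugin really handles meta.pickle.

```python
# Hypothetical sketch of a refcounted, idempotent activate, with a toy
# pickle-backed metadata store standing in for the real meta.pickle handling.
import os
import pickle


def load_meta(path):
    if not os.path.exists(path):
        return {}
    with open(path, "rb") as f:
        return pickle.load(f)


def save_meta(path, meta):
    with open(path, "wb") as f:
        pickle.dump(meta, f)


def activate(meta_path, domain):
    meta = load_meta(meta_path)
    # Repeated activation just bumps the refcount instead of failing;
    # deactivate would decrement it and only really tear down at zero.
    meta["active_on"] = domain
    meta["activate_refcount"] = meta.get("activate_refcount", 0) + 1
    save_meta(meta_path, meta)
```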

Comment on lines 58 to +59

     if vdi.active_on:
-        raise util.create_storage_error(
-            "SR_BACKEND_FAILURE_24",
-            ["VDIInUse", "The VDI is currently in use"])
+        return
A reviewer (Contributor) commented:

Shouldn't we still raise an error if the VDI is active on a different domain than the requested one?

gthvn1 (Contributor, Author) replied:

I'm not sure whether that is possible. It looks scary, but before this I didn't think we could plug two VBDs for the same VDI... @Wescoeur, what should we do? Any ideas?
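One possible shape for the guard discussed here, as a sketch only (built from the snippet above; `domain` is assumed to be the domain requested by the caller): treat re-activation on the same domain as a no-op and keep raising for any other domain.

```python
# Hypothetical sketch: only skip the error when the VDI is already active
# on the *same* domain; activation from a different domain still fails.
if vdi.active_on:
    if vdi.active_on == domain:
        return
    raise util.create_storage_error(
        "SR_BACKEND_FAILURE_24",
        ["VDIInUse", "The VDI is currently in use"])
```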

ydirson and others added 5 commits March 27, 2024 14:18
This will allow management layers to present a suitable UI for the cases
they see fit without having to modify the interface.

Signed-off-by: Yann Dirson <[email protected]>
As it is idempotent, we just need to return if the VDI is already
active.

Signed-off-by: Guillaume <[email protected]>
@gthvn1 force-pushed the gtn-make-activate-idempotent branch from a202e04 to b61a02d on March 28, 2024 08:04
@ydirson force-pushed the yd/smapiv3-zfs-2 branch 2 times, most recently from 59c85fa to 7fce50b on April 4, 2024 09:48
Base automatically changed from yd/smapiv3-zfs-2 to master on April 4, 2024 09:53