lvutil: use wipefs not dd to clear existing signatures (fixes #624) #626
base: master
Conversation
Have you tried what happens when you give an entire device (as opposed to a partition) as device-config?
The manpage says that in that case '--force' should be used:
-f, --force
Force erasure, even if the filesystem is mounted. This is required in order to erase a partition-table signature on a block device.
Although in general using 'force' is a bad idea: if the FS is mounted then failing is the right thing (but apparently having active partitions counts as in-use).
OTOH the previous 'dd' would just go and wipe the device anyway without checking whether it was mounted or not, so perhaps using force would be equivalent?
Perhaps wipefs could be used in force mode only to wipe partition-table signatures, e.g.
Maybe their documentation is outdated; there does not seem to be any requirement to force the wipe, at least with the
Edit: works the same with an msdos disklabel
Last update makes logging the command systematic: wipefs is destructive, so don't hide its execution.
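To make the `--force` discussion above concrete, here is a hedged Python sketch of how such an invocation could be assembled; the function name and the `force`/`dry_run` parameters are illustrative, not from the PR:

```python
def build_wipefs_cmd(blockdev, force=False, dry_run=False):
    """Assemble a wipefs(8) command line (illustrative sketch).

    Per the manpage, --force is only required to erase a
    partition-table signature on a whole-disk device; plain
    partitions can be wiped without it.  --no-act reports what
    would be erased without writing anything, which is handy
    for inspecting a device before a destructive run.
    """
    cmd = ["wipefs", "--all"]
    if force:
        cmd.append("--force")   # whole-disk partition-table signatures
    if dry_run:
        cmd.append("--no-act")  # list signatures, do not erase
    cmd.append(blockdev)
    return cmd


# A partition needs no force; a whole disk's partition table may:
print(build_wipefs_cmd("/dev/nvme0n1p3"))
print(build_wipefs_cmd("/dev/nvme0n1", force=True))
```

Whether `--force` is actually needed appears to depend on the util-linux version, as noted above.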
LGTM and as discussed in #624
drivers/util.py (outdated)

```diff
@@ -637,6 +637,17 @@ def zeroOut(path, fromByte, bytes):
     return True


+def wipefs(blockdev):
+    "Wipe filesystem signatures from `blockdev`"
+    cmdlist = ["wipefs", "-a", blockdev]
```
When we invoked dd it was done via util.CMD_DD, which gave /bin/dd as an explicit path. I think it would be good to follow the same procedure and create util.CMD_WIPEFS with an explicit path (I believe that would be /usr/sbin/wipefs) just to maintain the same level of rigour.
Other than that I don't see an issue.
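A minimal sketch of what a CMD_WIPEFS-style constant could look like. The /usr/sbin/wipefs location and the PATH-lookup fallback are assumptions, and the real SM code runs commands through its own util helpers rather than subprocess:

```python
import shutil
import subprocess

# Assumed location, mirroring how util.CMD_DD pins /bin/dd; the
# shutil.which() fallback keeps the sketch runnable on other layouts.
CMD_WIPEFS = shutil.which("wipefs") or "/usr/sbin/wipefs"


def wipefs(blockdev):
    "Wipe filesystem signatures from `blockdev` (destructive!)"
    cmdlist = [CMD_WIPEFS, "-a", blockdev]
    # The PR logs the command before running it, since it is destructive.
    print("running:", " ".join(cmdlist))
    subprocess.run(cmdlist, check=True)
```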
I'm curious about any upsides with this approach, other than consistency with existing code. As for downsides I see:
- hardcoding paths decreases portability
- using a variable defined somewhere else makes the code less readable
The init system is responsible for setting a proper PATH in the environment (and nowadays systemd ensures this even when launching the service from the CLI). If we don't trust the environment, we're just doomed anyway: anything could be achieved with an LD_PRELOAD, including replacing the path passed to execve.
Portability isn't really an issue for this code as it's part of an appliance use-case.
You're right that at present someone could use LD_PRELOAD to do anything they could do by subverting the PATH, but that may not always be the case. Should future hardening work move this call into a situation of elevated privilege, the dynamic linker would ignore any LD_PRELOAD with a path separator, rendering it useless without additional PATH subversion (which a hard-coded path would prevent).
I'm willing to let this go at present, but it's something we'll need to consider if we decide to harden this path in the future.
Ping to reviewers 👋
This pull request brings noticeable improvements when XenServer or XCP-ng is installed on disks carrying existing metadata (ZFS or the like), at the cost of a rather simple code change. Moreover, Yann answered all the questions that were asked six months ago. Would it be possible to review it?
Signed-off-by: Yann Dirson <[email protected]>
This PR addresses #624.
With a ZFS fs in nvme0n1p3, originally, SMlog:

With patch, SMlog: