Managing your storage
This section explains how to manage your storage using ZFS, the default disk-based file system on OSv.
The ZFS and ZPOOL command-line tools are available on OSv, so if you are already familiar with them, managing your storage will be straightforward.
WARNING: Some options from these commands may not be available yet.
IMPORTANT: It would be nice to wrap these commands later on (through the REST API?) so that storage management is not file-system dependent. After all, we wouldn't like to see users limited by their lack of ZFS knowledge.
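Until such a wrapper exists, the invocations used throughout this page can at least be built in one place. The sketch below is a hypothetical Python helper (the `osv_storage_cmd` name is illustrative and not part of OSv; `/PATH_TO_OSV` is the same placeholder used in the examples):

```python
# Hypothetical helper: builds the argv for running a zfs/zpool command
# inside an OSv guest via run.py, in the style shown on this page.
RUN_PY = "/PATH_TO_OSV/scripts/run.py"  # placeholder path, as in the examples

def osv_storage_cmd(tool, *args):
    """Return the argument list for executing e.g. 'zfs.so list' inside OSv.

    tool: 'zfs' or 'zpool'; args: the subcommand and its operands.
    """
    if tool not in ("zfs", "zpool"):
        raise ValueError("expected 'zfs' or 'zpool', got %r" % tool)
    guest_cmd = "%s.so %s" % (tool, " ".join(args))
    return [RUN_PY, "-e", guest_cmd]

# Example: the 'zfs.so list' invocation from this page.
# osv_storage_cmd("zfs", "list")
# -> ['/PATH_TO_OSV/scripts/run.py', '-e', 'zfs.so list']
```

The returned list can be handed to `subprocess.run` on the host, or to whatever front end eventually wraps these commands.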
zpool example: Getting I/O statistics from your pool(s):
$ /PATH_TO_OSV/scripts/run.py -e 'zpool.so iostat'
OSv v0.17-11-ge281199
eth0: 192.168.122.15
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
data         155K  9.94G    376    513  8.90M  3.59M
osv         16.8M  9.92G    304    148  12.1M   798K
----------  -----  -----  -----  -----  -----  -----
zfs example: Listing available file systems:
$ /PATH_TO_OSV/scripts/run.py -e 'zfs.so list'
OSv v0.17-11-ge281199
eth0: 192.168.122.15
NAME      USED  AVAIL  REFER  MOUNTPOINT
data      106K  9.78G    31K  /data
osv      16.6M  9.77G    32K  /
osv/zfs  16.4M  9.77G  16.4M  /zfs
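If you need to consume this listing programmatically, the tabular output can be parsed in a few lines. This is a hypothetical Python sketch, assuming the whitespace-separated header-plus-rows format shown above:

```python
def parse_zfs_list(output):
    """Parse 'zfs list' tabular output into a list of dicts.

    Assumes a header line (NAME USED AVAIL REFER MOUNTPOINT) followed by
    one whitespace-separated row per dataset, as in the output above.
    """
    lines = [l for l in output.strip().splitlines() if l.strip()]
    header = lines[0].split()
    return [dict(zip(header, row.split())) for row in lines[1:]]

# The sample output from this page:
sample = """\
NAME USED AVAIL REFER MOUNTPOINT
data 106K 9.78G 31K /data
osv 16.6M 9.77G 32K /
osv/zfs 16.4M 9.77G 16.4M /zfs
"""
datasets = parse_zfs_list(sample)
# datasets[0]["NAME"] == "data"; datasets[2]["MOUNTPOINT"] == "/zfs"
```

Note that sizes stay as human-readable strings ("9.78G"); run `zfs.so list` with the `-p` flag if you prefer exact byte values.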
- The ZFS pool installed on the default virtual disk is named osv.
- Create the file system by executing the following command on your host's terminal:
- The syntax for the command below is: zfs.so create osv/<file system name>.
- The mount point will be a slash followed by the file system name, unless otherwise specified. With that in mind, the mount point for the command below will be /data.
$ /PATH_TO_OSV/scripts/run.py -e 'zfs.so create osv/data'
- Also on the host's terminal, check that the additional file system was created successfully:
$ /PATH_TO_OSV/scripts/run.py -e 'zfs.so list'
OSv v0.17-11-ge281199
eth0: 192.168.122.15
NAME      USED  AVAIL  REFER  MOUNTPOINT
osv      16.8M  9.77G    32K  /
osv/data   31K  9.77G    31K  /data
osv/zfs    31K  9.77G    31K  /zfs
- Create the image for the additional vdisk using qemu-img:
$ qemu-img create -f qcow2 image.qcow2 10G
- Change the run.py script (located at /PATH_TO_OSV/scripts/run.py) to start the OSv instance with the additional vdisk:
- Adjust the file parameter of the added line accordingly.
diff --git a/scripts/run.py b/scripts/run.py
index cc8cfda..58b6392 100755
--- a/scripts/run.py
+++ b/scripts/run.py
@@ -116,6 +116,7 @@ def start_osv_qemu(options):
args += [
"-device", "virtio-blk-pci,id=blk0,bootindex=0,drive=hd0,scsi=off",
"-drive", "file=%s,if=none,id=hd0,aio=native,cache=%s" % (options.image_file, cache)]
+ args += [ "-drive", "file=/PATH/TO/IMAGE/image.qcow2,if=virtio"]
if options.no_shutdown:
args += ["-no-reboot", "-no-shutdown"]
- Create the pool by executing the following command on your host's terminal:
- /dev/vblk1 is the device associated with your additional vdisk. The second additional vdisk would be /dev/vblk2, and so on.
- The syntax for the command below is: zpool.so create <pool name> <disk(s)>.
- The mount point will be a slash followed by the pool name, unless otherwise specified. With that in mind, the mount point for the command below will be /data.
$ /PATH_TO_OSV/scripts/run.py -e 'zpool.so create data /dev/vblk1'
- Also on the host's terminal, check that the new pool was created successfully:
$ /PATH_TO_OSV/scripts/run.py -e 'zfs.so list'
OSv v0.17-11-ge281199
eth0: 192.168.122.15
NAME      USED  AVAIL  REFER  MOUNTPOINT
data     92.5K  9.78G    31K  /data
osv      16.6M  9.77G    32K  /
osv/zfs  16.4M  9.77G  16.4M  /zfs
- From there, /data is mounted automatically and available to the application. Enjoy! :-)
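As a quick smoke test, an application running inside the guest can persist a file under the new mount point and read it back. The snippet below is a generic Python sketch (the function and file names are illustrative); inside OSv you would pass it /data, but any writable directory works for trying it out:

```python
import os

def write_and_read_back(base_dir, name, payload):
    """Write payload to base_dir/name, then read it back.

    base_dir would be '/data' inside the OSv guest, i.e. the mount
    point of the pool created above.
    """
    path = os.path.join(base_dir, name)
    with open(path, "w") as f:
        f.write(payload)
    with open(path) as f:
        return f.read()

# Inside the guest:
# write_and_read_back("/data", "hello.txt", "stored on ZFS")
```

If the round trip succeeds, the pool is mounted and writable; the file will survive guest reboots as long as the additional vdisk is attached.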
TODO: There is a lot to be done on this page; explaining how to create a new file system and an additional ZFS pool was just the starting point.