Adding a total_iodepth option that can be used with librbdfio #324
Description
Add an option called total_iodepth to the CBT configuration yaml file. The total_iodepth is split evenly among the volumes used for the test. If the total_iodepth does not divide evenly by the number of volumes, the remainder is assigned 1 iodepth at a time to the volumes, starting at volume 0.
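For context, a CBT yaml fragment using the new option might look like the sketch below. The surrounding keys and values are illustrative only; `total_iodepth` is the new option being added:

```yaml
cluster:
  user: admin

benchmarks:
  librbdfio:
    op_size: [4096]
    mode: ['randwrite']
    volumes_per_client: [3]
    # New option: aggregate queue depth shared across all volumes,
    # instead of a fixed per-volume iodepth.
    total_iodepth: [32]
```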
For a simple example: with a total_iodepth of 18 and volumes_per_client of 5, the iodepth allocations would be 4, 4, 4, 3, 3 (18 // 5 = 3 per volume, with the remainder of 3 handed out to volumes 0-2).
If the number of volumes specified is such that there is not enough iodepth for 1 per volume, then the number of volumes for that test is reduced so that every remaining volume has an iodepth of at least 1.
Example:
For volumes_per_client = 5 and total_iodepth = 4, the benchmark would run with 4 volumes, each with an iodepth of 1.
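The allocation rules above can be sketched as follows. This is a minimal illustration of the described behaviour, not the actual CBT implementation; the function name is hypothetical:

```python
def split_iodepth(total_iodepth: int, volumes_per_client: int) -> list[int]:
    """Split total_iodepth evenly across the volumes; any remainder is
    handed out 1 iodepth at a time starting at volume 0. If there is not
    enough iodepth for 1 per volume, reduce the number of volumes instead."""
    if total_iodepth < volumes_per_client:
        # e.g. total_iodepth=4, volumes_per_client=5 -> 4 volumes of iodepth 1
        volumes_per_client = total_iodepth
    base, remainder = divmod(total_iodepth, volumes_per_client)
    # The first `remainder` volumes each get one extra iodepth.
    return [base + 1 if i < remainder else base for i in range(volumes_per_client)]
```

For the examples above, `split_iodepth(18, 5)` yields `[4, 4, 4, 3, 3]` and `split_iodepth(4, 5)` yields `[1, 1, 1, 1]`.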
Testing
Manual testing was done for a number of scenarios, with and without using workloads. The output below comes from debug statements added for testing purposes.
iodepth=32, volumes_per_client=3
```
CHDEBUG: fio_cmd for volume 0 is /usr/local/bin/fio --ioengine=rbd --clientname=admin --pool=cbt-librbdfio --rbdname=cbt-librbdfio-`hostname -f`-0 --invalidate=0 --rw=randwrite --output-format=json,normal --runtime=300 --numjobs=1 --direct=1 --bs=4096B --iodepth=32 --end_fsync=0 --norandommap --write_iops_log=/tmp/cbt/00000000/LibrbdFio/osd_ra-00004096/op_size-00004096/concurrent_procs-003/iodepth-032/randwrite/output.0 --write_bw_log=/tmp/cbt/00000000/LibrbdFio/osd_ra-00004096/op_size-00004096/concurrent_procs-003/iodepth-032/randwrite/output.0 --write_lat_log=/tmp/cbt/00000000/LibrbdFio/osd_ra-00004096/op_size-00004096/concurrent_procs-003/iodepth-032/randwrite/output.0 --log_avg_msec=101 --name=cbt-librbdfio-`hostname -f`-file-0 > /tmp/cbt/00000000/LibrbdFio/osd_ra-00004096/op_size-00004096/concurrent_procs-003/iodepth-032/randwrite/output.0
CHDEBUG: fio_cmd for volume 1 is /usr/local/bin/fio --ioengine=rbd --clientname=admin --pool=cbt-librbdfio --rbdname=cbt-librbdfio-`hostname -f`-1 --invalidate=0 --rw=randwrite --output-format=json,normal --runtime=300 --numjobs=1 --direct=1 --bs=4096B --iodepth=32 --end_fsync=0 --norandommap --write_iops_log=/tmp/cbt/00000000/LibrbdFio/osd_ra-00004096/op_size-00004096/concurrent_procs-003/iodepth-032/randwrite/output.1 --write_bw_log=/tmp/cbt/00000000/LibrbdFio/osd_ra-00004096/op_size-00004096/concurrent_procs-003/iodepth-032/randwrite/output.1 --write_lat_log=/tmp/cbt/00000000/LibrbdFio/osd_ra-00004096/op_size-00004096/concurrent_procs-003/iodepth-032/randwrite/output.1 --log_avg_msec=101 --name=cbt-librbdfio-`hostname -f`-file-0 > /tmp/cbt/00000000/LibrbdFio/osd_ra-00004096/op_size-00004096/concurrent_procs-003/iodepth-032/randwrite/output.1
CHDEBUG: fio_cmd for volume 2 is /usr/local/bin/fio --ioengine=rbd --clientname=admin --pool=cbt-librbdfio --rbdname=cbt-librbdfio-`hostname -f`-2 --invalidate=0 --rw=randwrite --output-format=json,normal --runtime=300 --numjobs=1 --direct=1 --bs=4096B --iodepth=32 --end_fsync=0 --norandommap --write_iops_log=/tmp/cbt/00000000/LibrbdFio/osd_ra-00004096/op_size-00004096/concurrent_procs-003/iodepth-032/randwrite/output.2 --write_bw_log=/tmp/cbt/00000000/LibrbdFio/osd_ra-00004096/op_size-00004096/concurrent_procs-003/iodepth-032/randwrite/output.2 --write_lat_log=/tmp/cbt/00000000/LibrbdFio/osd_ra-00004096/op_size-00004096/concurrent_procs-003/iodepth-032/randwrite/output.2 --log_avg_msec=101 --name=cbt-librbdfio-`hostname -f`-file-0 > /tmp/cbt/00000000/LibrbdFio/osd_ra-00004096/op_size-00004096/concurrent_procs-003/iodepth-032/randwrite/output.2
```
total_iodepth=32, volumes_per_client=3
```
CHDEBUG: fio_cmd for volume 0 is /usr/local/bin/fio --ioengine=rbd --clientname=admin --pool=cbt-librbdfio --rbdname=cbt-librbdfio-`hostname -f`-0 --invalidate=0 --rw=randwrite --output-format=json,normal --runtime=300 --numjobs=1 --direct=1 --bs=4096B --iodepth=11 --end_fsync=0 --norandommap --write_iops_log=/tmp/cbt/00000000/LibrbdFio/osd_ra-00004096/op_size-00004096/concurrent_procs-003/iodepth-016/randwrite/output.0 --write_bw_log=/tmp/cbt/00000000/LibrbdFio/osd_ra-00004096/op_size-00004096/concurrent_procs-003/iodepth-016/randwrite/output.0 --write_lat_log=/tmp/cbt/00000000/LibrbdFio/osd_ra-00004096/op_size-00004096/concurrent_procs-003/iodepth-016/randwrite/output.0 --log_avg_msec=101 --name=cbt-librbdfio-`hostname -f`-file-0 > /tmp/cbt/00000000/LibrbdFio/osd_ra-00004096/op_size-00004096/concurrent_procs-003/iodepth-016/randwrite/output.0
CHDEBUG: fio_cmd for volume 1 is /usr/local/bin/fio --ioengine=rbd --clientname=admin --pool=cbt-librbdfio --rbdname=cbt-librbdfio-`hostname -f`-1 --invalidate=0 --rw=randwrite --output-format=json,normal --runtime=300 --numjobs=1 --direct=1 --bs=4096B --iodepth=11 --end_fsync=0 --norandommap --write_iops_log=/tmp/cbt/00000000/LibrbdFio/osd_ra-00004096/op_size-00004096/concurrent_procs-003/iodepth-016/randwrite/output.1 --write_bw_log=/tmp/cbt/00000000/LibrbdFio/osd_ra-00004096/op_size-00004096/concurrent_procs-003/iodepth-016/randwrite/output.1 --write_lat_log=/tmp/cbt/00000000/LibrbdFio/osd_ra-00004096/op_size-00004096/concurrent_procs-003/iodepth-016/randwrite/output.1 --log_avg_msec=101 --name=cbt-librbdfio-`hostname -f`-file-0 > /tmp/cbt/00000000/LibrbdFio/osd_ra-00004096/op_size-00004096/concurrent_procs-003/iodepth-016/randwrite/output.1
CHDEBUG: fio_cmd for volume 2 is /usr/local/bin/fio --ioengine=rbd --clientname=admin --pool=cbt-librbdfio --rbdname=cbt-librbdfio-`hostname -f`-2 --invalidate=0 --rw=randwrite --output-format=json,normal --runtime=300 --numjobs=1 --direct=1 --bs=4096B --iodepth=10 --end_fsync=0 --norandommap --write_iops_log=/tmp/cbt/00000000/LibrbdFio/osd_ra-00004096/op_size-00004096/concurrent_procs-003/iodepth-016/randwrite/output.2 --write_bw_log=/tmp/cbt/00000000/LibrbdFio/osd_ra-00004096/op_size-00004096/concurrent_procs-003/iodepth-016/randwrite/output.2 --write_lat_log=/tmp/cbt/00000000/LibrbdFio/osd_ra-00004096/op_size-00004096/concurrent_procs-003/iodepth-016/randwrite/output.2 --log_avg_msec=101 --name=cbt-librbdfio-`hostname -f`-file-0 > /tmp/cbt/00000000/LibrbdFio/osd_ra-00004096/op_size-00004096/concurrent_procs-003/iodepth-016/randwrite/output.2
```
total_iodepth=2, volumes_per_client=3
```
11:13:28 - WARNING - cbt - The total iodepth requested: 2 is less than 1 per volume.
11:13:28 - WARNING - cbt - Number of volumes per client will be reduced from 3 to 2
CHDEBUG: fio_cmd for volume 0 is /usr/local/bin/fio --ioengine=rbd --clientname=admin --pool=cbt-librbdfio --rbdname=cbt-librbdfio-`hostname -f`-0 --invalidate=0 --rw=randwrite --output-format=json,normal --runtime=300 --numjobs=1 --direct=1 --bs=4096B --iodepth=1 --end_fsync=0 --norandommap --write_iops_log=/tmp/cbt/00000000/LibrbdFio/osd_ra-00004096/op_size-00004096/concurrent_procs-002/iodepth-016/randwrite/output.0 --write_bw_log=/tmp/cbt/00000000/LibrbdFio/osd_ra-00004096/op_size-00004096/concurrent_procs-002/iodepth-016/randwrite/output.0 --write_lat_log=/tmp/cbt/00000000/LibrbdFio/osd_ra-00004096/op_size-00004096/concurrent_procs-002/iodepth-016/randwrite/output.0 --log_avg_msec=101 --name=cbt-librbdfio-`hostname -f`-file-0 > /tmp/cbt/00000000/LibrbdFio/osd_ra-00004096/op_size-00004096/concurrent_procs-002/iodepth-016/randwrite/output.0
CHDEBUG: fio_cmd for volume 1 is /usr/local/bin/fio --ioengine=rbd --clientname=admin --pool=cbt-librbdfio --rbdname=cbt-librbdfio-`hostname -f`-1 --invalidate=0 --rw=randwrite --output-format=json,normal --runtime=300 --numjobs=1 --direct=1 --bs=4096B --iodepth=1 --end_fsync=0 --norandommap --write_iops_log=/tmp/cbt/00000000/LibrbdFio/osd_ra-00004096/op_size-00004096/concurrent_procs-002/iodepth-016/randwrite/output.1 --write_bw_log=/tmp/cbt/00000000/LibrbdFio/osd_ra-00004096/op_size-00004096/concurrent_procs-002/iodepth-016/randwrite/output.1 --write_lat_log=/tmp/cbt/00000000/LibrbdFio/osd_ra-00004096/op_size-00004096/concurrent_procs-002/iodepth-016/randwrite/output.1 --log_avg_msec=101 --name=cbt-librbdfio-`hostname -f`-file-0 > /tmp/cbt/00000000/LibrbdFio/osd_ra-00004096/op_size-00004096/concurrent_procs-002/iodepth-016/randwrite/output.1
```
I also tested with an array of total_iodepths, but the output is too long to include here, so I will instead attach the file containing the results:
total_iodepth_testing.txt
I'll update with teuthology results once they have run.