zvol: Enable zvol threading functionality on FreeBSD #17169

Open · wants to merge 1 commit into base: master

Conversation

fuporovvStack (Contributor)

Motivation and Context

Make zvol I/O request processing asynchronous on the FreeBSD side. Clone the zvol threading logic and the required module parameters from the Linux side. Make the zvol threadpool creation/destruction logic shared between Linux and FreeBSD. Use the OS physio routines in async mode when the zvol is exported in cdev mode. Disable the volthreading ZFS parameter on the FreeBSD side by default.
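
For readers unfamiliar with the Linux-side layout this builds on, here is a minimal sketch of what the shared threadpool creation could look like; zv_taskq_t mirrors the existing structure in the Linux zvol_os.c, while zvol_taskqs_init() and the taskq name are illustrative assumptions, not the PR's actual code:

/*
 * Minimal sketch, assuming the PR moves the Linux-side pool layout
 * (zv_taskq_t in zvol_os.c) into shared code; zvol_taskqs_init() and
 * the "zvol_tq" name are illustrative.
 */
typedef struct zv_taskq {
	uint_t		tqs_cnt;	/* number of taskqs */
	taskq_t		**tqs_taskq;	/* the taskq handles */
} zv_taskq_t;

static zv_taskq_t zvol_taskqs;

static int
zvol_taskqs_init(uint_t num_taskqs, int threads_per_taskq)
{
	zvol_taskqs.tqs_cnt = num_taskqs;
	zvol_taskqs.tqs_taskq = kmem_zalloc(num_taskqs *
	    sizeof (taskq_t *), KM_SLEEP);
	for (uint_t i = 0; i < num_taskqs; i++) {
		zvol_taskqs.tqs_taskq[i] = taskq_create("zvol_tq",
		    threads_per_taskq, minclsyspri, threads_per_taskq,
		    INT_MAX, TASKQ_PREPOPULATE);
		if (zvol_taskqs.tqs_taskq[i] == NULL)
			return (SET_ERROR(ENOMEM));
	}
	return (0);
}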

Description

The reason for making volthreading=off the default on the FreeBSD side is poor performance in single-zvol fio tests with volthreading=on compared with normal mode. The problem lies on the FreeBSD side.
FreeBSD does not support async I/O correctly for zvols or any other character devices. We solved this problem for zvols in volmode=dev by modifying the FreeBSD AIO logic in the aio_qbio() function, but that path still executes synchronously.
It is possible, however, to add an aio_queue function pointer to struct cdevsw, as is done in struct fileops for example, and pass struct kaiocb AIO objects directly to the zvol, where they would be processed asynchronously. A visible performance improvement for AIO requests could be achieved this way.
If this PR is accepted, I am ready to start working on the FreeBSD modifications.
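
To make the proposal concrete, here is a rough sketch of such a cdevsw extension; d_aio_queue_t, the member name, and aio_queue_cdev() are illustrative assumptions, not an existing FreeBSD interface:

/*
 * Rough sketch of the proposed cdevsw extension; d_aio_queue_t and
 * the member name are illustrative, not an existing FreeBSD API.
 */
typedef int d_aio_queue_t(struct cdev *dev, struct kaiocb *job);

struct cdevsw {
	/* ...existing members... */
	d_aio_queue_t	*d_aio_queue;	/* driver queues the AIO itself */
};

/*
 * aio_qbio() could then hand the kaiocb straight to the driver when
 * the hook is present, falling back to today's synchronous path:
 */
static int
aio_queue_cdev(struct cdev *dev, struct kaiocb *job)
{
	if (dev->si_devsw->d_aio_queue == NULL)
		return (EOPNOTSUPP);	/* keep the synchronous path */
	return (dev->si_devsw->d_aio_queue(dev, job));
}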

How Has This Been Tested?

The tests/functional/zvol/zvol_stress/zvol_stress.ksh test case was modified to enable volthreading on the FreeBSD side.

Types of changes

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Performance enhancement (non-breaking change which improves efficiency)
  • Code cleanup (non-breaking change which makes code smaller or more readable)
  • Breaking change (fix or feature that would cause existing functionality to change)
  • Library ABI change (libzfs, libzfs_core, libnvpair, libuutil and libzfsbootenv)
  • Documentation (a change to man pages or other documentation)

Checklist:

Make zvol I/O request processing asynchronous on the FreeBSD side.
Clone the zvol threading logic and the required module parameters from
the Linux side. Make the zvol threadpool creation/destruction logic
shared between Linux and FreeBSD. Use the OS physio routines in async
mode when the zvol is exported in cdev mode. Disable the volthreading
ZFS parameter on the FreeBSD side by default.

Signed-off-by: Fedor Uporov <fuporov.vstack@gmail.com>
@fuporovvStack (Contributor, Author):

@amotin, @markjdb

@amotin (Member) left a comment:

I am happy to see somebody working on unifying this code. But see some comments inline:

SYSCTL_UINT(_vfs_zfs_vol, OID_AUTO, taskqs, CTLFLAG_RWTUN, &zvol_num_taskqs, 0,
"Number of zvol taskqs");
SYSCTL_UINT(_vfs_zfs_vol, OID_AUTO, sync, CTLFLAG_RWTUN, &zvol_request_sync, 0,
"Synchronously handle bio requests");
@amotin (Member):

If you are making the code OS-independent, why are the tunables still OS-specific?
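
For reference, OpenZFS already has an OS-neutral way to declare such tunables; assuming the variables move into the shared zvol.c, single declarations along these lines could generate both the Linux module parameters and the FreeBSD sysctls (a sketch, not code from the PR):

/*
 * Sketch: ZFS_MODULE_PARAM expands to a module parameter on Linux and
 * a sysctl under vfs.zfs.vol on FreeBSD, so one declaration in shared
 * code could serve both OSes. Whether the PR adopts this is the open
 * question raised in the review.
 */
ZFS_MODULE_PARAM(zfs_vol, zvol_, request_sync, UINT, ZMOD_RW,
	"Synchronously handle bio requests");
ZFS_MODULE_PARAM(zfs_vol, zvol_, num_taskqs, UINT, ZMOD_RW,
	"Number of zvol taskqs");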

typedef struct zv_request_stack {
zvol_state_t *zv;
struct bio *bio;
struct request *rq;
@amotin (Member):

struct request is a Linux thing. And I suppose it is only a coincidence that FreeBSD and Linux both have a struct bio.
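
One possible direction, purely illustrative and not code from the PR, is to keep the Linux-only type behind an ifdef in the shared request container:

/*
 * Illustrative only: one way to keep foreign-OS types out of the
 * shared request container. Both OSes happen to name their block
 * request type "struct bio", while struct request (blk-mq) exists
 * only on Linux.
 */
typedef struct zv_request_stack {
	zvol_state_t	*zv;
	struct bio	*bio;	/* Linux bio, or FreeBSD GEOM bio */
#ifdef __linux__
	struct request	*rq;	/* blk-mq request, Linux only */
#endif
} zv_request_t;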

@@ -839,6 +901,9 @@ zvol_cdev_read(struct cdev *dev, struct uio *uio_s, int ioflag)
(zfs_uio_offset(&uio) < 0 || zfs_uio_offset(&uio) > volsize))
return (SET_ERROR(EIO));

if (!zvol_request_sync && !zv->zv_threading)
return (physread(dev, uio_s, ioflag));
@amotin (Member):

What's the point of this, and of the same for write below? IIRC cdev read/write calls are always synchronous, so what are you achieving with this?

#ifdef __FreeBSD__
0,
#else
1,
@amotin (Member):

This controls the default at zvol creation time, so a pool imported from the other OS will not have the intended value.

.bio = bp,
};

if (zvol_request_sync || zv->zv_threading == B_FALSE) {
@amotin (Member):

I don't really like it requiring manual configuration. On Linux it was done this way since we found no way to detect which contexts require threading and which do not, but I was not happy about it. I think on FreeBSD we might be able to do better, at least in some cases. For example, if we are called from cdev read/write context, then we should always use direct execution, since we know the API methods are synchronous and threading gives us nothing, only adding overhead. In GEOM mode we may check if we are called by one of the GEOM threads, in which case we might benefit from threading, since we know those threads are very limited.

In the case of AIO it might depend: while ZVOLs (or ZFS in general) do require multiple threads for parallel requests, it can be either the ZVOL's threads or the global AIO threads, depending on how the kernel's AIO code sees this device, probably with similar results. Sure, not having any threading at all would be the worst, but if in some cases we end up with threading in both the kernel AIO and the ZVOL, that would be a waste too. Unfortunately I haven't looked at the kernel's AIO code for quite a while, so I am open to ideas on how to handle it better there.
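
A sketch of the context-aware dispatch suggested above; zv_call_ctx_t, zvol_dispatch(), zvol_taskq_pick(), and zvol_task() are hypothetical names, and the AIO case is left to the existing tunables since detecting it cleanly is the open question:

/*
 * Hypothetical sketch of context-aware dispatch, not code from the
 * PR. The caller declares its context, so the obvious cases need no
 * tunable: cdev read/write blocks anyway, and GEOM's few g_up/g_down
 * threads should not be stalled.
 */
typedef enum zv_call_ctx {
	ZV_CTX_CDEV,	/* cdev read/write: caller blocks anyway */
	ZV_CTX_GEOM,	/* GEOM thread: offload to avoid stalls */
	ZV_CTX_OTHER,	/* AIO etc.: unclear, fall back to tunables */
} zv_call_ctx_t;

static void
zvol_dispatch(zv_request_t *zvr, zv_call_ctx_t ctx)
{
	boolean_t offload;

	switch (ctx) {
	case ZV_CTX_CDEV:
		offload = B_FALSE;	/* threading only adds overhead */
		break;
	case ZV_CTX_GEOM:
		offload = B_TRUE;
		break;
	default:
		offload = !zvol_request_sync && zvr->zv->zv_threading;
		break;
	}

	if (offload) {
		(void) taskq_dispatch(zvol_taskq_pick(zvr), zvol_task,
		    zvr, TQ_SLEEP);
	} else {
		zvol_task(zvr);
	}
}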
