Hi,

I'm trying to put EnhanceIO over two volumes in the same DRBD resource (one volume for the cache disk and another for the slow HDDs being cached).
To be able to create the EnhanceIO udev rules on those volumes I have to create a clustered VG/LV on top of each DRBD volume; I assume there is no other way of doing it. My question is whether the EnhanceIO udev rules will fire as soon as the DRBD resource is up/primary, or whether I should create some kind of script to do it myself (a sketch of what we have in mind is below).
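For reference, this is roughly the layout we are testing. All VG, LV, device and cache names here are just examples, not our real configuration:

```sh
# Clustered VGs/LVs on top of the DRBD volumes (example minor numbers),
# so the cache and source devices have stable names on both nodes.
vgcreate --clustered y vg_nvme /dev/drbd0      # fast NVMe-backed DRBD volume
vgcreate --clustered y vg_hdd  /dev/drbd1      # slow HDD-backed DRBD volume
lvcreate -l 100%FREE -n cache vg_nvme
lvcreate -l 100%FREE -n data  vg_hdd

# Create the EnhanceIO cache (write-through in this example); eio_cli also
# writes a udev rule so the cache is re-attached when the devices appear.
eio_cli create -d /dev/vg_hdd/data -s /dev/vg_nvme/cache -m wt -c hdd_cache

# If the rule does not fire after DRBD promotion, re-trigger udev manually:
udevadm trigger --action=add --subsystem-match=block
```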
Also note that we would like to implement dual-primary DRBD with Pacemaker + Corosync, so the two NVMes will be one DRBD resource and the two HDDs another. Will this work when clients access the same data (read/write) over GFS2 through both load-balanced nodes, or is this configuration not possible with EnhanceIO?
If the udev rules fire automatically, then there is no need for a new Pacemaker resource; if not, we will have to create one (something along the lines of the sketch below).
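This is the kind of Pacemaker configuration we are considering for the dual-primary DRBD side. It is only a sketch with made-up resource names, written in the classic pcs syntax (newer pcs versions use promotable clones instead of master resources), and it omits the DLM/cLVM clones that GFS2 also needs:

```sh
# Dual-primary DRBD resource for the HDD-backed data (r_hdd is an example name)
pcs resource create drbd_hdd ocf:linbit:drbd drbd_resource=r_hdd \
    op monitor interval=29s role=Master \
    op monitor interval=31s role=Slave
pcs resource master ms_drbd_hdd drbd_hdd \
    master-max=2 master-node-max=1 clone-max=2 clone-node-max=1 notify=true

# GFS2 filesystem on top, cloned so it is mounted on both nodes
pcs resource create fs_gfs2 ocf:heartbeat:Filesystem \
    device=/dev/vg_hdd/data directory=/mnt/data fstype=gfs2 \
    clone interleave=true

# Only mount GFS2 on a node after DRBD has been promoted there
pcs constraint order promote ms_drbd_hdd then start fs_gfs2-clone
pcs constraint colocation add fs_gfs2-clone with master ms_drbd_hdd INFINITY
```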
We did all of this with dm-cache and it was working, but the performance results were not good enough for us, so we tried EnhanceIO, which seems to give us better performance.
Thanks.
UPDATE
It is not working with EnhanceIO on top of a dual-volume DRBD resource (one volume for the NVMe and another for the HDD): the nodes end up fencing each other because the data becomes outdated on one peer. It does work if we set up a simple DRBD resource on top of local EnhanceIO caches (sketched below), but we think this might corrupt data if a node fails; we still have to run some tests to verify that.
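For clarity, this is the layout that does work for us at the moment: EnhanceIO caches the local HDD device in place on each node (it attaches to the existing device node rather than creating a new one), and DRBD replicates that already-cached device. Hostnames, device names and addresses below are placeholders:

```
# /etc/drbd.d/r_data.res (sketch)
resource r_data {
    protocol C;

    on node1 {
        device    /dev/drbd0;
        disk      /dev/sdb1;        # local HDD, transparently cached by EnhanceIO
        address   10.0.0.1:7789;
        meta-disk internal;
    }
    on node2 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.0.2:7789;
        meta-disk internal;
    }
}
```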