For XFS rabbit testing, I'm not able to allocate as much capacity as requested. For example, job f4uyWGibPnw requests `#DW jobdw type=xfs name=iotest capacity=576GB`, but the filesystem only shows 483GB: `/dev/mapper/31fbd23b--ebd3--4fb8--92d7--5526b9af75c1_0-lv--0 483G 483G 252K 100% /mnt/nnf/15067813-d386-418d-a71d-39b2f2cbe669-0`. Is there a means of increasing the allocated capacity to be consistent with the request?
From Bill Loewe:
Additionally, for tests that need 224GB per node, we were fine using a capacity of 250GB, but with recent changes this now needs to be a capacity of 300GB to get a filesystem that can support the 224GB.
This seems to be a problem for users who are trying to allocate only what they need, only to have the filesystem come up short.
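For reference, a rough sketch of how a user might pad a #DW jobdw capacity request in the meantime, assuming the ~84% usable ratio observed above (483GB usable from a 576GB request) roughly holds. The helper names and the overhead ratio are illustrative only, not part of any NNF API or a documented constant:

```python
# Sketch: pad a #DW jobdw capacity request so the resulting XFS filesystem
# has at least the desired usable space. The ratio below is estimated from
# the numbers reported in this issue (483GB usable from a 576GB request);
# it is an observation, not a documented NNF/XFS overhead figure.

import math

OBSERVED_USABLE_RATIO = 483 / 576  # ~0.84, from the df output above


def padded_capacity_gb(usable_gb: float, ratio: float = OBSERVED_USABLE_RATIO) -> int:
    """Return a capacity (GB) to request so that roughly `usable_gb` ends up usable."""
    return math.ceil(usable_gb / ratio)


def jobdw_directive(name: str, usable_gb: float) -> str:
    """Build an illustrative #DW jobdw line with the padded capacity."""
    return f"#DW jobdw type=xfs name={name} capacity={padded_capacity_gb(usable_gb)}GB"


if __name__ == "__main__":
    # For the 224GB-per-node case this suggests requesting about 268GB,
    # in the same ballpark as the 300GB that was found to work.
    print(jobdw_directive("iotest", 224))
```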