Investigate capacity obtained vs capacity requested #206

Open
bdevcich opened this issue Sep 4, 2024 · 1 comment
Comments

bdevcich (Contributor) commented Sep 4, 2024

From Bill Loewe:

For XFS rabbit testing, I'm not able to allocate as much capacity as requested. For example, job f4uyWGibPnw requests #DW jobdw type=xfs name=iotest capacity=576GB but only shows 483GB: /dev/mapper/31fbd23b--ebd3--4fb8--92d7--5526b9af75c1_0-lv--0 483G 483G 252K 100% /mnt/nnf/15067813-d386-418d-a71d-39b2f2cbe669-0. Is there a means of increasing the capacity to be consistent with the request?

Additionally, for tests that need 224GB per node, we were fine using a capacity of 250GB, but with recent changes this now needs to be a capacity of 300GB to get a filesystem that can support the 224GB.

This seems to be a problem for users who are trying to allocate only what they need, because the filesystem comes up short of their needs.
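
For reference, a rough back-of-the-envelope check of the numbers above, assuming the #DW capacity is parsed as decimal gigabytes and that the "483G" from df is binary GiB (both assumptions, not confirmed here):

```python
# Rough check of the reported shortfall, assuming the #DW capacity is
# decimal GB (10^9 bytes) and df -h's "483G" is binary GiB (2^30 bytes).
GIB = 2**30

requested_bytes = 576 * 10**9          # capacity=576GB from the #DW directive
requested_gib = requested_bytes / GIB  # ~536.4 GiB
obtained_gib = 483                     # as shown by df -h

print(f"requested: {requested_gib:.1f} GiB")
print(f"obtained:  {obtained_gib} GiB")
print(f"ratio:     {obtained_gib / requested_gib:.2%}")  # ~90%, i.e. ~10% short
```

If those unit assumptions hold, the gap is not just GB vs GiB rounding; even in matching units the filesystem comes up roughly 10% short of the request.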

bdevcich (Contributor, Author) commented Sep 4, 2024

@matthew-richerson mentioned a scaling factor in the NnfStorageProfiles, which would explain the 250GB -> 300GB change.
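
A minimal sketch of how such a factor would play out, assuming (hypothetically) that the profile scales the requested capacity by a multiplicative factor before the allocation is made; the exact field name and semantics in NnfStorageProfile are not confirmed here, and the factor values below are illustrative only:

```python
# Hypothetical model: usable capacity ≈ requested * scaling_factor.
# The factor values and where the scaling is applied are assumptions
# used only to show why a 250GB request could turn into a 300GB request.
def required_request_gb(needed_gb: float, scaling_factor: float) -> float:
    """Capacity a user would have to request to end up with needed_gb usable."""
    return needed_gb / scaling_factor

needed = 224  # GB of usable space per node, from the report above

# With ~90% of the request usable (the ratio observed in the 576GB example),
# a 250GB request would just cover 224GB:
print(required_request_gb(needed, 0.90))   # ~249 GB

# If the effective factor drops to ~75%, the request has to grow to ~300GB:
print(required_request_gb(needed, 0.75))   # ~299 GB
```

Under this model, the user-facing question in the issue is whether capacity= should be treated as the raw allocation or as the usable capacity after any scaling and filesystem overhead are applied.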
