Cannot use entire GPU memory #47

Open · ettelr opened this issue Oct 30, 2023 · 0 comments

ettelr commented Oct 30, 2023

Hi,

I have an A100-PCIE-40GB GPU and I am trying to use nos MPS dynamic partitioning. nos seems to have some issues with its total-capacity calculation. For example, when I try to run two pods that each request nvidia.com/gpu-20gb: 1, one of them always stays in Pending, even though the two together fit exactly in the 40 GB of GPU memory. Meanwhile, I am able to schedule one pod requesting nvidia.com/gpu-20gb plus two more pods requesting nvidia.com/gpu-10gb, which adds up to the same 40 GB. I have hit this problem of not being able to use the full GPU memory with several other resource combinations as well. A minimal pod spec along the lines of what I am deploying is sketched below.
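(This is just an illustrative manifest: the pod name, container image, and command are placeholders, while nvidia.com/gpu-20gb is the partition resource advertised by nos MPS dynamic partitioning.)

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mps-test-20gb   # placeholder name
spec:
  containers:
    - name: cuda-workload
      # Placeholder image; any CUDA workload image would do here.
      image: nvidia/cuda:12.2.0-base-ubuntu22.04
      command: ["sleep", "infinity"]
      resources:
        limits:
          # 20 GB MPS partition resource exposed by nos on the A100.
          nvidia.com/gpu-20gb: 1
```

Deploying two copies of this pod (with different names) is what leaves one of them stuck in Pending.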

Does anyone have any idea what might be going on? Any help would be much appreciated.
