
decrease maxVolumesPerNode 16->11 #161

Closed
wants to merge 1 commit into from

Conversation

IMMORTALxJO

Description

In support ticket #MTU-67QMC we realised that the Vultr CSI driver reports an incorrect maximum number of volumes per node.

Support answer:

We recently discovered that 16 is in fact incorrect and the limit is 11.

The right way to fix the issue is to decrease the limit in the CSI driver. That prevents the scheduler from placing pods that need new volumes on nodes that are already full.
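The mechanics behind this fix can be sketched as follows. This is a minimal illustration, not the actual vultr-csi source: in the real driver the limit is advertised through the CSI `NodeGetInfoResponse.max_volumes_per_node` field, which Kubernetes uses as the node's attach capacity; the type and function names below are hypothetical stand-ins.

```go
package main

import "fmt"

// defaultMaxVolumesPerNode mirrors the limit this PR changes: the
// platform actually supports 11 attached block volumes per node, not 16.
const defaultMaxVolumesPerNode = 11

// nodeInfo is a hypothetical stand-in for the CSI NodeGetInfoResponse
// message that a node plugin returns to the container orchestrator.
type nodeInfo struct {
	NodeID            string
	MaxVolumesPerNode int64
}

// nodeGetInfo sketches the handler that advertises the node's attach
// limit; the scheduler subtracts already-attached volumes from this
// value when deciding where a pod with a new volume may run.
func nodeGetInfo(nodeID string) nodeInfo {
	return nodeInfo{
		NodeID:            nodeID,
		MaxVolumesPerNode: defaultMaxVolumesPerNode,
	}
}

func main() {
	info := nodeGetInfo("example-node-id")
	fmt.Printf("node %s: max volumes per node = %d\n",
		info.NodeID, info.MaxVolumesPerNode)
}
```

With the old value of 16, the orchestrator believed five more volumes could attach than the platform allows, so pods could be scheduled onto nodes whose attach requests would then fail.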

@IMMORTALxJO IMMORTALxJO changed the title decrease maxVolumesPerNode decrease maxVolumesPerNode 16->11 Nov 16, 2023

Unit Tests and Coverage

?   	github.com/vultr/vultr-csi/cmd/csi-vultr-driver	[no test files]
=== RUN   TestCreateVolume
time="2023-11-16T09:05:45Z" level=info msg="Create Volume: called" capabilities="[mount:<> access_mode:<mode:SINGLE_NODE_WRITER > ]" test="create volume" volume-name=volume-test-name
time="2023-11-16T09:05:46Z" level=info msg="Create Volume: created volume" size=10737418240 test="create volume" volume-id=c56c7b6e-15c2-445e-9a5d-1063ab5828ec volume-name=test-bs volume-size=10
--- PASS: TestCreateVolume (1.00s)
=== RUN   TestDeleteVolume
time="2023-11-16T09:05:46Z" level=info msg="Delete volume: called" test="delete volume" volume-id=c56c7b6e-15c2-445e-9a5d-1063ab5828ec
time="2023-11-16T09:05:46Z" level=info msg="Delete Volume: deleted" test="delete volume" volume-id=c56c7b6e-15c2-445e-9a5d-1063ab5828ec
--- PASS: TestDeleteVolume (0.00s)
=== RUN   TestPublishVolume
--- PASS: TestPublishVolume (0.00s)
=== RUN   TestUnPublishVolume
time="2023-11-16T09:05:46Z" level=info msg="Controller Publish Unpublish: called" node-id=c56c7b6e-15c2-445e-9a5d-1063ab5828ec test="delete volume" volume-id=245bb2fe-b55c-44a0-9a1e-ab80e4b5f088
time="2023-11-16T09:05:46Z" level=info msg="Controller Unublish Volume: unpublished" node-id=c56c7b6e-15c2-445e-9a5d-1063ab5828ec test="delete volume" volume-id=245bb2fe-b55c-44a0-9a1e-ab80e4b5f088
--- PASS: TestUnPublishVolume (0.00s)
=== RUN   TestDriverSuite
--- PASS: TestDriverSuite (0.00s)
PASS
time="2023-11-16T09:05:46Z" level=info msg="Start listening with scheme unix, addr /tmp/csi.sock"
time="2023-11-16T09:05:46Z" level=info msg="Listening for connections on address: &net.UnixAddr{Name:"/tmp/csi.sock", Net:"unix"}" address=/tmp/csi.sock proto=unix
	github.com/vultr/vultr-csi/driver	coverage: 23.0% of statements
ok  	github.com/vultr/vultr-csi/driver	1.007s	coverage: 23.0% of statements

Pusher: @IMMORTALxJO, Action: pull_request_target

@optik-aper
Member

optik-aper commented Nov 16, 2023

Thanks for the PR. We solved this in #162. After some dependency updates, we'll cut a new release.

@optik-aper optik-aper closed this Nov 16, 2023
@IMMORTALxJO IMMORTALxJO deleted the csi-max-volumes-per-node branch November 16, 2023 20:25
2 participants