Feat: Can we also graph the requests and/or limits? #16

Open
TJM opened this issue Feb 6, 2024 · 7 comments
Labels
enhancement (New feature or request)

Comments

@TJM

TJM commented Feb 6, 2024

We would like to see what the requests/limits are. We can see them by describing our nodes individually, but it would be really nice to integrate that into this tool.

Thanks!

@GritWins

GritWins commented Mar 3, 2024

Requests and limits are per pod, right? Do you mean bringing pod-level metrics into KubeNodeUsage? Can you give more insight into your requirement?

@tossisop

tossisop commented Mar 4, 2024

Yes, getting the total allocated resources might be helpful, so we could see the actual usage vs. what has been requested on a particular node. It would be this section of the describe node output, correct @TJM?
[screenshot: the "Allocated resources" section of kubectl describe node output]

@TJM
Author

TJM commented Mar 4, 2024

Correct @tossisop ... The requests and limits are set on a per-container basis (within the pod spec), but they are summarized in the kubectl describe output as you have shown. It would just be nice to show that "graphically" using this tool. I am not sure whether it would look better "included" in the memory stats, for example, or whether it should be separate.
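For illustration only (not something KubeNodeUsage does today): the per-container values that describe rolls up can also be read straight out of a pod spec with a jsonpath query along these lines, where <pod-name> is a placeholder:

# Prints each container's name, requests, and limits for one pod (untested illustration).
kubectl get pod <pod-name> -o jsonpath='{range .spec.containers[*]}{.name}{"\t"}{.resources.requests}{"\t"}{.resources.limits}{"\n"}{end}'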

@TJM
Author

TJM commented Mar 5, 2024

For what it's worth, I had a script that at least used to gather the data... it was not as pretty. Of course, since it was text-parsing a human-readable interface, it broke between versions :( ... which is why I figured it would be better to get the statistics from some other interface, and if we are going to do that, we might as well present them in a nicer interface.

OLD SCRIPT -- DOES NOT WORK ANYMORE!

#!/bin/bash
set -e

KUBECTL="kubectl"
NODES=$($KUBECTL get nodes --no-headers -o custom-columns=NAME:.metadata.name)

function usage() {
	local node_count=0
	local total_percent_cpu=0
	local total_percent_mem=0
	local -r nodes="$*"

	for n in $nodes; do
		# Scrape the request percentages from the human-readable describe output
		# (this is exactly the part that broke when the output format changed).
		local requests=$($KUBECTL describe node "$n" | grep -A2 -E "^\\s*CPU Requests" | tail -n1)
		local percent_cpu=$(echo "$requests" | awk -F "[()%]" '{print $2}')
		local percent_mem=$(echo "$requests" | awk -F "[()%]" '{print $8}')
		echo "$n: ${percent_cpu}% CPU, ${percent_mem}% memory"

		node_count=$((node_count + 1))
		total_percent_cpu=$((total_percent_cpu + percent_cpu))
		total_percent_mem=$((total_percent_mem + percent_mem))
	done

	local -r avg_percent_cpu=$((total_percent_cpu / node_count))
	local -r avg_percent_mem=$((total_percent_mem / node_count))

	echo "Average usage: ${avg_percent_cpu}% CPU, ${avg_percent_mem}% memory."
}

usage $NODES
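As a possible starting point, here is a rough, untested sketch that pulls the same numbers from the pod specs via JSON instead of parsing the describe text. It assumes jq is installed, counts only Running pods, ignores init containers, and only handles the common m/Ki/Mi/Gi unit suffixes:

# Sum per-node CPU/memory *requests* from the pod specs (JSON output), rather than
# parsing the human-readable `kubectl describe node` text.
kubectl get pods --all-namespaces --field-selector=status.phase=Running -o json |
  jq -r '
    .items[]
    | .spec.nodeName as $node
    | .spec.containers[]
    | [$node, (.resources.requests.cpu // "0"), (.resources.requests.memory // "0")]
    | @tsv' |
  awk -F'\t' '
    # Normalize CPU to millicores and memory to bytes, then sum per node.
    function cpu_m(v) { return (v ~ /m$/) ? v + 0 : v * 1000 }
    function mem_b(v) {
      if (v ~ /Ki$/) return v * 1024
      if (v ~ /Mi$/) return v * 1024 * 1024
      if (v ~ /Gi$/) return v * 1024 * 1024 * 1024
      return v + 0
    }
    { cpu[$1] += cpu_m($2); mem[$1] += mem_b($3) }
    END {
      for (n in cpu)
        printf "%s: %dm CPU requested, %.2f GiB memory requested\n", n, cpu[n], mem[n] / (1024 * 1024 * 1024)
    }'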

@tossisop

tossisop commented Mar 6, 2024

I can try to implement such functionality. I don't have much free time right now, so I can't promise anything.

@AKSarav
Owner

AKSarav commented Mar 6, 2024

I initially thought of adding pod usage information to KubeNodeUsage but dropped the plan because I was worried about the UI/UX. I welcome any design proposals or models, though, and will keep this thread updated.

@TJM
Author

TJM commented Mar 6, 2024

To be clear, while it would be "useful" to present pod statistics using this style of UI, what I am requesting is node statistics: basically the information at the end of kubectl describe node <nodename> ... in tabular format at least, but preferably in this "graphical" format.

Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests         Limits
  --------           --------         ------
  cpu                1311m (16%)      2 (25%)
  memory             1644515584 (5%)  5895661056 (19%)
  ephemeral-storage  0 (0%)           0 (0%)
  hugepages-1Gi      0 (0%)           0 (0%)
  hugepages-2Mi      0 (0%)           0 (0%)

I think it would be useful to figure out why the cloud provider decided to scale out our cluster nodes when the actual usage was quite low. We assume it has to do with CPU or memory requests, but it is rather difficult to get those for 25+ nodes ;)
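As a quick, fragile stopgap (same text-parsing caveat as the old script above, and kubectl top requires metrics-server), something like this could compare live usage with requests across all nodes at once:

# Live CPU/memory usage per node (requires metrics-server).
kubectl top nodes
# Requested CPU/memory per node, scraped from the "Allocated resources" rows;
# the grep pattern is a guess at the current output format and may need adjusting.
kubectl describe nodes | grep -E '^Name:|^  (cpu|memory) +[0-9]'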

@AKSarav added the enhancement (New feature or request) label on Jun 30, 2024