
[question] When I set pod cpu-bind-policy to "FullPCPUsOnly" and got resource-status, which cgroup file should I check #2274

Open
kingeasternsun opened this issue Nov 19, 2024 · 2 comments
Labels
kind/question Support request or question relating to Koordinator lifecycle/stale

Comments

@kingeasternsun
Contributor

What happened:
When I set the pod cpu-bind-policy to "FullPCPUsOnly", I got resource-status {"cpuset":"2,54"} from the pod annotation, but I never found any cgroup file related to this set.
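(For reference, a minimal way to read that annotation; the key scheduling.koordinator.sh/resource-status follows Koordinator's documented annotations, and <pod-name>/<namespace> are placeholders:)

```bash
# Print the Koordinator resource-status annotation of a pod.
kubectl get pod <pod-name> -n <namespace> \
  -o jsonpath='{.metadata.annotations.scheduling\.koordinator\.sh/resource-status}'
# Expected output in this case: {"cpuset":"2,54"}
```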

What you expected to happen:
Which cgroup file should I check to make sure this bind-policy is working?

Environment:

  • Koordinator version: - v0.6.2
  • Kubernetes version (use kubectl version): v1.22.5
  • docker/containerd version: containerd 1.5.0
  • OS (e.g: cat /etc/os-release): Ubuntu 20.04.4 LTS
  • Kernel (e.g. uname -a): Linux 5.10.112-11.al8.x86_64 #1 SMP Tue May 24 16:05:50 CST 2022 x86_64 x86_64 x86_64 GNU/Linux

Anything else we need to know:

@kingeasternsun kingeasternsun added the kind/question Support request or question relating to Koordinator label Nov 19, 2024
@kingeasternsun kingeasternsun changed the title [question]when When I set pod cpu-bind-policy to "FullPCPUsOnly" and got resource-status, which cgroup file should I to check [question]When I set pod cpu-bind-policy to "FullPCPUsOnly" and got resource-status, which cgroup file should to check Nov 19, 2024
@saintube
Member

@kingeasternsun Please check the file cpuset.cpus under the container-level cgroup,
e.g. /sys/fs/cgroup/cpuset/kubepods/podxxxxxx/yyyyyy/cpuset.cpus or /sys/fs/cgroup/kubepods.slice/kubepods-podxxxxxx.slice/yyyyyy/cpuset.cpus
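A minimal sketch for locating that file on the node, assuming cgroup v1 mounted at /sys/fs/cgroup/cpuset; <pod-name> and <namespace> are placeholders, and the exact directory layout depends on the kubelet cgroup driver:

```bash
# Print the cpuset assigned to each container of the pod (run on the node hosting it).
POD_UID=$(kubectl get pod <pod-name> -n <namespace> -o jsonpath='{.metadata.uid}')

# The cgroupfs driver keeps the UID as-is (kubepods/pod<uid>/...), while the systemd
# driver replaces '-' with '_' (kubepods-pod<uid>.slice/...), so match on both forms.
find /sys/fs/cgroup/cpuset -name cpuset.cpus \
  \( -path "*pod${POD_UID}*" -o -path "*pod${POD_UID//-/_}*" \) \
  -exec grep -H . {} + 2>/dev/null
# The printed ranges should match the resource-status annotation, e.g. "2,54".
```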


stale bot commented Feb 18, 2025

This issue has been automatically marked as stale because it has not had recent activity.
This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Close this issue or PR with /close

Thank you for your contributions.
