Correctly obtain Ceph service names #9

Open

wants to merge 1 commit into base: master
Conversation

@yadneshk (Contributor) commented May 22, 2024

This change concerns parsing the names of Ceph containers, which are used to build the path to the socket files of Ceph services under /var/lib/ceph/<ceph_fsid>/<ceph_service_name>.

Ceph containers are normally named following the pattern ceph-<fsid>-mon-controller-0. However, when the ceph-nfs-pacemaker container is deployed, the current logic fails to parse its name and the code breaks, because that name does not follow the same pattern as the other containers.

This change fixes the logic by splitting the name only when the fsid actually appears in the container name.
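Below is a minimal sketch of the guarded parsing this description implies; the helper function, its signature, and the example fsid are hypothetical illustrations, not the PR's actual code (only the strings calls mirror the diff that follows):

```go
package main

import (
	"fmt"
	"strings"
)

// cephServiceName extracts the service portion of a container name
// (e.g. "mon") but leaves names that carry neither the fsid nor the
// hostname, such as "ceph-nfs-pacemaker", untouched.
func cephServiceName(containerName, cephFSID, nodeHostname string) string {
	name := containerName
	// Strip the trailing hostname only if it is actually present:
	// "ceph-<fsid>-mon-controller-0" -> "ceph-<fsid>-mon".
	if strings.Contains(name, nodeHostname) {
		name = strings.Split(name, nodeHostname)[0]
		name = strings.Trim(name, "-")
	}
	// Strip the "ceph-<fsid>-" prefix only if the fsid is present:
	// "ceph-<fsid>-mon" -> "mon".
	prefix := fmt.Sprintf("ceph-%s-", cephFSID)
	if strings.Contains(name, prefix) {
		name = strings.SplitAfter(name, prefix)[1]
	}
	return name
}

func main() {
	fsid := "4b5c8c0a-ff60-454b-a1b4-9747aa737d19" // made-up fsid
	fmt.Println(cephServiceName("ceph-"+fsid+"-mon-controller-0", fsid, "controller-0")) // "mon"
	fmt.Println(cephServiceName("ceph-nfs-pacemaker", fsid, "controller-0"))             // "ceph-nfs-pacemaker"
}
```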

@@ -173,17 +173,22 @@ func genContainerCgroupPath(ctype cgroups.ControlType, id string, cgroup2 bool,
node_hostname = strings.Split(node_hostname, ".")[0]

// from `container_name` get the name of the ceph service for which cgroup path is to be found
ceph_service_name := strings.Split(container_name, node_hostname)[0]
ceph_service_name = strings.Trim(ceph_service_name, "-")
ceph_service_name = strings.SplitAfter(ceph_service_name, fmt.Sprintf("ceph-%s-", ceph_fsid))[1]
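As an aside (not part of the PR): the reason the unguarded code above breaks is that strings.SplitAfter returns a one-element slice when the separator does not occur in the string, so indexing [1] panics. A minimal reproduction with a made-up fsid prefix:

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	// "ceph-nfs-pacemaker" contains no "ceph-<fsid>-" prefix, so
	// SplitAfter returns []string{"ceph-nfs-pacemaker"} (length 1)...
	parts := strings.SplitAfter("ceph-nfs-pacemaker", "ceph-4b5c8c0a-")
	fmt.Println(len(parts)) // 1
	// ...and indexing parts[1] panics:
	// panic: runtime error: index out of range [1] with length 1
	fmt.Println(parts[1])
}
```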
Member
ceph_service_name is no longer used afterwards; in that case, can it go away completely?

@yadneshk (Contributor, Author) May 23, 2024

I realized using ceph_service_name improves code readability, so I've reintroduced the variable. In the latest changes, we obtain the Ceph service's name from the container name.
This change is needed because of https://bugzilla.redhat.com/show_bug.cgi?id=2279805


This might have changed after the comment, but right now ceph_service_name is used at L191.

@mrunge (Member) left a comment

Hi Yadnesh, I am a bit unclear on the change: the ceph_service_name computation is either not necessary at all, or its relation to container_name is unclear.

Ceph containers like "ceph-nfs-pacemaker" have neither the Ceph fsid nor the node's hostname in their names.

This change fixes the parsing of such container names.
@vyzigold commented Sep 3, 2024

I'm not familiar enough with Go's strings functions. What's the change here? The code looks really similar. Was it failing on the strings.* calls when something was missing in the string, and do the strings.Contains conditions fix that?

@vyzigold commented Sep 3, 2024

Sorry about the previous comment. Since then I've learned to read, and I've read the PR description. I think I see how the code works for ceph-<fsid>-mon-controller-0; I guess the ceph_service_name in that case would be "mon"? But what about the other case (ceph-nfs-pacemaker)? I guess that would stay "ceph-nfs-pacemaker"? Is that correct, or should it also be parsed down to something smaller?

@yadneshk (Contributor, Author) commented Sep 4, 2024

@vyzigold Yes, for ceph-nfs-pacemaker it would stay ceph-nfs-pacemaker. Since its socket path is /var/lib/ceph/<ceph_fsid>/ceph-nfs-pacemaker, the name must remain as it is.
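For illustration, with the path format from the PR description, the resulting socket paths would look like this (assumed mapping, placeholder fsid):

/var/lib/ceph/<ceph_fsid>/mon                 <- parsed from ceph-<ceph_fsid>-mon-controller-0
/var/lib/ceph/<ceph_fsid>/ceph-nfs-pacemaker <- ceph-nfs-pacemaker, kept unchanged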
