Enable additional metrics for kubelet metrics receiver #546
Conversation
Walkthrough
The pull request introduces modifications to `charts/k8s-infra/values.yaml`, enabling additional kubelet metrics.
Actionable comments posted: 0
🧹 Outside diff range and nitpick comments (1)
charts/k8s-infra/values.yaml (1)
Line range hint 1-24: Fix duplicate `clusterName` configuration.
The `clusterName` key appears twice in the configuration:
- Line 12: under the `global` section
- Line 24: at the root level

This duplication could lead to confusion and potential misconfiguration.
Apply this diff to remove the duplicate and consolidate the configuration:
```diff
# Global override values
global:
  # -- Overrides the Docker registry globally for all images
  imageRegistry: null
  # -- Global Image Pull Secrets
  imagePullSecrets: []
  # -- Overrides the storage class for all PVC with persistence enabled.
  storageClass: null
  # -- Kubernetes cluster domain
  # It is used only when components are installed in different namespace
  clusterDomain: cluster.local
  # -- Kubernetes cluster name
  # It is used to attached to telemetry data via resource detection processor
  clusterName: ""
  # -- Deployment environment to be attached to telemetry data
  deploymentEnvironment: ""
  # -- Kubernetes cluster cloud provider along with distribution if any.
  # example: `aws`, `azure`, `gcp`, `gcp/autogke`, `azure`, and `other`
  cloud: other

# -- K8s infra chart name override
nameOverride: ""
# -- K8s infra chart full name override
fullnameOverride: ""
# -- Whether to enable K8s infra chart
enabled: true
-# -- Name of the K8s cluster. Used by OtelCollectors to attach in telemetry data.
-clusterName: ""
```
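With the consolidation applied, chart users would set the cluster name only under `global`. A minimal override sketch, assuming the layout shown in the diff above (the cluster name and environment values are placeholders):

```yaml
# override-values.yaml: illustrative sketch; values are placeholders
global:
  # Attached to telemetry data via the resource detection processor
  clusterName: "prod-cluster-01"
  # Deployment environment to be attached to telemetry data
  deploymentEnvironment: "production"
```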
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (1)
charts/k8s-infra/values.yaml (2 hunks)
🔇 Additional comments (1)
charts/k8s-infra/values.yaml (1)
Line range hint 216-243: LGTM! The new metrics align with the PR objectives.
The enabled metrics appropriately address:
- CPU usage metrics to replace deprecated utilization metrics
- Uptime metrics for nodes, pods, and containers
- Resource utilization metrics for container limits and requests
Let's verify the metrics configuration:
✅ Verification successful
LGTM! The metrics configuration is correctly implemented
After reviewing the complete configuration file, I can confirm that:
- The new CPU usage metrics are properly enabled and there are no deprecated CPU utilization metrics enabled
- All required uptime metrics for nodes, pods, and containers are correctly configured
- The resource utilization metrics for container limits and requests are properly set up
- The metrics are configured under the correct section (`presets.kubeletMetrics.metrics`); a configuration sketch follows this list
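For reference, a sketch of what these toggles would look like under `presets.kubeletMetrics.metrics`, inferred from the metric names matched by the verification scripts below; the exact surrounding keys and defaults in the chart are assumed, not quoted:

```yaml
presets:
  kubeletMetrics:
    metrics:
      # Replacements for the deprecated *.cpu.utilization metrics
      k8s.node.cpu.usage:
        enabled: true
      k8s.pod.cpu.usage:
        enabled: true
      container.cpu.usage:
        enabled: true
      # Uptime metrics for nodes, pods, and containers
      k8s.node.uptime:
        enabled: true
      k8s.pod.uptime:
        enabled: true
      container.uptime:
        enabled: true
      # Container request/limit utilization metrics
      k8s.container.cpu_limit_utilization:
        enabled: true
      k8s.container.cpu_request_utilization:
        enabled: true
      k8s.container.memory_limit_utilization:
        enabled: true
      k8s.container.memory_request_utilization:
        enabled: true
```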
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify that the deprecated metrics are not enabled alongside the new ones
# Test: Search for any deprecated CPU utilization metrics. Expect: No results
rg -l "k8s\.(pod|node|container)\.cpu\.utilization.*enabled:\s*true"
# Test: Verify all new metrics are properly configured. Expect: All metrics present
rg "k8s\.(node|pod|container)\.(cpu\.usage|uptime|cpu_limit_utilization|cpu_request_utilization|memory_limit_utilization|memory_request_utilization).*enabled:\s*true"
Length of output: 239
Script:
#!/bin/bash
# Let's try a different approach by checking the entire file content
# This will help us understand the metric configurations in their full context
# Show the complete metrics configuration section
cat charts/k8s-infra/values.yaml
Length of output: 28141
Actionable comments posted: 0
🧹 Outside diff range and nitpick comments (2)
charts/k8s-infra/values.yaml (2)
234-241: LGTM: Resource utilization metrics enabled for capacity monitoring.
The addition of CPU and memory limit/request utilization metrics enables better resource monitoring and capacity planning.
Consider setting up alerts based on these metrics to proactively identify:
- Containers approaching resource limits
- Pods with significant disparity between requests and actual usage
- Potential resource constraints that could affect application performance (an example alert rule sketch follows this list)
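As an illustration of the first point, a Prometheus-style alerting-rule sketch; the exported metric name `k8s_container_cpu_limit_utilization`, the threshold, and the rule layout are assumptions that depend on how your pipeline exports and renames the OTel metrics:

```yaml
# alerting-rules.yaml: illustrative only, not part of this chart
groups:
  - name: kubelet-resource-utilization
    rules:
      - alert: ContainerNearCpuLimit
        # Assumes k8s.container.cpu_limit_utilization is exported as
        # k8s_container_cpu_limit_utilization (a fraction of the limit).
        expr: k8s_container_cpu_limit_utilization > 0.9
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "Container has used more than 90% of its CPU limit for 10 minutes"
```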
Line range hint 18-18: Remove duplicate `clusterName` configuration.
The file contains two identical `clusterName` configurations:
- In the global section: `global.clusterName`
- At the root level: `clusterName`

This duplication could lead to confusion and potential misconfiguration.
Recommend keeping only the global configuration and removing the root-level one:

```diff
# Global override values
global:
  # -- Overrides the Docker registry globally for all images
  imageRegistry: null
  # -- Global Image Pull Secrets
  imagePullSecrets: []
  # -- Overrides the storage class for all PVC with persistence enabled.
  storageClass: null
  # -- Kubernetes cluster domain
  # It is used only when components are installed in different namespace
  clusterDomain: cluster.local
  # -- Kubernetes cluster name
  # It is used to attached to telemetry data via resource detection processor
  clusterName: ""
  # -- Deployment environment to be attached to telemetry data
  deploymentEnvironment: ""
  # -- Kubernetes cluster cloud provider along with distribution if any.
  # example: `aws`, `azure`, `gcp`, `gcp/autogke`, `azure`, and `other`
  cloud: other

# -- K8s infra chart name override
nameOverride: ""
# -- K8s infra chart full name override
fullnameOverride: ""
# -- Whether to enable K8s infra chart
enabled: true
-# -- Name of the K8s cluster. Used by OtelCollectors to attach in telemetry data.
-clusterName: ""
```

Also applies to: 37-37
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (1)
charts/k8s-infra/values.yaml (2 hunks)
🔇 Additional comments (2)
charts/k8s-infra/values.yaml (2)
216-217: LGTM: CPU usage metrics enabled appropriately.
The addition of CPU usage metrics (`k8s.node.cpu.usage`, `k8s.pod.cpu.usage`, `container.cpu.usage`) aligns with the deprecation of utilization metrics and ensures data continuity.
Also applies to: 220-221, 232-233
218-219: LGTM: Uptime metrics enabled for comprehensive monitoring.
The addition of uptime metrics at node, pod, and container levels enables better monitoring of component lifecycle and availability.
Also applies to: 230-231, 242-243
The `k8s.{pod,node,container}.cpu.utilization` metrics are deprecated in favor of `k8s.{pod,node,container}.cpu.usage`. However, the new `.cpu.usage` metrics are not enabled by default, so we start collecting them early; by the time the `.cpu.utilization` metrics are removed, we will already have enough data backfilled. This PR also enables uptime and container request/limit utilization metrics.
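A minimal sketch of that transition, assuming kubeletstats-style per-metric toggles as discussed in the review above (only the node-level pair is shown; pods and containers follow the same pattern):

```yaml
presets:
  kubeletMetrics:
    metrics:
      # Deprecated metric: still collected for now so existing dashboards keep working
      k8s.node.cpu.utilization:
        enabled: true
      # Replacement metric: enabled early so history is backfilled before the
      # deprecated metric is removed upstream
      k8s.node.cpu.usage:
        enabled: true
```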
Summary by CodeRabbit
- New Features
  - Enabled additional kubelet metrics: CPU usage, uptime, and container resource request/limit utilization.
- Bug Fixes
  - Addressed the duplicate `clusterName` key to prevent potential configuration conflicts.