Project status: alpha. This operator and its companion application (k8s-network-prober) are still under active development. The main functionality of inter-Pod latency measurement is implemented, but the internal design and object API might change.
The K8s Network Prober Operator provides native API objects to manage the k8s-network-prober application. The purpose of this operator is to:
- Inject the `k8s-network-prober` sidecar into every Pod that matches the configured selector, via a mutating webhook
- Configure the sidecar container
The agent exports Prometheus metrics about inter-Pod latency (RTT, to be more precise). See the Prometheus section below to install and configure the Prometheus server.
This operator installs only one CRD, called `NetworkProber`, whose current implementation offers the following specification properties:
- `PodSelector`: label selector that specifies the set of Pods where the agent container is injected.
- `HttpPort`: the TCP port the agent uses to exchange probe requests.
- `PollingPeriod`: the time period between two consecutive probe requests.
- `HttpPrometheusPort`: the TCP port the agent uses to export Prometheus metrics.
- `AgentImage`: the container image of the agent.
Here is an example of the `NetworkProber` object. This will inject the agent in every Pod having the label `app=net-prober`.
```yaml
apiVersion: probes.bigmikes.io/v1alpha1
kind: NetworkProber
metadata:
  name: networkprober-sample
spec:
  podSelector:
    matchLabels:
      app: net-prober
  httpPort: "9090"
  pollingPeriod: "10s"
  httpPrometheusPort: "1222"
  agentImage: "bigmikes/kube-net-prober:test-version-v8"
```
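For the injection to happen, target Pods must carry a label matching `podSelector`. The following Deployment is only an illustrative sketch (its name and image are placeholders, not part of this project) whose Pod template would be selected by the sample above:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app                # placeholder name, not part of this project
spec:
  replicas: 2
  selector:
    matchLabels:
      app: net-prober
  template:
    metadata:
      labels:
        app: net-prober         # matches the NetworkProber podSelector above
    spec:
      containers:
      - name: demo              # placeholder application container
        image: nginx:1.25       # placeholder image
```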
You’ll need a Kubernetes cluster to run against. You can use KIND to get a local cluster for testing, or run against a remote cluster.
Note: Your controller will automatically use the current context in your kubeconfig file (i.e. whatever cluster `kubectl cluster-info` shows).
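For example, a throwaway local cluster can be created with KIND (assuming the `kind` CLI is installed; the cluster name below is arbitrary):

```sh
# Create a local test cluster and point kubectl at its context
kind create cluster --name net-prober-test
kubectl cluster-info --context kind-net-prober-test
```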
- Install Instances of Custom Resources:
```sh
kubectl apply -f config/samples/
```
- Build and push your image to the location specified by `IMG`:
```sh
make docker-build docker-push IMG=<some-registry>/k8s-network-prober-operator:tag
```
- Deploy the controller to the cluster with the image specified by `IMG`:
```sh
make deploy IMG=<some-registry>/k8s-network-prober-operator:tag
```
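Once deployed, you can check that the controller Pod is up. For a Kubebuilder-scaffolded project the controller usually runs in a `<project>-system` namespace; the exact namespace name below is an assumption, so adjust it to whatever `make deploy` created:

```sh
# Namespace name assumed from the default Kubebuilder layout
kubectl get pods -n k8s-network-prober-operator-system
```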
To delete the CRDs from the cluster:
```sh
make uninstall
```
Undeploy the controller from the cluster:
```sh
make undeploy
```
The Prometheus and Grafana stack can be installed with its own operator. In short, follow these instructions:
- Install the Prometheus Operator and CRDs:
```sh
git clone https://github.com/prometheus-operator/kube-prometheus.git
cd kube-prometheus
kubectl create -f manifests/setup
kubectl create -f manifests/
```
- Modify the `prometheus-k8s` `ClusterRole` to grant it permissions to access Pods:
```sh
kubectl edit ClusterRole prometheus-k8s
```
Then add the Pod read permissions:
```yaml
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
  - list
  - watch
```
- Create a `PodMonitor` object with a label selector that matches the Pods and a metrics endpoint port called `np-prometheus`.
```yaml
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: example-app
spec:
  selector:
    matchLabels:
      app: net-prober
  podMetricsEndpoints:
  - port: np-prometheus
  namespaceSelector:
    any: true
```
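To verify that Prometheus discovered the new targets, you can port-forward the Prometheus service installed by kube-prometheus and open the Targets page:

```sh
# Prometheus UI becomes available at http://localhost:9090/targets
kubectl --namespace monitoring port-forward svc/prometheus-k8s 9090
```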
- Open Grafana and check the `netprober_interpod_latency_s_bucket` histogram metric:
```sh
kubectl --namespace monitoring port-forward svc/grafana 3000
```
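Since the metric is a histogram, one way to visualize it is with `histogram_quantile`. For example, this PromQL query (a sketch that only assumes the standard `le` bucket label) shows the 95th-percentile inter-Pod RTT over the last 5 minutes:

```
histogram_quantile(0.95, sum(rate(netprober_interpod_latency_s_bucket[5m])) by (le))
```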
This is my very first Kubernetes Operator, so any feedback is welcome. Feel free to open PRs or issues against this repo.
This project aims to follow the Kubernetes Operator pattern.
It uses Controllers, which provide a reconcile function responsible for synchronizing resources until the desired state is reached on the cluster.
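As a rough sketch of what that reconcile loop looks like with controller-runtime (the actual `NetworkProber` reconciler in this repository may differ; the type and its fields below are illustrative only):

```go
package controllers

import (
	"context"

	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/log"
)

// NetworkProberReconciler is an illustrative reconciler type; the real one
// in this repository may carry additional fields (scheme, recorder, ...).
type NetworkProberReconciler struct {
	client.Client
}

// Reconcile is called every time a watched object changes. It compares the
// desired state (the object's spec) with the observed cluster state and keeps
// acting until the two converge.
func (r *NetworkProberReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	logger := log.FromContext(ctx)
	logger.Info("reconciling", "object", req.NamespacedName)

	// 1. Fetch the object named by req.
	// 2. Create, update, or delete dependent objects until they match the spec.
	// 3. Return an empty Result when in sync; an error or RequeueAfter re-queues the request.
	return ctrl.Result{}, nil
}
```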
- Install the CRDs into the cluster:
```sh
make install
```
- Run your controller (this will run in the foreground, so switch to a new terminal if you want to leave it running):
```sh
make run
```
NOTE: You can also run this in one step by running: `make install run`
If you are editing the API definitions, generate the manifests such as CRs or CRDs using:
```sh
make manifests
```
NOTE: Run `make --help` for more information on all potential `make` targets.
More information can be found via the Kubebuilder Documentation
Copyright 2022.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.