Deploying and monitoring the ELK Stack

Deploy Elasticsearch and Kibana to monitor

  • The first step is to deploy the Elasticsearch cluster and Kibana instance we want to monitor: our production stack.

  • Let's deploy:

    > kubectl apply -f 01-monitored-es-kb.yaml
    
    elasticsearch.elasticsearch.k8s.elastic.co/elasticsearch created
    kibana.kibana.k8s.elastic.co/kibana created
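  • For reference, the manifest looks roughly like the sketch below: a 3-node Elasticsearch cluster plus a Kibana instance published through a LoadBalancer service. The version is illustrative; the file in this repo is the authoritative definition.

    apiVersion: elasticsearch.k8s.elastic.co/v1
    kind: Elasticsearch
    metadata:
      name: elasticsearch
    spec:
      version: 7.10.1            # illustrative, use the version pinned in the file
      nodeSets:
        - name: default
          count: 3
    ---
    apiVersion: kibana.k8s.elastic.co/v1
    kind: Kibana
    metadata:
      name: kibana
    spec:
      version: 7.10.1            # illustrative
      count: 1
      elasticsearchRef:
        name: elasticsearch
      http:
        service:
          spec:
            type: LoadBalancer   # gives us the EXTERNAL-IP used below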
  • Once deployed, we can access Kibana. We'll get the Load Balancer public IP:

    > kubectl get svc --selector='kibana.k8s.elastic.co/name=kibana'
    
    NAME             TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)          AGE
    kibana-kb-http   LoadBalancer   10.15.246.85   35.246.189.26   5601:31283/TCP   2m51s
  • And the elastic user password:

    > echo $(kubectl get secret elasticsearch-es-elastic-user -o=jsonpath='{.data.elastic}' | base64 --decode)
    
    234IrkC5886H9oPL3dK7lttD
  • We can now point our browser to https://EXTERNAL-IP:5601, in the example https://35.246.189.26:5601, and log in with the user elastic and the password we extracted from the secret, 234IrkC5886H9oPL3dK7lttD in our example.

    • As we are using self-signed certificates, we'll have to bypass the browser warning.
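  • If you prefer the command line, you can also confirm Kibana is up by querying its status API (a quick hypothetical check, using your own EXTERNAL-IP and password); an overall state of green means it is ready:

    > curl -sk -u "elastic:234IrkC5886H9oPL3dK7lttD" "https://35.246.189.26:5601/api/status" | jq '.status.overall.state'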
  • Feel free to load sample data into the Kibana instance, or play around.

  • This is what we would see in Kubernetic: 3 Pods for the Elasticsearch StatefulSet and 1 for the Kibana instance.

    Production Pods

Deploy Logstash

  • We'll now go ahead and deploy a couple of Logstash instances.

  • Logstash cannot be deployed using the ECK operator, but we can use the Helm chart. It's still not GA, so use it with caution in production, as it's not yet fully supported.

  • For the sake of simplicity here, we'll just deploy a couple of Pods named logstash-01 and logstash-02, with a simple pipeline that uses the Logstash heartbeat input plugin to send an event every 60 seconds (sketched below). This will be configured using a k8s ConfigMap.
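
  • The pipeline boils down to something like the following sketch (a minimal example, not the literal contents of the ConfigMap; the elasticsearch-es-http service name follows the ECK <cluster-name>-es-http convention):

    input {
      heartbeat {
        interval => 60
        message  => "ok"
      }
    }
    output {
      elasticsearch {
        hosts    => ["https://elasticsearch-es-http:9200"]
        user     => "elastic"
        password => "changeme"                  # replaced with the real password below
        ssl      => true
        ssl_certificate_verification => false   # self-signed certificates
        index    => "logstash-heartbeat-%{+YYYY.MM}"
      }
    }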

  • Before deploying, we need to make some changes to the file 02-monitored-logstash.yaml, which also describes the Logstash ConfigMap.

    • In order to monitor the cluster, we need to set a value for monitoring.cluster_uuid, to bind the Logstash metrics to the cluster we created in the first step.

    • The cluster_uuid is in the annotations for that cluster. Execute the following to get it:

      > kubectl get elasticsearch elasticsearch -o json | jq -r '.metadata.annotations["elasticsearch.k8s.elastic.co/cluster-uuid"]'
      
      wZfJcRAOToubncOnkN_6rg
    • And update the ConfigMap accordingly. In our case, we changed monitoring.cluster_uuid: changeme to monitoring.cluster_uuid: wZfJcRAOToubncOnkN_6rg.

    • Next we need to update the elastic user password in the ConfigMap. Note that this is not what we would do in production; there we would create a dedicated user with limited permissions. We'll use the elastic user here for simplicity, as this is a getting started example.

    • Update the ConfigMap; in our case, we changed password => "changeme" to password => "234IrkC5886H9oPL3dK7lttD", the password extracted in the first step.
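
    • After both changes, the two edited values in the ConfigMap end up roughly like this (a sketch of just those lines, with our example values):

      # logstash.yml (metrics binding)
      monitoring.cluster_uuid: "wZfJcRAOToubncOnkN_6rg"

      # logstash.conf (elasticsearch output)
      password => "234IrkC5886H9oPL3dK7lttD"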

  • Note that we defined a label, monitoring_label_type: ls, that we'll later use to configure the Metricbeat Logstash module to collect its metrics.

  • We can go ahead and create the Logstash Pods and ConfigMap.

    > kubectl apply -f 02-monitored-logstash.yaml
    
    configmap/logstash-configmap created
    pod/logstash-01 created
    pod/logstash-02 created
  • We can check the logs to make sure our Pods started successfully:

    > kubectl logs -f logstash-01
    
    ...
    [INFO ] 2021-01-06 19:17:54.985 [[main]-pipeline-manager] javapipeline - Pipeline started {"pipeline.id"=>"main"}
    [INFO ] 2021-01-06 19:17:55.021 [Agent thread] agent - Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
    [INFO ] 2021-01-06 19:17:55.092 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
    
    > kubectl logs -f logstash-02
    
    ...
    [INFO ] 2021-01-06 19:17:54.620 [[main]-pipeline-manager] javapipeline - Pipeline started {"pipeline.id"=>"main"}
    [INFO ] 2021-01-06 19:17:54.651 [Agent thread] agent - Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
    [INFO ] 2021-01-06 19:17:54.712 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
  • We can also inspect the deployed pods using Kubernetic.

    Logstash Pods
    Logstash-01 Pod Logs

  • Finally, if we log into Kibana, we should be able to see a new index, logstash-heartbeat-YYYY.MM. In our case, logstash-heartbeat-2021.01.

    Logstash Heartbeat Discover
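
  • Without leaving the terminal, we can also check that the index exists by port-forwarding the Elasticsearch service (a hypothetical check; elasticsearch-es-http is the HTTP service ECK creates for the cluster):

    > kubectl port-forward service/elasticsearch-es-http 9200 &
    > curl -sk -u "elastic:234IrkC5886H9oPL3dK7lttD" "https://localhost:9200/_cat/indices/logstash-heartbeat-*?v"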

Deploy a monitoring cluster

  • The third step would be to create a dedicated monitoring cluster: a 1-node Elasticsearch cluster, with 1 Kibana instance. The Kibana instance will also be published using an external Load Balancer.

  • We'll deploy the resources defined in the file 03-monitoring-es-kb.yaml:

    > kubectl apply -f 03-monitoring-es-kb.yaml
    
    elasticsearch.elasticsearch.k8s.elastic.co/elasticsearch-monitoring created
    kibana.kibana.k8s.elastic.co/kibana-monitoring created
  • And similarly to the first step, we'll get the public IP and the elastic user password to access our monitoring deployment.

    > kubectl get svc --selector='kibana.k8s.elastic.co/name=kibana-monitoring'
    
    NAME                        TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)          AGE
    kibana-monitoring-kb-http   LoadBalancer   10.15.241.68   34.107.31.82   5601:32116/TCP   36s
    
    > echo $(kubectl get secret elasticsearch-monitoring-es-elastic-user -o=jsonpath='{.data.elastic}' | base64 --decode)
    
    Y1Qsa3v367NT225jq50LuWgG
  • We can now access our monitoring cluster at https://34.107.31.82:5601 (your EXTERNAL-IP), with the user elastic and the password Y1Qsa3v367NT225jq50LuWgG (your password from the previous step).

  • We can also check the monitoring stack using Kubernetic.

    Monitoring Cluster

  • This is still an empty cluster, so nothing will show up in Kibana Stack Monitoring yet. We need to deploy Metricbeat and Filebeat to collect metrics and logs and send them to this cluster, which we'll do in the final step.

    Empty monitoring

Deploy the Monitoring Beats

  • We'll now create the Metricbeat and Filebeat collectors defined in 04-monitoring-beats.yaml.

  • Metricbeat will be a Deployment with a single Pod, deployed in the same default namespace as the rest of the clusters, collecting data from Elasticsearch, Logstash and Kibana.

  • We configure 3 modules (module: elasticsearch, module: kibana and module: logstash) and apply the right condition to each. For example, for Elasticsearch we'll use the label condition equals.kubernetes.labels.stack-monitoring_elastic_co/type: es:

    templates:
      - condition:
          equals.kubernetes.labels.stack-monitoring_elastic_co/type: es
        config:
          - module: elasticsearch
            period: 10s
            hosts: "https://${data.host}:${data.ports.https}"
            username: ${MONITORED_ES_USERNAME}
            password: ${MONITORED_ES_PASSWORD}
            ssl.verification_mode: "none"
            xpack.enabled: true
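  • The Logstash template follows the same pattern, keyed on the monitoring_label_type: ls label we defined earlier. A sketch of what it looks like (the port is the Logstash API port seen in the logs above; the exact fields are in 04-monitoring-beats.yaml):

      - condition:
          equals.kubernetes.labels.monitoring_label_type: ls
        config:
          - module: logstash
            period: 10s
            hosts: "${data.host}:9600"
            xpack.enabled: true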
  • Filebeat is deployed as a DaemonSet. We need it running on each k8s node, to pick up any logs from the instances running on each Kubernetes worker host.

  • Filebeat will use hints to collect logs from the Pods that have co.elastic.logs/enabled: "true". In our example, Elasticsearch and Kibana.
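
  • The hints-based autodiscover configuration is essentially the following (a minimal sketch); with the default config disabled, only Pods that explicitly opt in with co.elastic.logs/enabled: "true" are collected:

    filebeat.autodiscover:
      providers:
        - type: kubernetes
          hints.enabled: true
          hints.default_config.enabled: false   # only Pods that opt in are collected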

  • Note that it's configured to send data to the elasticsearch-monitoring cluster on the same k8s namespace.

    elasticsearchRef:
      name: elasticsearch-monitoring
  • We can now deploy the file 04-monitoring-beats.yaml:

    > kubectl apply -f 04-monitoring-beats.yaml
    
    beat.beat.k8s.elastic.co/metricbeat-monitoring created
    clusterrole.rbac.authorization.k8s.io/metricbeat created
    serviceaccount/metricbeat created
    clusterrolebinding.rbac.authorization.k8s.io/metricbeat created
    beat.beat.k8s.elastic.co/filebeat-monitoring created
    clusterrole.rbac.authorization.k8s.io/filebeat created
    serviceaccount/filebeat created
    clusterrolebinding.rbac.authorization.k8s.io/filebeat created
  • We can use Kubernetic to have a look at the Metricbeat Deployment and the Filebeat DaemonSet.

    Metricbeat Deployment
    Filebeat DaemonSet
    Beats Pods

  • Note also that we needed to create some cluster roles and service accounts to give the Beats the right authorizations for autodiscover. Make sure you check the requirements for more recent versions, as they can change.
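
  • As an illustration, the ClusterRole rules boil down to read access on the resources autodiscover watches (a sketch; check the file for the exact list):

    rules:
      - apiGroups: [""]
        resources: ["namespaces", "pods", "nodes"]
        verbs: ["get", "list", "watch"]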

  • Once this is in place, we can now go back to Kibana Stack Monitoring and we should see our stack.

    Kibana Stack Monitoring

Clean-up

  • When we are done with the testing, it is recommended to remove the deployments to free up resources (external Load Balancers, Persistent Volumes).

  • Remove our deployments:

    > kubectl delete -f 04-monitoring-beats.yaml
    beat.beat.k8s.elastic.co "metricbeat-monitoring" deleted
    clusterrole.rbac.authorization.k8s.io "metricbeat" deleted
    serviceaccount "metricbeat" deleted
    clusterrolebinding.rbac.authorization.k8s.io "metricbeat" deleted
    beat.beat.k8s.elastic.co "filebeat-monitoring" deleted
    clusterrole.rbac.authorization.k8s.io "filebeat" deleted
    serviceaccount "filebeat" deleted
    clusterrolebinding.rbac.authorization.k8s.io "filebeat" deleted
    
    > kubectl delete -f 03-monitoring-es-kb.yaml
    elasticsearch.elasticsearch.k8s.elastic.co "elasticsearch-monitoring" deleted
    kibana.kibana.k8s.elastic.co "kibana-monitoring" deleted
    
    > kubectl delete -f 02-monitored-logstash.yaml
    configmap "logstash-configmap" deleted
    pod "logstash-01" deleted
    pod "logstash-02" deleted
    
    > kubectl delete -f 01-monitored-es-kb.yaml
    elasticsearch.elasticsearch.k8s.elastic.co "elasticsearch" deleted
    kibana.kibana.k8s.elastic.co "kibana" deleted

  • And finally, make sure all the resources were removed:

    > kubectl get elastic
    No resources found in default namespace.
    
    > kubectl get service
    NAME                    TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)          AGE
    kubernetes              ClusterIP      10.23.240.1     <none>         443/TCP          7d11h
    
    > kubectl get pvc
    No resources found in default namespace.