This document will show you how to install Prometheus, Grafana, and Grafana Loki on a GKE cluster. It also covers Grafana basics, including adding the Loki data source so that you can view and query logs, and exploring the default dashboards provided by the kube-prometheus stack.
Note that we are using the kube-prometheus stack; refer to the kube-prometheus documentation for reference.
On macOS you can use brew to install these packages:
- gojsontoyaml: go install github.com/brancz/gojsontoyaml@latest
- jsonnet: go install github.com/google/go-jsonnet/cmd/jsonnet@latest
- wget
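A minimal install sketch, assuming the standard Homebrew formula names (note that jb, the jsonnet-bundler used below, is also required but not listed above):
# Assumed Homebrew formula names; jsonnet-bundler provides the jb command used below
brew install go wget jsonnet-bundler
# The Go-based tools land in $(go env GOPATH)/bin -- make sure it is on your PATH
go install github.com/brancz/gojsontoyaml@latest
go install github.com/google/go-jsonnet/cmd/jsonnet@latest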
mkdir kube-prometheus
cd kube-prometheus
jb init
jb install github.com/prometheus-operator/kube-prometheus/jsonnet/kube-prometheus@main
wget https://raw.githubusercontent.com/prometheus-operator/kube-prometheus/main/example.jsonnet -O example.jsonnet
wget https://raw.githubusercontent.com/prometheus-operator/kube-prometheus/main/build.sh -O build.sh
jb update
chmod +x build.sh
Note: If you need to update GOPATH in the build.sh file, edit the line as shown below:
jsonnet -J vendor -m manifests "${1-example.jsonnet}" | xargs -I{} sh -c 'cat {} | $(go env GOPATH)/bin/gojsontoyaml > {}.yaml; rm -f {}' -- {}
vim memsql.jsonnet
local kp = (import 'kube-prometheus/main.libsonnet') + {
  values+:: {
    common+: {
      namespace: 'monitoring',
    },
    prometheus+:: {
      namespaces+: ['singlestore'],
    },
  },
  memsqlMetrics: {
    serviceMonitorMyNamespace: {
      apiVersion: 'monitoring.coreos.com/v1',
      kind: 'ServiceMonitor',
      metadata: {
        name: 'memsql-metrics',
        namespace: 'singlestore',
      },
      spec: {
        endpoints: [
          {
            path: '/metrics',
            targetPort: 'metrics',
          },
        ],
        jobLabel: 'app.kubernetes.io/name',
        selector: {
          matchLabels: {
            'app.kubernetes.io/name': 'memsql-cluster',
            'app.kubernetes.io/component': 'cluster',
          },
        },
      },
    },
  },
  memsqlClusterMetrics: {
    serviceMonitorMyNamespace: {
      apiVersion: 'monitoring.coreos.com/v1',
      kind: 'ServiceMonitor',
      metadata: {
        name: 'memsql-cluster-metrics',
        namespace: 'singlestore',
      },
      spec: {
        endpoints: [
          {
            path: '/cluster-metrics',
            targetPort: 'metrics',
          },
        ],
        jobLabel: 'app.kubernetes.io/name',
        selector: {
          matchLabels: {
            'app.kubernetes.io/name': 'memsql-cluster',
            'app.kubernetes.io/component': 'master',
          },
        },
      },
    },
  },
};
{ 'setup/0namespace-namespace': kp.kubePrometheus.namespace } +
{
  ['setup/prometheus-operator-' + name]: kp.prometheusOperator[name]
  for name in std.filter((function(name) name != 'serviceMonitor' && name != 'prometheusRule'), std.objectFields(kp.prometheusOperator))
} +
// serviceMonitor and prometheusRule are separated so that they can be created after the CRDs are ready
{ 'prometheus-operator-serviceMonitor': kp.prometheusOperator.serviceMonitor } +
{ 'prometheus-operator-prometheusRule': kp.prometheusOperator.prometheusRule } +
{ 'kube-prometheus-prometheusRule': kp.kubePrometheus.prometheusRule } +
{ ['alertmanager-' + name]: kp.alertmanager[name] for name in std.objectFields(kp.alertmanager) } +
{ ['blackbox-exporter-' + name]: kp.blackboxExporter[name] for name in std.objectFields(kp.blackboxExporter) } +
{ ['grafana-' + name]: kp.grafana[name] for name in std.objectFields(kp.grafana) } +
{ ['kube-state-metrics-' + name]: kp.kubeStateMetrics[name] for name in std.objectFields(kp.kubeStateMetrics) } +
{ ['kubernetes-' + name]: kp.kubernetesControlPlane[name] for name in std.objectFields(kp.kubernetesControlPlane) } +
{ ['node-exporter-' + name]: kp.nodeExporter[name] for name in std.objectFields(kp.nodeExporter) } +
{ ['prometheus-' + name]: kp.prometheus[name] for name in std.objectFields(kp.prometheus) } +
{ ['prometheus-adapter-' + name]: kp.prometheusAdapter[name] for name in std.objectFields(kp.prometheusAdapter) } +
{ ['memsql-metrics-' + name]: kp.memsqlMetrics[name] for name in std.objectFields(kp.memsqlMetrics) } +
{ ['memsql-cluster-metrics-' + name]: kp.memsqlClusterMetrics[name] for name in std.objectFields(kp.memsqlClusterMetrics) }
Note: The only change you need to make in this file is to replace singlestore with the namespace in which SingleStore is deployed.
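For example, a quick substitution (a sketch using macOS sed syntax; <your-namespace> is a placeholder):
sed -i '' 's/singlestore/<your-namespace>/g' memsql.jsonnet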
./build.sh memsql.jsonnet
kubectl apply --server-side -f manifests/setup
kubectl apply -f manifests/
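It may take a minute for everything to come up; you can watch the pods in the monitoring namespace:
kubectl --namespace monitoring get pods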
Run the command below and then go to http://localhost:9090 in your web browser to access Prometheus
kubectl --namespace monitoring port-forward svc/prometheus-k8s 9090
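Once the port-forward is running, a quick sanity check against Prometheus' standard health endpoint:
curl http://localhost:9090/-/healthy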
Note: further installation methods are provided in the Reference section below
For this installation, we will be creating a Google Cloud Storage bucket for our backend log store.
Install Tanka
macOS users can use brew:
brew install tanka
Go to the Google Cloud Storage browser and click Create a bucket. We will need to create a service account in order for Loki to access the bucket, so next go to IAM & Admin > Service Accounts and create a service account. I gave my service account the Storage Admin and Storage Object Admin roles. Once the service account is created, click into the account and go to the Keys section. There you can click Add key; make sure to save the downloaded file.
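If you prefer the CLI, a roughly equivalent sketch with gcloud and gsutil (the project, bucket, and service-account names below are placeholders):
# Placeholders: my-project, sdb-loki, loki-sa
gsutil mb -p my-project gs://sdb-loki
gcloud iam service-accounts create loki-sa --project=my-project
gcloud projects add-iam-policy-binding my-project \
  --member='serviceAccount:loki-sa@my-project.iam.gserviceaccount.com' \
  --role='roles/storage.admin'
gcloud iam service-accounts keys create key.json \
  --iam-account=loki-sa@my-project.iam.gserviceaccount.com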
kubectl -n monitoring create secret generic google-key --from-file=key.json=<path/to/downloaded/key.json>
Please refer to the Grafana Loki documentation on installing with Tanka for reference.
mkdir loki
cd loki
tk init
vim ~/.kube/config
Find the - cluster entry for the cluster you are using and locate the server: field. It will be right below the cluster certificate and right above the cluster name. This is your Kubernetes API server.
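Alternatively, you can read it straight from your current context with kubectl:
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'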
tk env add environments/loki --namespace=monitoring --server=<Kubernetes API server>
jb install github.com/grafana/loki/production/ksonnet/loki@main
jb install github.com/grafana/loki/production/ksonnet/promtail@main
This example creates a .loki file with the loki username:
htpasswd -c .loki loki
It will prompt for a new password; enter it twice. Then view the contents of the file (e.g., cat .loki) for input into the htpasswd_contents: field of main.jsonnet.
Make sure to replace the commented items with values that are specific to your deployment.
vim environments/loki/main.jsonnet
local gateway = import 'loki/gateway.libsonnet';
local loki = import 'loki/loki.libsonnet';
local promtail = import 'promtail/promtail.libsonnet';
local k = import 'ksonnet-util/kausal.libsonnet';
local gcsBucket = import 'gcsBucket.libsonnet';

loki + promtail + gateway + gcsBucket {
  _config+:: {
    namespace: 'monitoring',  // use the same namespace as the google-key secret created above
    htpasswd_contents: 'loki:$apr1$i.G5yqP8$LGo1ANcwfGH87unfpIr6m.',  // contents of your .loki file created above
    // GCS variables -- remove if not using GCS
    storage_backend: 'gcs',
    gcs_bucket_name: 'sdb-loki',  // name of the Google Cloud Storage bucket
    // Set this variable based on the type of object storage you're using.
    boltdb_shipper_shared_store: 'gcs',
    // Update the object_store and from fields
    loki+: {
      schema_config: {
        configs: [{
          from: '2022-02-02',  // set this date to exactly two weeks before today
          store: 'boltdb-shipper',
          object_store: 'gcs',
          schema: 'v11',
          index: {
            prefix: '%s_index_' % $._config.table_prefix,
            period: '%dh' % $._config.index_period_hours,
          },
        }],
      },
    },
    // Update the container_root_path if necessary
    promtail_config+: {
      clients: [{
        scheme:: 'http',
        hostname:: 'gateway.%(namespace)s.svc' % $._config,
        username:: 'loki',
        password:: 'loki',  // the password used for the loki username in the htpasswd file
        container_root_path:: '/var/lib/docker',
      }],
    },
    replication_factor: 3,
    consul_replicas: 1,
  },
}
vim lib/gcsBucket.libsonnet
// gcsBucket.libsonnet
// Import libs
{
  local volumeMount = $.core.v1.volumeMount,
  local container = $.core.v1.container,
  local statefulSet = $.apps.v1.statefulSet,
  local volume = $.core.v1.volume,
  local secret = $.core.v1.secret,
  local deployment = $.apps.v1.deployment,
  local pvc = $.core.v1.persistentVolumeClaim,
  local spec = $.core.v1.spec,
  local storageClass = $.storage.v1.storageClass,

  // distributor
  distributor_container+::
    container.withEnvMap({ GOOGLE_APPLICATION_CREDENTIALS: '/var/secrets/google/key.json' }),
  distributor_deployment+:
    $.util.secretVolumeMount('google', '/var/secrets/google'),
  distributor_args+:: {
    'config.expand-env': 'true',
  },

  // querier
  querier_container+::
    container.withEnvMap({ GOOGLE_APPLICATION_CREDENTIALS: '/var/secrets/google/key.json' }),
  querier_statefulset+::
    $.util.secretVolumeMount('google', '/var/secrets/google'),
  querier_args+:: {
    'config.expand-env': 'true',
  },
  querier_data_pvc+:
    pvc.mixin.spec.withStorageClassName('standard'),

  // ingester
  ingester_container+::
    container.withEnvMap({ GOOGLE_APPLICATION_CREDENTIALS: '/var/secrets/google/key.json' }),
  ingester_statefulset+:
    $.util.secretVolumeMount('google', '/var/secrets/google'),
  ingester_args+:: {
    'config.expand-env': 'true',
  },
  ingester_data_pvc+::
    pvc.mixin.spec.withStorageClassName('standard'),
  ingester_wal_pvc+::
    pvc.mixin.spec.withStorageClassName('standard'),

  // compactor
  compactor_container+::
    container.withEnvMap({ GOOGLE_APPLICATION_CREDENTIALS: '/var/secrets/google/key.json' }),
  compactor_statefulset+:
    $.util.secretVolumeMount('google', '/var/secrets/google'),
  compactor_args+:: {
    'config.expand-env': 'true',
  },
  compactor_data_pvc+::
    pvc.mixin.spec.withStorageClassName('standard'),

  // table-manager
  table_manager_container+::
    container.withEnvMap({ GOOGLE_APPLICATION_CREDENTIALS: '/var/secrets/google/key.json' }),
  table_manager_deployment+:
    $.util.secretVolumeMount('google', '/var/secrets/google'),
  table_manager_args+:: {
    'config.expand-env': 'true',
  },

  // query-frontend
  query_frontend_container+::
    container.withEnvMap({ GOOGLE_APPLICATION_CREDENTIALS: '/var/secrets/google/key.json' }),
  query_frontend_deployment+:
    $.util.secretVolumeMount('google', '/var/secrets/google'),
  query_frontend_args+:: {
    'config.expand-env': 'true',
  },
}
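Before exporting, you can render the environment locally to sanity-check the output:
tk show environments/loki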
tk export manifests environments/loki
This will give you a manifests directory of YAML files for review.
kubectl --namespace monitoring create -f manifests
Run the command below and then go to http://localhost:3000 in your web browser to access Grafana
kubectl --namespace monitoring port-forward svc/grafana 3000:3000
Go to Configuration > Data Sources
Click add data source and then type in 'Loki'
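Set the URL field for the data source. For this Tanka-based install it will likely be the gateway service created above (e.g., http://gateway.monitoring.svc), with Basic auth enabled using the loki username and the htpasswd password; these values are assumptions, so adjust them to your deployment.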
Click 'Save & Test' and you should see a confirmation that the data source is connected
Now you can explore SingleStore logs through Grafana + Loki. Expand the Log Browser for a helpful UI that lists the labels available in your deployment.
Example query with SingleStore logs
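For instance, a minimal LogQL sketch (the namespace label value is an assumption; pick labels from the Log Browser):
{namespace="singlestore"} |= "ERROR"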
Further information on LogQL queries is available in the Grafana Loki documentation
Kube-prometheus added 24 dashboards to Grafana to get you started with dashboard monitoring
Click on a dashboard to try it out!
Now we have installed a monitoring stack consisting of Prometheus, Grafana, and Grafana Loki. Further reading: Dashboard overview, Grafana alerting
Further installation methods are detailed here
Prereqs: Install Helm, then run the commands below
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update
helm upgrade --install loki --namespace=monitoring grafana/loki-stack --set loki.persistence.enabled=true,loki.persistence.storageClassName=standard,loki.persistence.size=5Gi
Note: you can change the size of the volume claim
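To confirm the release deployed:
helm status loki --namespace monitoring
kubectl --namespace monitoring get pods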
Grafana is accessed the same way, but note that the Loki URL is http://loki:3100 when adding the Loki data source to Grafana after installing with Helm