The Hasura Prometheus Connector connects to a Prometheus database, giving you an instant GraphQL API on top of your Prometheus data.
This connector is built with the Go Data Connector SDK and implements the Data Connector Spec.
The connector can introspect the available metrics on the Prometheus server and automatically transform them into collection queries. Each metric has a common structure:

```graphql
{
  <label_1>
  <label_2>
  # ...
  timestamp
  value
  labels
  values {
    timestamp
    value
  }
}
```
> **Note**
> Labels and metrics are introspected from the Prometheus server at the current time. You need to introspect again whenever new labels or metrics appear.
The configuration plugin introspects the labels of each metric and defines them as collection columns, enabling Hasura permissions and remote joins. The connector supports basic comparison filters for labels.
```graphql
{
  process_cpu_seconds_total(
    where: {
      timestamp: { _gt: "2024-09-24T10:00:00Z" }
      job: {
        _eq: "node"
        _neq: "prometheus"
        _in: ["node", "prometheus"]
        _nin: ["ndc-prometheus"]
        _regex: "prometheus.*"
        _nregex: "foo.*"
      }
    }
    args: { step: "5m", offset: "5m", timeout: "30s" }
  ) {
    job
    instance
    timestamp
    value
    values {
      timestamp
      value
    }
  }
}
```
The connector detects whether you want an instant query or a range query via the `timestamp` column:

- `_eq`: instant query at the exact timestamp.
- `_gt`, `_lt`: range query.

Range query mode is the default if none of the timestamp operators is set.
The `timestamp` and `value` fields hold the result of an instant query. If the request is a range query, `timestamp` and `value` are taken from the last item of the `values` series.
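The selection rule above can be sketched in a few lines (`query_kind` is a hypothetical helper for illustration, not the connector's actual code):

```python
def query_kind(timestamp_filter: dict) -> str:
    """Decide instant vs. range query from the timestamp operators."""
    if "_eq" in timestamp_filter:
        return "instant"  # exact timestamp -> instant query
    return "range"        # _gt/_lt bounds, or no operator (the default)
```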
The following query arguments are supported:

- `step`: the query resolution step width, in duration format or as a float number of seconds. The step should be set explicitly for range queries; although the connector can estimate an approximate step width, the result may be empty if the interval is too wide.
- `offset`: the offset modifier changes the time offset for individual instant and range vectors in a query.
- `timeout`: the evaluation timeout of the request.
- `fn`: an array of composable PromQL functions.
- `flat`: flatten grouped values out of the root array. If the value is null, the runtime setting is used.
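For range queries, several of these arguments map fairly directly onto parameters of the Prometheus `/api/v1/query_range` HTTP endpoint. A rough sketch of that mapping (`range_query_params` is a hypothetical helper; note that `offset` and `fn` are applied inside the PromQL expression itself, not as HTTP parameters):

```python
from urllib.parse import urlencode

def range_query_params(promql: str, start: str, end: str, args: dict) -> str:
    """Build a /api/v1/query_range query string from collection arguments.

    Illustrative sketch following the public Prometheus HTTP API,
    not the connector's internals.
    """
    params = {"query": promql, "start": start, "end": end}
    if "step" in args:
        params["step"] = args["step"]        # e.g. "5m"
    if "timeout" in args:
        params["timeout"] = args["timeout"]  # evaluation timeout
    return urlencode(params)
```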
The `fn` argument is an array of PromQL function parameters. You can set multiple functions that are composed into the query. For example, take this PromQL query:
```promql
sum by (job) (rate(process_cpu_seconds_total[1m]))
```
The equivalent GraphQL query is:

```graphql
{
  process_cpu_seconds_total(
    where: { timestamp: { _gt: "2024-09-24T10:00:00Z" } }
    args: { step: "5m", fn: [{ rate: "1m" }, { sum: [job] }] }
  ) {
    job
    timestamp
    value
    values {
      timestamp
      value
    }
  }
}
```
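The composition rule can be sketched as follows: each entry in `fn` wraps the expression built so far. This is an illustrative Python sketch handling only `rate` and `sum`, not the connector's implementation:

```python
def compose_promql(metric: str, fns: list) -> str:
    """Compose an fn array into a nested PromQL expression."""
    expr = metric
    for fn in fns:
        (name, arg), = fn.items()  # each entry holds one function
        if name == "rate":
            expr = f"rate({expr}[{arg}])"          # arg is a range duration
        elif name == "sum":
            expr = f"sum by ({', '.join(arg)}) ({expr})"  # arg is a label list
    return expr
```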
When simple queries don't meet your needs, you can define native queries in the configuration file, with prepared variables using the `${<name>}` template syntax. Native queries are defined as collections.
```yaml
metadata:
  native_operations:
    queries:
      service_up:
        query: up{job="${job}", instance="${instance}"}
        labels:
          instance: {}
          job: {}
        arguments:
          instance:
            type: String
          job:
            type: String
```
The native query is exposed as a read-only function with two required arguments, `job` and `instance`.
```graphql
{
  service_up(
    args: { step: "1m", job: "node", instance: "node-exporter:9100" }
    where: {
      timestamp: { _gt: "2024-10-11T00:00:00Z" }
      job: { _in: ["node"] }
    }
  ) {
    job
    instance
    values {
      timestamp
      value
    }
  }
}
```
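The `${<name>}` substitution in native query templates can be sketched as a simple template replacement (illustrative only; the real connector also validates argument values):

```python
import re

def render_native_query(template: str, arguments: dict) -> str:
    """Replace ${name} placeholders with argument values (sketch)."""
    def repl(match):
        return str(arguments[match.group(1)])
    return re.sub(r"\$\{(\w+)\}", repl, template)
```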
> **Note**
> Labels aren't automatically added. You need to define them manually.
> **Note**
> Label and value boolean expressions in `where` filter results after the query has executed.
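That post-execution filtering can be sketched as follows (a minimal illustration supporting only the `_eq` and `_in` operators):

```python
def post_filter(series: list, where: dict) -> list:
    """Keep only series whose labels match the boolean expressions."""
    def matches(labels: dict) -> bool:
        for label, ops in where.items():
            value = labels.get(label)
            if "_eq" in ops and value != ops["_eq"]:
                return False
            if "_in" in ops and value not in ops["_in"]:
                return False
        return True
    return [s for s in series if matches(s["labels"])]
```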
Execute a raw PromQL query directly. This API should only be used by administrators; the result contains labels and values only.
```graphql
{
  promql_query(
    query: "process_cpu_seconds_total{job=\"node\"}"
    start: "2024-09-24T10:00:00Z"
    step: "5m"
  ) {
    labels
    values {
      timestamp
      value
    }
  }
}
```
| Operation Name | REST Prometheus API |
| --- | --- |
| `prometheus_alertmanagers` | `/api/v1/alertmanagers` |
| `prometheus_alerts` | `/api/v1/alerts` |
| `prometheus_label_names` | `/api/v1/labels` |
| `prometheus_label_values` | `/api/v1/label/<label_name>/values` |
| `prometheus_rules` | `/api/v1/rules` |
| `prometheus_series` | `/api/v1/series` |
| `prometheus_targets` | `/api/v1/targets` |
| `prometheus_targets_metadata` | `/api/v1/targets/metadata` |
Basic authentication:

```yaml
connection_settings:
  authentication:
    basic:
      username:
        env: PROMETHEUS_USERNAME
      password:
        env: PROMETHEUS_PASSWORD
```
Bearer token authorization:

```yaml
connection_settings:
  authentication:
    authorization:
      type:
        value: Bearer
      credentials:
        env: PROMETHEUS_AUTH_TOKEN
```
OAuth2 client credentials:

```yaml
connection_settings:
  authentication:
    oauth2:
      token_url:
        value: http://example.com/oauth2/token
      client_id:
        env: PROMETHEUS_OAUTH2_CLIENT_ID
      client_secret:
        env: PROMETHEUS_OAUTH2_CLIENT_SECRET
```
The configuration accepts either the Google application credentials JSON string or a file path. If the object is empty, the client automatically loads the credentials file from the `GOOGLE_APPLICATION_CREDENTIALS` environment variable.
```yaml
connection_settings:
  authentication:
    google:
      # credentials:
      #   env: GOOGLE_APPLICATION_CREDENTIALS_JSON
      # credentials_file:
      #   env: GOOGLE_APPLICATION_CREDENTIALS
```
```yaml
runtime:
  flat: false
  unix_time_unit: s # enum: s, ms
  format:
    timestamp: rfc3339 # enum: rfc3339, unix
    value: float64 # enum: string, float64
```
By default, values are grouped by label set. If you want to flatten the values array, set `flat: true`.
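What `flat: true` does to the response shape can be sketched as follows (hypothetical `flatten` helper; field names follow the metric structure shown earlier):

```python
def flatten(groups: list) -> list:
    """Flatten grouped series into one flat list of rows.

    Each row repeats its group's labels alongside a single data point.
    """
    rows = []
    for group in groups:
        for point in group["values"]:
            rows.append({**group["labels"], **point})
    return rows
```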
If you use integer values for duration and timestamp fields, the connector converts them using the configured Unix timestamp unit. Accepted units are second (`s`) and millisecond (`ms`); the default is second.
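The conversion can be sketched as (hypothetical helper, not the connector's code):

```python
def to_seconds(ts: int, unix_time_unit: str = "s") -> float:
    """Convert an integer timestamp to seconds per the configured unit."""
    if unix_time_unit == "ms":
        return ts / 1000.0
    return float(ts)  # default unit is second
```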
These settings specify the format of response timestamps and values.
```sh
docker compose up -d
make generate-test-config
docker compose restart ndc-prometheus

make build-supergraph-test
docker compose up -d --build engine
```

Browse the engine console at http://localhost:3000.