Initial implementation of a Prometheus exporter for Klipper to capture operational metrics. This is a rough first implementation and is subject to potentially significant changes.
The implementation is based on the Multi-Target Exporter Pattern, which enables a single exporter to collect metrics from multiple Klipper instances. Metrics for the exporter itself are served from the /metrics endpoint, and Klipper metrics are served from the /probe endpoint with a specified target.
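For example, assuming the exporter is running on klipper-exporter.local:9101 and Moonraker is listening on klipper.local:7125 (hostnames here are placeholders), a probe can be tested manually with curl:

$ curl "http://klipper-exporter.local:9101/probe?target=klipper.local:7125&modules=process_stats"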
To start the Prometheus Klipper Exporter from the command line:
$ prometheus-klipper-exporter
INFO[0000] Beginning to serve on port :9101
Then add a Klipper job to the Prometheus configuration file /etc/prometheus/prometheus.yml:
scrape_configs:
  - job_name: "klipper"
    scrape_interval: 5s
    metrics_path: /probe
    static_configs:
      - targets: [ 'klipper.local:7125' ]
    params:
      modules: [
        "process_stats",
        "job_queue",
        "system_info",
        "network_stats",
        "directory_info",
        "printer_objects",
        "history",
      ]
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: klipper-exporter.local:9101

  # optional exporter metrics
  - job_name: "klipper-exporter"
    scrape_interval: 5s
    metrics_path: /metrics
    static_configs:
      - targets: [ 'klipper-exporter.local:9101' ]
The exporter can be run on the host running Klipper, or on a separate machine. Replace klipper.local with the hostname or IP address of the Klipper host, and replace klipper-exporter.local with the hostname or IP address of the host running prometheus-klipper-exporter.
To monitor multiple Klipper instances, add multiple entries to the static_configs targets for the klipper job, e.g.

...
    static_configs:
      - targets: [ 'klipper1.local:7125', 'klipper2.local:7125' ]
...
To build the exporter from source run:

$ make build

You will typically want to install the exporter binary on a host that will be constantly running, either the Klipper host itself or a separate server, and ensure that the process restarts on system restart.
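If you build on a machine with a different architecture than the target host (for example, an x86 workstation building for a Raspberry Pi), the binary can be cross-compiled with the standard Go toolchain. This is a sketch rather than a documented Makefile target, so adjust GOARCH to match your device:

$ GOOS=linux GOARCH=arm64 go build -o prometheus-klipper-exporter .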
Example installation on Raspberry Pi, using systemd to run the exporter.
$ ssh [email protected]
[klipper]$ mkdir /home/pi/klipper-exporter
[klipper]$ exit
$ scp prometheus-klipper-exporter [email protected]:/home/pi/klipper-exporter
$ scp klipper-exporter.service [email protected]:/home/pi/
$ ssh [email protected]
[klipper]$ sudo mv klipper-exporter.service /etc/systemd/system/
[klipper]$ sudo systemctl daemon-reload
[klipper]$ sudo systemctl enable klipper-exporter.service
[klipper]$ sudo systemctl start klipper-exporter.service
[klipper]$ sudo systemctl status klipper-exporter.service
[klipper]$ exit
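The contents of klipper-exporter.service are not shown above; a minimal sketch of what such a unit file might look like is below (the user, paths, and options are assumptions, adjust them to your installation):

# /etc/systemd/system/klipper-exporter.service (sketch, adjust paths and user)
[Unit]
Description=Prometheus Klipper Exporter
After=network-online.target

[Service]
User=pi
ExecStart=/home/pi/klipper-exporter/prometheus-klipper-exporter
Restart=on-failure

[Install]
WantedBy=multi-user.target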
To run the exporter as a Docker container:
$ docker run -d -p 9101:9101 ghcr.io/scross01/prometheus-klipper-exporter:latest
See the example/README.md for a complete example running Prometheus, Grafana, and the klipper-exporter in Docker using docker compose.
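If Moonraker requires an API key (see the authorization notes below), one option is to pass it to the container through the MOONRAKER_APIKEY environment variable described later, for example:

$ docker run -d -p 9101:9101 -e MOONRAKER_APIKEY='abcdef01234567890123456789012345' ghcr.io/scross01/prometheus-klipper-exporter:latest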
You can configure different sets of metrics to be collected by including the modules parameter in the prometheus.yml configuration file.
...
    params:
      modules: [ "process_stats", "job_queue", "system_info" ]
...
If the modules parameter is omitted, only the default metrics are collected. Each group of metrics is queried from a different Moonraker API endpoint.
module | default | metrics |
---|---|---|
process_stats | x | klipper_moonraker_cpu_usage klipper_moonraker_memory_kb klipper_moonraker_websocket_connections klipper_system_cpu klipper_system_cpu_temp klipper_system_memory_available klipper_system_memory_total klipper_system_memory_used klipper_system_uptime |
network_stats | | klipper_network_tx_bandwidth{interface="<interface>"} klipper_network_rx_bytes{interface="<interface>"} klipper_network_tx_bytes{interface="<interface>"} klipper_network_rx_drop{interface="<interface>"} klipper_network_tx_drop{interface="<interface>"} klipper_network_rx_errs{interface="<interface>"} klipper_network_tx_errs{interface="<interface>"} klipper_network_rx_packets{interface="<interface>"} klipper_network_tx_packets{interface="<interface>"} |
job_queue | x | klipper_job_queue_length |
system_info | x | klipper_system_cpu_count |
directory_info | | klipper_disk_usage_available klipper_disk_usage_total klipper_disk_usage_used |
printer_objects | | klipper_extruder_power klipper_extruder_pressure_advance klipper_extruder_smooth_time klipper_extruder_target klipper_extruder_temperature klipper_fan_rpm klipper_fan_speed klipper_gcode_extrude_factor klipper_gcode_position_x klipper_gcode_position_y klipper_gcode_position_z klipper_gcode_speed_factor klipper_gcode_speed klipper_heater_bed_power klipper_heater_bed_target klipper_heater_bed_temperature klipper_mcu_awake klipper_mcu_clock_frequency klipper_mcu_invalid_bytes klipper_mcu_read_bytes klipper_mcu_ready_bytes klipper_mcu_receive_seq klipper_mcu_retransmit_bytes klipper_mcu_retransmit_seq klipper_mcu_rto klipper_mcu_rttvar klipper_mcu_send_seq klipper_mcu_stalled_bytes klipper_mcu_srtt klipper_mcu_write_bytes klipper_output_pin_value{pin="<pin>"} klipper_printing_time klipper_print_filament_used klipper_print_file_position klipper_print_file_progress klipper_print_gcode_progress klipper_print_total_duration klipper_temperature_fan_speed{fan="<fan>"} klipper_temperature_fan_temperature{fan="<fan>"} klipper_temperature_fan_target{fan="<fan>"} klipper_temperature_sensor_temperature{sensor="<sensor>"} klipper_temperature_sensor_measured_max_temp{sensor="<sensor>"} klipper_temperature_sensor_measured_min_temp{sensor="<sensor>"} klipper_toolhead_estimated_print_time klipper_toolhead_max_accel_to_decel klipper_toolhead_max_accel klipper_toolhead_max_velocity klipper_toolhead_print_time klipper_toolhead_square_corner_velocity |
history | | klipper_current_print_first_layer_height klipper_current_print_layer_height klipper_current_print_object_height klipper_current_print_total_duration klipper_longest_job klipper_longest_print klipper_total_filament_used klipper_total_jobs klipper_total_print_time klipper_total_time |
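As an illustration, the process_stats metrics correspond to data exposed by Moonraker's /machine/proc_stats endpoint (the module-to-endpoint mapping here is an assumption based on the standard Moonraker API), which you can inspect directly from a trusted client:

$ curl http://klipper.local:7125/machine/proc_stats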
The simplest deployment option is to run the Klipper Exporter on a host that is in the Moonraker trusted clients configuration. This is typically configured by default to include all hosts in the local network. If you have a more restrictive configuration, add the exporter host to the [authorization] section of moonraker.conf.
# moonraker.conf
[authorization]
trusted_clients:
  klipper-exporter.local
  ...
Untrusted clients must use an API key to access Moonraker's HTTP APIs. To fetch the current API key run the following on the Klipper host:
$ cd ~/moonraker/scripts
$ ./fetch-apikey.sh
abcdef01234567890123456789012345
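To check that a key works, you can query Moonraker directly and pass the key in the X-Api-Key header that Moonraker accepts from untrusted clients (replace the hostname and key with your own):

$ curl -H "X-Api-Key: abcdef01234567890123456789012345" http://klipper.local:7125/printer/info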
The API key can be set in one of three ways: from the scrape job configuration in prometheus.yml, using the -moonraker.apikey command line argument, or by setting the MOONRAKER_APIKEY environment variable.
Set the MOONRAKER_APIKEY environment variable:
$ export MOONRAKER_APIKEY='abcdef01234567890123456789012345'
$ prometheus-klipper-exporter
Set on the klipper-exporter command line using the -moonraker.apikey option:
$ prometheus-klipper-exporter -moonraker.apikey='abcdef01234567890123456789012345'
Add the API key to the prometheus.yml scrape config by adding an authorization configuration with the type set to APIKEY. The key can either be set directly in the config or referenced from a file.
- job_name: "klipper"
  ...
  authorization:
    type: APIKEY
    credentials: 'abcdef01234567890123456789012345'
    # credentials_file: /path/to/private/apikey.txt
  ...
Only one API key can be set for each job. If you have multiple Klipper hosts with different API keys, create a separate job for each host.
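A sketch of such a configuration with two hosts and separate credential files (job names, hostnames, and paths are placeholders; the elided settings match the klipper job shown earlier):

- job_name: "klipper-printer1"
  ...
  static_configs:
    - targets: [ 'klipper1.local:7125' ]
  authorization:
    type: APIKEY
    credentials_file: /path/to/printer1-apikey.txt

- job_name: "klipper-printer2"
  ...
  static_configs:
    - targets: [ 'klipper2.local:7125' ]
  authorization:
    type: APIKEY
    credentials_file: /path/to/printer2-apikey.txt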
-help
Display the command line help.
-logging.level <level>
Set the logging output verbosity to one of Trace, Debug, Info, Warning, Error, Fatal, or Panic. The default level is Info, which logs anything at info level or above (warning, error, fatal, panic).
-moonraker.apikey <string>
Set the API key used to authenticate with the Moonraker API. See API Key Authentication above.
-web.listen-address [<ip_address>]:<port>
Address on which to expose metrics and the web interface. The default is :9101, which listens on port 9101 on all interfaces, equivalent to 0.0.0.0:9101. Include an IP address to limit listening to a specific interface, e.g. 192.168.1.99:7070.
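For example, to run with debug logging, an API key, and the listener bound to a single interface (the address and key are placeholders):

$ prometheus-klipper-exporter -logging.level=Debug -web.listen-address=192.168.1.99:9101 -moonraker.apikey='abcdef01234567890123456789012345'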
v0.8.0 deprecates the temperature module option, which contains a subset of the metrics reported by the printer_objects module. If you were using the temperature module, switch the configuration to use printer_objects instead.
The v0.7.0 release introduces several metric changes that will break any Grafana charts previously defined using the old metric names from v0.6.x or earlier. v0.7.0 uses labels for network interfaces, temperature sensors, temperature fans, and output pins, rather than defining separate metrics for each unique entity.
These changes affect the following metric groups:

klipper_network_*
klipper_temperature_sensor_*
klipper_temperature_fan_*
klipper_output_pin_*
For example:

klipper_network_wlan0_rx_bytes becomes klipper_network_rx_bytes{interface="wlan0"}
klipper_temperature_sensor_mtu_temperature becomes klipper_temperature_sensor_temperature{sensor="mtu"}