Missing nss metrics for channels and servers in Jetstream #216
Same here, I also don't get any channel-related metrics exported:

`exporter:`
I have the exact same issue. These are the arguments I'm currently using:

```
- args:
  - -varz
  - -connz
  - -subz
  - -routez
  - -healthz
  - -jsz=all
  - -channelz
  - -serverz
  - http://nats:8222
```
Thanks for the clarification, but do you know if there are any plans to add JetStream metrics that expose messages per stream?
@wallyqs Thanks. Is there any way we could extract messages per stream/subject in a JetStream context?
Any progress here? We really need these metrics. How are they activated? @wallyqs
The Prometheus exporter uses the HTTP monitoring endpoints of NATS, and those do not expose per-subject metrics.
It will provide a paginated response. Keep in mind that exposing that metric (for example in nats-surveyor, which has access to that API) is a big risk: many systems have millions of subjects in a stream, and calculating that information frequently can be really taxing. We're thinking about how we can make it better, but that is out of scope for the Prometheus exporter at this point.
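For a one-off, interactive look at per-subject message counts, a recent natscli can query the JetStream API directly. This is a sketch, not part of the exporter; the stream name `ORDERS` is a placeholder, and this is exactly the kind of per-subject query the comment above warns can be taxing on streams with many subjects:

```shell
# Overall stream state (message and byte counts, consumer count, etc.)
nats stream info ORDERS

# Per-subject message counts within the stream (can be expensive on
# streams with very many subjects)
nats stream subjects ORDERS
```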
@Jarema I have two questions related to your response:
@Jarema Thanks for the answers.
@ripienaar OK, but then comes my question: can JetStream metrics get exposed and collected?
Did you pass the CLI flag to enable JetStream monitoring?
Yep. This is my command in docker-compose:
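For reference, a minimal docker-compose sketch that enables JetStream monitoring end to end might look like the following. This is an assumption-laden illustration (image tags, service names, and ports are placeholders); the key points are `-js -m 8222` on the server and `-jsz=all` on the exporter:

```yaml
# Hypothetical minimal setup: NATS with JetStream and its monitoring
# port enabled, plus the exporter scraping the jsz endpoint.
services:
  nats:
    image: nats:latest
    command: ["-js", "-m", "8222"]
  exporter:
    image: natsio/prometheus-nats-exporter:latest
    command: ["-varz", "-jsz=all", "http://nats:8222"]
    ports:
      - "7777:7777"
```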
Is that even running? There are several problems with this: the Prometheus exporter doesn't accept `--config` as an option, and `--js` takes a flag. It probably fails immediately with an error.
And elsewhere, it's fetching all the stats.
These are the commands when I start NATS, not the prometheus-nats-exporter. Here are both once again, for clarity:
and there are no JetStream metrics, as in your case above. Is it because of the way I run NATS, or because we are using a KV store (which internally uses a JetStream stream, but still)?
My case above shows that lots of JetStream metrics were gathered. The output from the curl command is a unique list of the JetStream metrics that were gathered. Since KV does not tend to have consumers (unless you use watch), consumer-related metrics won't really be there. Here's a KV, though:
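The "unique list of JetStream metrics" can be produced with a short pipeline. The port (7777 is the exporter's default) and the sample metric names below are placeholders for illustration; the `printf` just stands in for live output so the example is self-contained:

```shell
# Reduce Prometheus exposition-format output to the unique JetStream
# metric names. Against a live exporter you would instead run:
#   curl -s http://localhost:7777/metrics | grep -o '^jetstream_[a-z_]*' | sort -u
printf '%s\n' \
  'jetstream_server_total_streams 1' \
  'jetstream_server_total_streams 1' \
  'jetstream_consumer_num_pending 0' \
  | grep -o '^jetstream_[a-z_]*' | sort -u
```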
Your docker-compose file works fine, assuming you're adding streams or KV buckets; here I added one.
I am seeing them too. Just wondering why I don't see the metrics defined here: https://github.com/nats-io/prometheus-nats-exporter/blob/main/walkthrough/grafana-jetstream-dash.json#L859 Also, are there any examples of a docker-compose setup with nats-surveyor and JetStream?
It's common for Prometheus exporters to only export data that has values, so if you have no consumers you get no metrics for consumers, as an example.
I have both producers and consumers, that is the thing. Both using put and watch on the KV stores. Looking at the dashboard, as you will see in the pic above, I have nothing on the consumers side.
Could the metric names be outdated, maybe? Hence the dashboard being old?
I've shown you how to look for what metrics are returned; you can easily find out with curl and grep for consumer. If you have consumers, then show them with consumer info so we can confirm they exist. Please try to provide more information than you are doing; we're here donating our free time to you, so please try to help us help you.
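Concretely, the consumer check asked for above can be done with natscli. The stream and consumer names (`ORDERS`, `pull-1`) are placeholders:

```shell
# List the consumers defined on a stream
nats consumer ls ORDERS

# Detailed state for one consumer, including pending/ack counts
nats consumer info ORDERS pull-1
```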
All good now. Thanks guys, checked and it was fine. Had to modify the dashboard a bit, but overall the `jetstream_*` metrics showed up for consumers. Ty!
My goal is to extract the pending messages per subject, like the following CLI command:
Following the guidelines here, I've run the exporter with the following command:

```
./prometheus-nats-exporter -channelz -serverz http://localhost:8222
```

but I'm not getting any of the `nss.*` metrics. These are the only metrics I get:
```
# HELP go_gc_duration_seconds A summary of the pause duration of garbage collection cycles.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 0
go_gc_duration_seconds{quantile="0.25"} 0
go_gc_duration_seconds{quantile="0.5"} 0
go_gc_duration_seconds{quantile="0.75"} 0
go_gc_duration_seconds{quantile="1"} 0
go_gc_duration_seconds_sum 0
go_gc_duration_seconds_count 0
# HELP go_goroutines Number of goroutines that currently exist.
# TYPE go_goroutines gauge
go_goroutines 16
# HELP go_info Information about the Go environment.
# TYPE go_info gauge
go_info{version="go1.17.13"} 1
# HELP go_memstats_alloc_bytes Number of bytes allocated and still in use.
# TYPE go_memstats_alloc_bytes gauge
go_memstats_alloc_bytes 4.233328e+06
# HELP go_memstats_alloc_bytes_total Total number of bytes allocated, even if freed.
# TYPE go_memstats_alloc_bytes_total counter
go_memstats_alloc_bytes_total 4.233328e+06
# HELP go_memstats_buck_hash_sys_bytes Number of bytes used by the profiling bucket hash table.
# TYPE go_memstats_buck_hash_sys_bytes gauge
go_memstats_buck_hash_sys_bytes 1.44722e+06
# HELP go_memstats_frees_total Total number of frees.
# TYPE go_memstats_frees_total counter
go_memstats_frees_total 0
# HELP go_memstats_gc_sys_bytes Number of bytes used for garbage collection system metadata.
# TYPE go_memstats_gc_sys_bytes gauge
go_memstats_gc_sys_bytes 3.647192e+06
# HELP go_memstats_heap_alloc_bytes Number of heap bytes allocated and still in use.
# TYPE go_memstats_heap_alloc_bytes gauge
go_memstats_heap_alloc_bytes 4.233328e+06
# HELP go_memstats_heap_idle_bytes Number of heap bytes waiting to be used.
# TYPE go_memstats_heap_idle_bytes gauge
go_memstats_heap_idle_bytes 3.432448e+06
# HELP go_memstats_heap_inuse_bytes Number of heap bytes that are in use.
# TYPE go_memstats_heap_inuse_bytes gauge
go_memstats_heap_inuse_bytes 4.268032e+06
# HELP go_memstats_heap_objects Number of allocated objects.
# TYPE go_memstats_heap_objects gauge
go_memstats_heap_objects 24728
# HELP go_memstats_heap_released_bytes Number of heap bytes released to OS.
# TYPE go_memstats_heap_released_bytes gauge
go_memstats_heap_released_bytes 3.432448e+06
# HELP go_memstats_heap_sys_bytes Number of heap bytes obtained from system.
# TYPE go_memstats_heap_sys_bytes gauge
go_memstats_heap_sys_bytes 7.70048e+06
# HELP go_memstats_last_gc_time_seconds Number of seconds since 1970 of last garbage collection.
# TYPE go_memstats_last_gc_time_seconds gauge
go_memstats_last_gc_time_seconds 0
# HELP go_memstats_lookups_total Total number of pointer lookups.
# TYPE go_memstats_lookups_total counter
go_memstats_lookups_total 0
# HELP go_memstats_mallocs_total Total number of mallocs.
# TYPE go_memstats_mallocs_total counter
go_memstats_mallocs_total 24728
# HELP go_memstats_mcache_inuse_bytes Number of bytes in use by mcache structures.
# TYPE go_memstats_mcache_inuse_bytes gauge
go_memstats_mcache_inuse_bytes 19200
# HELP go_memstats_mcache_sys_bytes Number of bytes used for mcache structures obtained from system.
# TYPE go_memstats_mcache_sys_bytes gauge
go_memstats_mcache_sys_bytes 32768
# HELP go_memstats_mspan_inuse_bytes Number of bytes in use by mspan structures.
# TYPE go_memstats_mspan_inuse_bytes gauge
go_memstats_mspan_inuse_bytes 63920
# HELP go_memstats_mspan_sys_bytes Number of bytes used for mspan structures obtained from system.
# TYPE go_memstats_mspan_sys_bytes gauge
go_memstats_mspan_sys_bytes 65536
# HELP go_memstats_next_gc_bytes Number of heap bytes when next garbage collection will take place.
# TYPE go_memstats_next_gc_bytes gauge
go_memstats_next_gc_bytes 5.275648e+06
# HELP go_memstats_other_sys_bytes Number of bytes used for other system allocations.
# TYPE go_memstats_other_sys_bytes gauge
go_memstats_other_sys_bytes 1.411324e+06
# HELP go_memstats_stack_inuse_bytes Number of bytes in use by the stack allocator.
# TYPE go_memstats_stack_inuse_bytes gauge
go_memstats_stack_inuse_bytes 655360
# HELP go_memstats_stack_sys_bytes Number of bytes obtained from system for stack allocator.
# TYPE go_memstats_stack_sys_bytes gauge
go_memstats_stack_sys_bytes 655360
# HELP go_memstats_sys_bytes Number of bytes obtained from system.
# TYPE go_memstats_sys_bytes gauge
go_memstats_sys_bytes 1.495988e+07
# HELP go_threads Number of OS threads created.
# TYPE go_threads gauge
go_threads 11
# HELP process_cpu_seconds_total Total user and system CPU time spent in seconds.
# TYPE process_cpu_seconds_total counter
process_cpu_seconds_total 0
# HELP process_max_fds Maximum number of open file descriptors.
# TYPE process_max_fds gauge
process_max_fds 1024
# HELP process_open_fds Number of open file descriptors.
# TYPE process_open_fds gauge
process_open_fds 13
# HELP process_resident_memory_bytes Resident memory size in bytes.
# TYPE process_resident_memory_bytes gauge
process_resident_memory_bytes 1.9111936e+07
# HELP process_start_time_seconds Start time of the process since unix epoch in seconds.
# TYPE process_start_time_seconds gauge
process_start_time_seconds 1.68012116363e+09
# HELP process_virtual_memory_bytes Virtual memory size in bytes.
# TYPE process_virtual_memory_bytes gauge
process_virtual_memory_bytes 1.41858816e+09
# HELP process_virtual_memory_max_bytes Maximum amount of virtual memory available in bytes.
# TYPE process_virtual_memory_max_bytes gauge
process_virtual_memory_max_bytes 1.8446744073709552e+19
# HELP promhttp_metric_handler_requests_in_flight Current number of scrapes being served.
# TYPE promhttp_metric_handler_requests_in_flight gauge
promhttp_metric_handler_requests_in_flight 1
# HELP promhttp_metric_handler_requests_total Total number of scrapes by HTTP status code.
# TYPE promhttp_metric_handler_requests_total counter
promhttp_metric_handler_requests_total{code="200"} 2
promhttp_metric_handler_requests_total{code="500"} 0
promhttp_metric_handler_requests_total{code="503"} 0
```
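A likely explanation, given the clarification earlier in the thread, is that `-channelz` and `-serverz` poll NATS Streaming (STAN) endpoints, which a JetStream-enabled core NATS server does not serve, so only the exporter's own Go/process metrics appear. Under that assumption, scraping the jsz endpoint instead would look like:

```shell
# Sketch: scrape JetStream metrics rather than the Streaming-era
# channelz/serverz endpoints (the monitoring URL is a placeholder).
./prometheus-nats-exporter -varz -jsz=all http://localhost:8222
```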