context deadline exceeded for junos_exporter container #174

Open
panks21 opened this issue Feb 13, 2022 · 1 comment

panks21 commented Feb 13, 2022

Hi,
I need some help, as I am not able to figure out a solution. I used Portainer to deploy the stack using the following docker-compose file:


version: "3"

volumes:
prometheus-data:
driver: local
grafana-data:
driver: local

services:
prometheus:
image: prom/prometheus:latest
container_name: prometheus
ports:
- "9090:9090"
volumes:
- /etc/prometheus:/etc/prometheus
- prometheus-data:/prometheus
restart: unless-stopped
command:
- "--config.file=/etc/prometheus/prometheus.yml"

grafana:
image: grafana/grafana-oss:latest
container_name: grafana
ports:
- "4000:3000"
volumes:
- grafana-data:/var/lib/grafana
restart: unless-stopped

junos_exporter:
image: czerwonk/junos_exporter
container_name: junos_exporter
ports:
- 9326:9326
restart: unless-stopped
volumes:
- /root/junos_exporter_config.yml:/config.yml:ro


However, Prometheus always shows the target as down, with the error "Get "http://junos_exporter:9326/metrics?target=10.2.14.254": context deadline exceeded".

The prometheus.yml file is as follows:


global:
  scrape_interval: 30s
  scrape_timeout: 30s

scrape_configs:
  - job_name: "prometheus"
    static_configs:
      - targets: ["localhost:9090"]

  - job_name: 'junos'
    static_configs:
      - targets:
          - 10.2.14.254 # Target Device
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: junos_exporter:9326 # The junos_exporter's real hostname:port

Even a curl to the host on port 9326 doesn't return anything, although the container is shown as healthy and there are no error logs for the container.
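(For reference, one way to check the exporter directly is to hit the same URL Prometheus uses, with a generous client-side timeout; this is just a sketch, and the hostname/port must be adjusted to wherever the exporter's port 9326 is actually published:)

# Probe the exporter directly with the same target parameter Prometheus uses;
# --max-time keeps curl from hanging indefinitely if the device answers slowly.
curl --max-time 120 "http://localhost:9326/metrics?target=10.2.14.254"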

4xoc (Contributor) commented Jan 12, 2023

You probably have to increase your scrape_timeout. Depending on the hardware, a scrape can take a long time because the control-plane CPU isn't fast enough for the high number of CLI commands that can result from using the exporter. You'll probably need to experiment a bit with different models to find a timeout that is reliably large enough. Maybe use multiple jobs for different devices (see the sketch below the examples).

A couple of timeout values I know to be good (typically with most exporter features enabled):
MX480/MX960 = 30s
EX4200/EX4300/SRX300/SRX320/SRX340/SRX345 < 2m
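As a rough sketch of the multi-job idea (reusing the exporter container and relabel pattern from the prometheus.yml above; the second device IP and the job names are placeholders), each device class gets its own job with its own scrape_timeout. Note that Prometheus requires scrape_timeout to be no larger than scrape_interval, so the interval has to grow along with the timeout:

scrape_configs:
  - job_name: 'junos_mx'          # faster control planes, e.g. MX480/MX960
    scrape_interval: 1m
    scrape_timeout: 30s
    static_configs:
      - targets:
          - 10.2.14.254
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: junos_exporter:9326

  - job_name: 'junos_branch'      # slower boxes, e.g. EX4300/SRX3xx
    scrape_interval: 4m
    scrape_timeout: 2m
    static_configs:
      - targets:
          - 10.2.15.1             # placeholder address
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: junos_exporter:9326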
