
Logging (log_severity) inconsistent #8

Closed
mihakrumpestar opened this issue Jan 5, 2025 · 6 comments

Comments

@mihakrumpestar

Is your feature request related to a problem? Please describe.

My caddy config/setup seems to trigger UnmarshalCaddyfile quite often (every 30 seconds, 3 times per interval), and it gets logged, even though I set log_severity warn in the config. It seems that happens here in the code.

log

Describe the solution you'd like

If it is ok, I would prefer that log_severity apply to all logging output.
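As a sketch of the behaviour I mean, the usual pattern is a single configured severity gating every log line a module emits. This uses Go's stdlib log/slog purely for illustration; it is not the plugin's actual logging code:

```go
package main

import (
	"bytes"
	"fmt"
	"log/slog"
)

// warnOnlyEmitted illustrates level-gated logging: with the level set
// to warn, info-level lines are suppressed and warn-level lines pass.
// Illustrative sketch only, not caddy-waf's real logger.
func warnOnlyEmitted() bool {
	var buf bytes.Buffer
	h := slog.NewTextHandler(&buf, &slog.HandlerOptions{Level: slog.LevelWarn})
	logger := slog.New(h)

	logger.Info("WAF middleware initialized") // below warn: suppressed
	logger.Warn("rate limit exceeded")        // at warn: emitted

	return buf.Len() > 0 && !bytes.Contains(buf.Bytes(), []byte("initialized"))
}

func main() {
	fmt.Println(warnOnlyEmitted())
}
```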

Additional context

Here is my Dockerfile (list of used plugins) and part of config settings:

# # Build # #
ARG CADDY_VERSION=2
FROM caddy:${CADDY_VERSION}-builder AS build-stage

# WAF
ADD https://github.com/fabriziosalmi/caddy-waf.git /caddy-waf

# Sablier
ADD https://github.com/sablierapp/sablier.git#v1.8.1 /sablier

RUN xcaddy build \
    --with github.com/yroc92/postgres-storage \
    --with github.com/caddy-dns/desec \
    --with github.com/lucaslorentz/caddy-docker-proxy/v2 \
    --with github.com/sablierapp/sablier/plugins/caddy=/sablier/plugins/caddy \
    --with github.com/fabriziosalmi/caddy-waf=/caddy-waf \
    --with github.com/hslatman/caddy-crowdsec-bouncer/http

# # Release # #
FROM caddy:${CADDY_VERSION}-alpine AS release-stage

# WAF
ADD https://git.io/GeoLite2-Country.mmdb GeoLite2-Country.mmdb

COPY --from=build-stage /usr/bin/caddy /usr/bin/caddy

CMD ["caddy", "docker-proxy"]
{
    	order waf first
}

...

	waf {
		# Anomaly threshold will block a request if its score is >= the threshold
		anomaly_threshold 5

		# Rate limiting: 1000 requests per 1 minute
		rate_limit 1000 1m

		# Rules and blacklists
		#rule_file rules.json
		#ip_blacklist_file ip_blacklist.txt
		#dns_blacklist_file dns_blacklist.txt

		# Country blocking (requires MaxMind GeoIP2 database)
		block_countries GeoLite2-Country.mmdb RU CN KP

		# Whitelist countries (requires MaxMind GeoIP2 database)
		# whitelist_countries GeoLite2-Country.mmdb US

		# Define actions based on severity
		severity critical block
		severity high block
		severity medium log
		severity low log

		# Set Log Severity
		log_severity warn # debug

		#Set Log JSON output
		log_json
	}
fabriziosalmi added a commit that referenced this issue Jan 5, 2025
Fix [this issue](#8).
@fabriziosalmi
Owner

Should be fixed by using this approach :) Thank you for pointing that out!

@mihakrumpestar
Author

mihakrumpestar commented Jan 5, 2025

I am not sure which approach you mean in the linked commit. I am using the code from the main branch, specifically the current latest commit.

I now added the rules too, and it seems they get reloaded too:

added rule logs

@mihakrumpestar
Author

mihakrumpestar commented Jan 5, 2025

Would it be possible to prevent it from reloading, since most of us are (probably) using it in a Docker container with these rules generated internally?

Dockerfile for reference:

ARG CADDY_VERSION=2

# # Deps # #

# WAF
FROM python:slim AS deps-stage

WORKDIR /caddy-waf

ADD https://github.com/fabriziosalmi/caddy-waf.git /caddy-waf

RUN pip install --no-cache-dir requests tqdm

RUN python3 get_owasp_rules.py
RUN python3 get_blacklisted_ip.py
RUN python3 get_blacklisted_dns.py

# # Build # #
FROM caddy:${CADDY_VERSION}-builder AS build-stage

# WAF
ADD https://github.com/fabriziosalmi/caddy-waf.git /caddy-waf

# Sablier
ADD https://github.com/sablierapp/sablier.git#v1.8.1 /sablier

RUN xcaddy build \
    --with github.com/yroc92/postgres-storage \
    --with github.com/caddy-dns/desec \
    --with github.com/lucaslorentz/caddy-docker-proxy/v2 \
    --with github.com/sablierapp/sablier/plugins/caddy=/sablier/plugins/caddy \
    --with github.com/fabriziosalmi/caddy-waf=/caddy-waf \
    --with github.com/hslatman/caddy-crowdsec-bouncer/http

# # Release # #
FROM caddy:${CADDY_VERSION}-alpine AS release-stage

# WAF
COPY --from=deps-stage /caddy-waf/rules.json owasp_rules.json
COPY --from=deps-stage /caddy-waf/ip_blacklist.txt ip_blacklist.txt
COPY --from=deps-stage /caddy-waf/dns_blacklist.txt dns_blacklist.txt
ADD https://git.io/GeoLite2-Country.mmdb GeoLite2-Country.mmdb

# Binary
COPY --from=build-stage /usr/bin/caddy /usr/bin/caddy

CMD ["caddy", "docker-proxy"]

@mihakrumpestar
Author

Never mind, I built the container again today, and it is no longer logging those things. Don't really know what changed, but the issue seems resolved.

@LeVraiRoiDHyrule

LeVraiRoiDHyrule commented Jan 30, 2025

Hi @mihakrumpestar

I found your Dockerfile, and you are using exactly the plugins that I am looking to use. I am a beginner with caddy-docker-proxy and I am having some trouble configuring all those plugins with docker proxy labels.

May I ask if it is possible to share parts of your configuration as an example?
I would be very interested to see what you configured as the global config and as the per-subdomain config. Would it be possible to share extracts of both, if you have them in docker proxy label format?

Thanks in advance for any answer !
Have a nice day !

@mihakrumpestar
Author

Hi, here you go.

Notes

Note that my config is a bit more complex than it needs to be, and the custom handler reverse-proxy should definitely be simplified, but I haven't had the time to do so yet. So don't use it as is; modify it to your needs.

Also note that lucaslorentz/caddy-docker-proxy#694 just broke, so it will take some time before it is merged.

To the good stuff

Here is my full Caddyfile:

{
	email {$ADMIN_EMAIL}

	crowdsec {
		api_url http://crowdsec:80
		api_key {$CROWDSEC_BOUNCER_KEY_CADDY}
		# enable_hard_fails
	}

	order crowdsec before reverse_proxy

	order waf before crowdsec

	# debug

	http_port 80
	https_port 443

	# local_certs

	grace_period 10s # For reloading

	storage postgres {
		dbname {$CADDY_DATABASE_NAME}
		host postgres
		port 5432
		sslmode disable
		disable_ddl false
		user {$CADDY_DATABASE_USERNAME}
		password {$CADDY_DATABASE_PASSWORD}
	}

	# If used behind another reverse-proxy or in docker swarm
	#servers {
	#	trusted_proxies static private_ranges
	#	#trusted_proxies_strict
	#	client_ip_headers X-Forwarded-For X-Real-IP
	#
	#	# Required for docker swarm ingress network (can't get it to work)
	#	#listener_wrappers {
	#	#	proxy_protocol {
	#	#		timeout 2s
	#	#		allow 0.0.0.0/0
	#	#		fallback_policy use
	#	#	}
	#	#}
	#}
}

:8080 {
	# Ping
	respond /health 200
}

# Expose admin endpoint
http://secretHostname:12019 {
	reverse_proxy localhost:2019 {
		header_up Host localhost:2019
	}
}

# AUTH

(forward-auth) {
	import wan

	forward_auth {args[:]} authelia:80 {
		uri /api/authz/forward-auth
		copy_headers Remote-User Remote-Groups Remote-Email Remote-Name
	}
}

# REVERSE_PROXY

# Usage (to use @lan you have to import it): import reverse-proxy [protocol://address:port] @lan|[other matchers]|""
(reverse-proxy) {
	reverse_proxy {args[1]} {args[0]} {
		{args[2:]}

		header_up Host {http.request.host}
		header_up X-Real-IP {remote_host} # Some services want X-Real-IP header instead of the default X-Forwarded-For
	}
}

# Usage: import auth-group [group name] [other matcher with negated (raw, not @)]|""
(auth-group) {
	@auth-group {
		header_regexp Remote-Groups (?:^|,\s*){args[0]}(?:\s*,|$) 
		{args[1:]}
	}

	@not_auth-group {
		not header_regexp Remote-Groups (?:^|,\s*){args[0]}(?:\s*,|$) 
		{args[1:]}
	}
	respond @not_auth-group "Access denied" 403
}

# Using as part of reverse-proxy * [import]
# Usage: import tls-no-verify
(tls-no-verify) {
	transport http {
		tls_insecure_skip_verify
	}
}

# Has to be in route, as it needs to be ordered
# Usage: import sablier [group name] [display name]
(sablier) {
	sablier http://sablier:80 {
		group {args[0]}
		dynamic {
			display_name {args[1]}
		}
	}
}

# Usage: import wan
(wan) {
	@wan not remote_ip private_ranges
}

# Usage: import lan
(lan) {
	@lan remote_ip private_ranges

	@not_lan not remote_ip private_ranges
	respond @not_lan "Access denied" 403
}

# Use only on first domain definition
(default) {
	import certificate
	import security-headers
	import compression
	import internal-waf
	import backend-interceptor
	import abort-unhandled
}

# Internal
(certificate) {
	tls {
		dns desec {
			token {$CADDY_DESEC_TOKEN}
		}

		# Have to use split DNS here (as our internal DNS has overrides already)
		#resolvers 1.1.1.1

		protocols tls1.3
		propagation_delay 2m # By default it is 0, which causes the record not to be available to ACME yet.
		# It is this high because the records don't propagate that fast
	}
}

# Internal
(security-headers) {
	header {
		# Remove Server header (empty fingerprint signature)
		-Server

		# Permissions Policy (formerly Feature Policy)
		Permissions-Policy "interest-cohort=(), camera=(), microphone=(), geolocation=(), payment=(), usb=(), vr=()"

		# HTTP Strict Transport Security
		Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"

		# Content Type Options
		X-Content-Type-Options "nosniff"

		# CSP header
		#Content-Security-Policy "default-src 'self'; img-src *;" # Breaks Nextcloud

		# Robots Tag
		?X-Robots-Tag "none, noindex, nofollow, nosnippet, noarchive, notranslate, noimageindex"

		# Frame Options (Clickjacking protection), SAMEORIGIN is required for FIDO2 (Vaultwarden)
		X-Frame-Options "SAMEORIGIN"

		# XSS Protection
		X-XSS-Protection "1; mode=block"

		# Referrer Policy
		Referrer-Policy "same-origin"

		# Access Control
		?Access-Control-Allow-Methods "GET, OPTIONS, PUT"
		Access-Control-Max-Age "100"

		defer
	}
}

# Internal
(compression) {
	encode zstd gzip
}

# Internal
(internal-waf) {
	waf {
		# Anomaly threshold will block a request if its score is >= the threshold
		anomaly_threshold 5

		# Rate limiting: 1000 requests per 1 minute
		rate_limit {
			requests 1000
			window 1m
			cleanup_interval 5m
			match_all_paths true
		}

		# Rules and blacklists
		rule_file owasp_rules.json
		ip_blacklist_file ip_blacklist.txt
		dns_blacklist_file dns_blacklist.txt

		# Country blocking (requires MaxMind GeoIP2 database)
		block_countries GeoLite2-Country.mmdb RU CN KP

		# Whitelist countries (requires MaxMind GeoIP2 database)
		# whitelist_countries GeoLite2-Country.mmdb US

		# Set Log Severity
		log_severity error # info, warn, error, debug

		#Set Log JSON output
		log_json
		redact_sensitive_data
	}
}

# Internal
(backend-interceptor) {
	handle_errors {
		#log {
		#    output stderr
		#    format console
		#    level ERROR
		#}

		header Content-Type text/html
		respond <<HTML
			<!DOCTYPE html>
			<html>
				<head>
					<title>{err.status_code} {err.status_text}</title>
				</head>
				<body>
					<h1>{err.status_code} {err.status_text}</h1>
					<p>Something went wrong.</p>
					<p>{err.message}</p>
				</body>
			</html>
			HTML {err.status_code}
	}
}

# Internal
(abort-unhandled) {
	handle {
		abort
	}
}
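The Remote-Groups header_regexp used in the auth-group snippet can be checked in isolation. A small Go sketch, with {args[0]} substituted as the hypothetical group name "admins":

```go
package main

import (
	"fmt"
	"regexp"
)

// groupRe is the auth-group header_regexp from the Caddyfile above,
// with {args[0]} substituted as the hypothetical group name "admins".
var groupRe = regexp.MustCompile(`(?:^|,\s*)admins(?:\s*,|$)`)

// inGroup reports whether "admins" appears as a whole entry in a
// comma-separated Remote-Groups header value.
func inGroup(header string) bool {
	return groupRe.MatchString(header)
}

func main() {
	fmt.Println(inGroup("admins"))             // sole group: match
	fmt.Println(inGroup("users, admins, dev")) // middle of a list: match
	fmt.Println(inGroup("superadmins"))        // substring only: no match
}
```

The anchors on both sides are what prevent "superadmins" from satisfying a check for "admins".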

Caddy docker-compose:

services:
  caddy:
    image: ${REGISTRY_DOMAIN}/caddy:${LABEL}
    networks:
      gateway:
        aliases:
          - secretHostname
      internal:
    ports:
      - target: 80
        published: 80
        mode: host # If this is not host, docker will use the ingress network, and that one can't forward the real IP
      - target: 443
        published: 443
        mode: host
      - target: 443
        published: 443
        mode: host
        protocol: udp
    environment:
      TZ: ${TZ}
      CADDY_INGRESS_NETWORKS: gateway_${ENV}
      #CADDY_DOCKER_PROXY_SERVICE_TASKS: false # To use service names instead of IPs
      CADDY_DOCKER_CADDYFILE_PATH: /etc/caddy/Caddyfile
      CADDY_DOCKER_EVENT_THROTTLE_INTERVAL: 1s
      CADDY_DOCKER_SCAN_STOPPED_CONTAINERS: "true"

      ADMIN_EMAIL: ${ADMIN_EMAIL}
      CROWDSEC_BOUNCER_KEY_CADDY: ${CROWDSEC_BOUNCER_KEY_CADDY}
      CADDY_DATABASE_NAME: ${CADDY_DATABASE_NAME}
      CADDY_DATABASE_USERNAME: ${CADDY_DATABASE_USERNAME}
      CADDY_DATABASE_PASSWORD: ${CADDY_DATABASE_PASSWORD}
      CADDY_DESEC_TOKEN: ${CADDY_DESEC_TOKEN}
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    configs:
      - source: Caddyfile
        target: /etc/caddy/Caddyfile
    #healthcheck:
    #  test:
    #    [
    #      "CMD",
    #      "wget",
    #      "--no-verbose",
    #      "--tries=1",
    #      "--spider",
    #      "https://localhost:8080/health",
    #    ]
    #  interval: 5s
    #  timeout: 3s
    #  retries: 9999
    #  start_period: 5s
    deploy:
      mode: global
      labels:
        # MAIN_DOMAIN
        caddy_0: "*.${MAIN_DOMAIN} ${MAIN_DOMAIN}"
        caddy_0.import: default

        caddy_0.0_@base_domain: host ${MAIN_DOMAIN}
        caddy_0.0_handle: "@base_domain"
        caddy_0.0_handle.redir: https://dashboard.${MAIN_DOMAIN}{uri}

        caddy_0.1_@gateway: host gateway.${MAIN_DOMAIN}
        caddy_0.1_handle: "@gateway"
        caddy_0.1_handle.import: forward-auth @wan
        caddy_0.1_handle.respond: OK

        homepage.group: Infrastructure
        homepage.name: Caddy
        homepage.icon: caddy.png
        homepage.href: https://gateway.${MAIN_DOMAIN}
        homepage.description: Reverse proxy
        homepage.widget.type: caddy
        homepage.widget.url: http://secretHostname:12019

        kuma.caddy.http.name: Caddy
        kuma.caddy.http.parent_name: infrastructure
        kuma.caddy.http.url: http://caddy:8080/health

        kuma.caddy-certificate.http.name: Caddy certificate
        kuma.caddy-certificate.http.parent_name: infrastructure
        kuma.caddy-certificate.http.url: https://gateway.${MAIN_DOMAIN}

  postgres:
    image: postgres:17-alpine
    networks:
      - internal
    volumes:
      - postgres:/var/lib/postgresql/data:rwZ
    environment:
      TZ: ${TZ}
      POSTGRES_DB: ${CADDY_DATABASE_NAME}
      POSTGRES_PASSWORD: ${CADDY_DATABASE_PASSWORD}
      POSTGRES_USER: ${CADDY_DATABASE_USERNAME}
    healthcheck:
      interval: 10s
      retries: 10
      test: pg_isready -U ${CADDY_DATABASE_USERNAME} -d ${CADDY_DATABASE_NAME}
      timeout: 2s

configs:
  Caddyfile:
    template_driver: golang
    name: Caddyfile_${HASH}
    file: ./Caddyfile.prod

networks:
  gateway:
    name: gateway_${ENV}
    external: true
  internal:
    driver_opts:
      encrypted: "true"

volumes:
  postgres:
    driver: local
    driver_opts:
      type: none
      device: /mnt/data/containers/${ENV}/caddy/postgres
      o: bind

And some example usage (each label is for a different service):

      labels:
        caddy: "*.${MAIN_DOMAIN} ${MAIN_DOMAIN}"
        caddy.@ddns-updater: host ddns-updater.${MAIN_DOMAIN}
        caddy.handle: "@ddns-updater"
        caddy.handle.0_import: forward-auth
        caddy.handle.1_import: auth-group admins
        caddy.handle.2_import: reverse-proxy ddns-updater:80 @auth-group

      labels:
        caddy: "*.${MAIN_DOMAIN} ${MAIN_DOMAIN}"
        caddy.@gitea: host gitea.${MAIN_DOMAIN}
        caddy.handle: "@gitea"
        caddy.handle.import: reverse-proxy gitea:80 *

      labels:
        caddy: "*.${MAIN_DOMAIN} ${MAIN_DOMAIN}"
        caddy.@gatus: host status.${MAIN_DOMAIN}
        caddy.handle: "@gatus"
        caddy.handle.0_import: forward-auth
        caddy.handle.1_import: reverse-proxy gatus:80 *
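For reference, this is roughly the Caddyfile that caddy-docker-proxy renders from the first label set above (my understanding of its label conversion; numeric prefixes like 0_ only order directives and are stripped). ${MAIN_DOMAIN} is substituted as the placeholder example.com:

*.example.com example.com {
	@ddns-updater host ddns-updater.example.com
	handle @ddns-updater {
		import forward-auth
		import auth-group admins
		import reverse-proxy ddns-updater:80 @auth-group
	}
}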
