
High memory usage in l4 matcher #284

Open
divyam234 opened this issue Jan 9, 2025 · 8 comments
Labels
bug Something isn't working

Comments

divyam234 commented Jan 9, 2025

[screenshot: pprof memory profile]

The postgres matcher is using very high memory even when I am not using it, which eventually kills Caddy due to OOM. During profiling I only called imgproxy with 20 concurrent requests.

{
    servers {
        listener_wrappers {
            layer4 {
                @postgres postgres
                route @postgres {
                    proxy postgres:5432
                }
                @redis regexp ^[+\-$:_#,!=\(\*\%\~\|\>]$ 1
                route @redis {
                    proxy redis:6379
                }
                route
            }
            tls
        }
    }
}

:80 {
    reverse_proxy /debug/pprof/* localhost:2019 {
        header_up Host {upstream_hostport}
    }
}

imgproxy.example.com {
    tls internal
    reverse_proxy imgproxy:8080
}
@divyam234 divyam234 changed the title High memory usage in postgres matcher High memory usage in l4 matcher Jan 9, 2025
@mholt mholt added the bug Something isn't working label Jan 9, 2025
mholt (Owner) commented Jan 9, 2025

I wonder if @metafeather would be able to look into it 😃

mholt (Owner) commented Jan 9, 2025

Actually, a closer look at that profile tree shows that the memory sum is already over 99% -- do you have the rest of the tree? Maybe try svg as well to get the full picture. (Sorry, @metafeather -- might not be a leak in the postgres matcher!)

divyam234 (Author) commented Jan 9, 2025

mholt (Owner) commented Jan 9, 2025

Hmm, well, maybe not so much a leak as it is just lack of pooled buffers. @metafeather Any interest in using sync.Pool?

@divyam234 How many of your connections are Postgres? What if you reorder the matchers so it's not first?
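A minimal sketch of the kind of buffer pooling with sync.Pool being suggested here; the package name, pool, helper, and 32 KiB size are assumptions for illustration, not the actual caddy-l4 code:

package layer4matchers // hypothetical package name for illustration

import (
	"io"
	"sync"
)

// bufPool hands out reusable 32 KiB scratch buffers so each matching
// attempt does not allocate its own large slice.
var bufPool = sync.Pool{
	New: func() any {
		b := make([]byte, 32*1024)
		return &b
	},
}

// readPrefix reads up to n bytes (capped at the pooled buffer size) from r
// into a pooled buffer and returns a copy of what was read.
func readPrefix(r io.Reader, n int) ([]byte, error) {
	bp := bufPool.Get().(*[]byte)
	defer bufPool.Put(bp)

	if n > len(*bp) {
		n = len(*bp)
	}
	buf := (*bp)[:n]
	read, err := io.ReadFull(r, buf)
	if err != nil && err != io.ErrUnexpectedEOF {
		return nil, err
	}
	out := make([]byte, read)
	copy(out, buf[:read])
	return out, nil
}

Pooling like this removes the per-connection allocation, but it only helps if the matcher also stops requesting very large reads in the first place, which is what the later comments get at.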

divyam234 (Author) commented

Reordering gives the same result. It's only fixed after removing the postgres block. During profiling there were no postgres connections.

mholt (Owner) commented Jan 10, 2025

Busy server, I'm guessing?

My first thought would be we could try pooling the buffers in that matcher.

vnxme (Collaborator) commented Jan 22, 2025

If we look closer at the postgres matcher code, we can see here that, for each matching iteration, it reads 4 bytes, then up to 2^32-4 bytes, and then processes some of those bytes to decide whether the connection is Postgres or not.

I wonder if we can rewrite this matcher to consume fewer bytes or have fewer loops. There is also a for loop here that doesn't look good to me at first sight, especially given that this matcher returns true if at least one "startup parameter" has been found. But I don't have a Postgres setup to test it properly.
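For illustration, one way the matcher could bound its reads: parse the 4-byte big-endian length prefix of the startup message first and bail out if the declared length is implausible, instead of attempting to read up to 2^32-4 bytes. The cap, package, and function names below are assumptions, not the existing matcher code:

package layer4matchers // hypothetical package name for illustration

import (
	"encoding/binary"
	"errors"
	"io"
)

// maxStartupLen is an assumed sanity cap: a real Postgres startup message
// (protocol version plus a few key/value parameters) is far smaller.
const maxStartupLen = 8 * 1024

var errNotPostgres = errors.New("not a postgres startup message")

// readStartup reads the 4-byte length prefix and, only if the declared
// length is plausible, the remainder of the startup message.
func readStartup(r io.Reader) ([]byte, error) {
	var lenBuf [4]byte
	if _, err := io.ReadFull(r, lenBuf[:]); err != nil {
		return nil, err
	}
	msgLen := binary.BigEndian.Uint32(lenBuf[:])
	// The length field counts itself, so anything below 8 cannot even hold
	// the protocol version; anything above the cap is rejected before
	// allocating a large buffer.
	if msgLen < 8 || msgLen > maxStartupLen {
		return nil, errNotPostgres
	}
	body := make([]byte, msgLen-4)
	if _, err := io.ReadFull(r, body); err != nil {
		return nil, err
	}
	return body, nil
}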

mholt (Owner) commented Jan 23, 2025

OH. I didn't realize that at a quick glance. Duh. Yeah, we can probably do better here... (I also don't really use Postgres.) @metafeather Any ideas on how we can improve this?
