Fetching metadata on subscribe causing high memory on client and server #100
I got some alloc_space pprofs to work out why my servers were broken: liftbridge allocs profile (attached image)

client profile (attached image):
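For anyone reproducing this: profiles like these can be collected with Go's standard pprof tooling (this is generic Go practice, not necessarily how the profiles above were gathered; localhost:6060 is just the conventional address):

```go
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers /debug/pprof/* handlers on DefaultServeMux
)

func main() {
	// Once this is running, inspect cumulative allocation sizes with:
	//   go tool pprof -alloc_space http://localhost:6060/debug/pprof/heap
	log.Fatal(http.ListenAndServe("localhost:6060", nil))
}
```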
When subscribing to a stream, if the stream is not in the metadataCache, the cache is completely updated, resulting in the FetchMetadata RPC being called. This can happen if the stream is new or does not exist. (go-liftbridge/v2/client.go, line 1518 at c842e2f.)
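To illustrate, here is a minimal sketch of that slow path. The names metadataCache and FetchMetadata follow the issue text, but the types and helpers (fetchMetadata, subscribeToPartition) are hypothetical stand-ins, not the actual go-liftbridge code:

```go
package main

import "context"

// Hypothetical, simplified stand-ins for the real client types.
type metadata struct{ streams map[string]bool }

type client struct{ metadataCache *metadata }

// fetchMetadata stands in for the cluster-wide FetchMetadata RPC, which
// returns metadata for every stream in the cluster.
func (c *client) fetchMetadata(ctx context.Context) (*metadata, error) {
	return &metadata{streams: map[string]bool{}}, nil // stubbed RPC
}

func (c *client) subscribeToPartition(ctx context.Context, stream string) error {
	return nil // stubbed subscription plumbing
}

// Subscribe shows the slow path described above: a cache miss triggers a
// refresh of the entire cache rather than a lookup scoped to one stream.
func (c *client) Subscribe(ctx context.Context, stream string) error {
	if c.metadataCache != nil && c.metadataCache.streams[stream] {
		return c.subscribeToPartition(ctx, stream) // fast path: cache hit
	}
	// Cache miss (stream is new or does not exist): refetch metadata for
	// ALL streams. With thousands of streams the response is large, and
	// every subscriber racing the publisher repeats the same expensive call.
	md, err := c.fetchMetadata(ctx)
	if err != nil {
		return err
	}
	c.metadataCache = md
	return c.subscribeToPartition(ctx, stream)
}

func main() {
	_ = (&client{}).Subscribe(context.Background(), "foo")
}
```

Every subscriber that misses the cache pays the full marshal/unmarshal cost of cluster-wide metadata, which is consistent with the allocation profiles above.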
We only create a stream when a message is published, to save on unnecessary creates, but this means that if multiple subscribers attempt to subscribe before the publisher, a lot of FetchMetadata RPCs are made. When there are thousands of streams in the Liftbridge cluster (I had 3000 when I hit this), the metadata gets very big, and marshalling it so frequently caused one of my Liftbridge servers to become unresponsive and used up all the memory on my client.

Our Liftbridge client service holds client connections to multiple Liftbridge clusters, so even storing the full metadata for each cluster is more memory than we would like. Keeping track of the brokers for a cluster is obviously necessary, but do we need all the streams? Could individual stream partitions be fetched into the cache on demand?
Reply:

Yes, this is an area for improvement I've had in mind. The client should only fetch the streams it needs. Also, the …
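A rough sketch of what the on-demand idea above could look like. fetchStreamMetadata is a hypothetical single-stream RPC (it would need server-side support); the types are simplified stand-ins, not the actual client:

```go
package main

import (
	"context"
	"errors"
)

var errNoSuchStream = errors.New("no such stream")

// streamCache holds metadata only for streams this client actually uses.
type streamCache struct{ known map[string]bool }

type client struct{ cache streamCache }

// fetchStreamMetadata is a hypothetical RPC scoped to a single stream,
// returning whether the stream exists; it avoids marshalling metadata
// for every stream in the cluster.
func (c *client) fetchStreamMetadata(ctx context.Context, stream string) (bool, error) {
	return true, nil // stubbed RPC
}

func (c *client) subscribeToPartition(ctx context.Context, stream string) error {
	return nil // stubbed subscription plumbing
}

// Subscribe fetches metadata for one stream on demand instead of
// refreshing the whole cluster's metadata on every cache miss.
func (c *client) Subscribe(ctx context.Context, stream string) error {
	if c.cache.known == nil {
		c.cache.known = map[string]bool{}
	}
	if !c.cache.known[stream] {
		found, err := c.fetchStreamMetadata(ctx, stream)
		if err != nil {
			return err
		}
		if !found {
			return errNoSuchStream
		}
		c.cache.known[stream] = true // cache only what we use
	}
	return c.subscribeToPartition(ctx, stream)
}

func main() {
	_ = (&client{}).Subscribe(context.Background(), "foo")
}
```

With this shape, client memory scales with the streams a client subscribes to rather than with the total number of streams in the cluster, and broker metadata could still be cached separately in full.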