MUC join performance considerations #36

Open
singpolyma opened this issue Nov 11, 2024 · 1 comment

Comments

@singpolyma
Collaborator

On my workstation, joining my many (some very large) MUCs with the demo works fine. On my old, underpowered Chromebook it runs for a very long time and seems like it may never finish.

Main bottleneck: fetching caps (XEP-0115 entity capabilities). On every inbound presence we trigger a db read, followed by an iq out if nothing is found, followed by a db write when we get the reply.

The expectation is that, in a running client over time, the db read usually finds something and we're done, since caps are not super varied. However, on first join this has a thundering herd problem: no iq replies have come back yet, so the db is still empty and we send many possibly redundant queries, which slows things down considerably.
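One way to avoid the redundant queries would be to track in-flight caps lookups per ver string, so only the first presence with an unseen ver sends an iq and later presences just wait for that reply. Rough sketch only; `Caps`, `getCached`, `storeCached` and `queryDiscoInfo` are made-up names standing in for whatever the SDK actually uses:

```haxe
// Sketch of de-duplicating caps lookups on first join. All names here are
// placeholders, not the real SDK API.
typedef Caps = { ver: String, features: Array<String> };
typedef CapsCallback = (caps:Null<Caps>)->Void;

class CapsFetcher {
	final getCached: (ver:String, cb:CapsCallback)->Void;
	final storeCached: (ver:String, caps:Caps)->Void;
	final queryDiscoInfo: (jid:String, cb:CapsCallback)->Void;
	// ver string -> callbacks waiting on the single in-flight disco#info query
	final pending = new Map<String, Array<CapsCallback>>();

	public function new(getCached, storeCached, queryDiscoInfo) {
		this.getCached = getCached;
		this.storeCached = storeCached;
		this.queryDiscoInfo = queryDiscoInfo;
	}

	public function fetch(jid: String, ver: String, callback: CapsCallback) {
		getCached(ver, (cached) -> {
			if (cached != null) {
				callback(cached); // the common case in a long-running client
				return;
			}
			final waiting = pending.get(ver);
			if (waiting != null) {
				// A query for this ver is already in flight: wait for it
				// instead of sending another redundant iq.
				waiting.push(callback);
				return;
			}
			pending.set(ver, [callback]);
			queryDiscoInfo(jid, (caps) -> {
				if (caps != null) storeCached(ver, caps);
				final callbacks = pending.get(ver);
				pending.remove(ver);
				if (callbacks != null) for (cb in callbacks) cb(caps);
			});
		});
	}
}
```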

Furthermore, we persist the whole Chat on every presence update, and the writes on my Chromebook are pretty slow. The following speeds it up significantly at the expense of possible data loss:

`if (mucUser == null || mucUser?.allTags("status")?.find((status) -> status.attr.get("code") == "110") != null) persistence.storeChat(accountId(), chat);`

Trying with this and with caps fetching disabled, I am able to get everything synced in the demo on my Chromebook in a somewhat reasonable amount of time.
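For readability, the same guard spelled out with comments (this sits in the MUC presence handler; `mucUser` is the `http://jabber.org/protocol/muc#user` `<x/>` payload of the inbound presence, and `allTags`/`attr`/`find` are as in the snippet, e.g. with `using Lambda`):

```haxe
// Self-presence in a MUC carries status code 110 (XEP-0045).
final selfPresence = mucUser?.allTags("status")
	?.find((status) -> status.attr.get("code") == "110") != null;
if (mucUser == null || selfPresence) {
	// Only write for non-MUC presence or our own self-presence, instead of
	// persisting the whole Chat for every occupant of a big room.
	persistence.storeChat(accountId(), chat);
}
```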

With a MUC we probably need to do a full leave-join anyway if we got cut off before we finished getting all the presences, so this loss only matters on a smacks resume, and it would get rectified at the next self-ping, which could detect that we don't have a 110 presence (so not fully joined) and do a rejoin to get the presences again.
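A rough sketch of what that check could look like; the names here (`gotSelfPresence`, `selfPing`, `leave`, `join`) are placeholders, not the existing API:

```haxe
// Sketch only: field and method names are placeholders.
typedef MucChat = {
	var gotSelfPresence: Bool;                 // set once we see our own code 110 presence
	function join(done:()->Void): Void;
	function leave(done:()->Void): Void;
	function selfPing(result:(ok:Bool)->Void): Void;
}

// Periodic check: if we never recorded self-presence we are not fully joined
// (cut off mid-join, or the persisted Chat lost it), so do a full leave-join;
// otherwise do the usual XEP-0410 self-ping and rejoin only if that fails.
function ensureJoined(chat: MucChat, done: ()->Void) {
	if (!chat.gotSelfPresence) {
		chat.leave(() -> chat.join(done));
		return;
	}
	chat.selfPing((ok) -> {
		if (ok) done() else chat.join(done);
	});
}
```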

Another option, less specific to MUC and with similar tradeoffs, but one that helps with a presence flood from any source, would be to throttle updates. This could be done at the SDK level, or, if we think the persistence layer knows more about where best to put that tradeoff, inside implementations of storeChat. This is basically what Conversations does, so there is precedent for this approach.
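If we did put it inside a persistence implementation, the wrapper could look something like this (sketch only; assumes `haxe.Timer` works on the target, as in the browser demo, and a made-up `Chat`/`doStoreChat` shape):

```haxe
import haxe.Timer;

typedef Chat = { chatId: String }; // placeholder; the real Chat carries much more

// Sketch of throttling inside a persistence implementation: coalesce rapid
// storeChat calls for the same chat into one real write after a quiet period.
// doStoreChat stands in for the underlying (slow) write.
class ThrottledChatStore {
	final doStoreChat: (accountId:String, chat:Chat)->Void;
	final delayMs: Int;
	final dirty = new Map<String, Chat>();   // key -> most recent Chat not yet written
	final timers = new Map<String, Timer>(); // key -> pending flush timer

	public function new(doStoreChat: (accountId:String, chat:Chat)->Void, delayMs: Int = 2000) {
		this.doStoreChat = doStoreChat;
		this.delayMs = delayMs;
	}

	public function storeChat(accountId: String, chat: Chat) {
		final key = accountId + "\n" + chat.chatId;
		dirty.set(key, chat);               // always remember the latest version
		if (timers.exists(key)) return;     // a flush is already scheduled
		timers.set(key, Timer.delay(() -> {
			timers.remove(key);
			final latest = dirty.get(key);
			dirty.remove(key);
			if (latest != null) doStoreChat(accountId, latest);
		}, delayMs));
	}
}
```

The downside is the same as above: anything still sitting unwritten when the process dies is lost.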

@singpolyma
Collaborator Author

I do see there is already some code to prevent the caps thundering herd, so maybe that was a red herring; not sure yet.
