On my workstation, joining my many (some very large) MUCs with the demo works fine. On my old, underpowered Chromebook it runs for a very long time and seems like it may never finish.
Main bottleneck: fetching CAPS. Every inbound presence triggers a db read, followed by an outbound iq if nothing is found, followed by a db write when the reply arrives.
The expectation is that in a long-running client the db read usually finds something and we're done, since caps are not very varied. On first join, however, this has a thundering-herd problem: no replies to the iqs have arrived yet, so the db is still empty and we send many possibly redundant queries, which slows things down considerably.
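A minimal sketch of one way to collapse the redundant queries, with all names assumed for illustration (`CapsFetcher`, `Caps`, `sendDiscoQuery` are not the SDK's real API): track in-flight lookups by ver hash and queue callers instead of re-sending.

```haxe
// Sketch only: collapse concurrent caps lookups for the same ver hash
// into one outbound disco#info iq. CapsFetcher, Caps and sendDiscoQuery
// are illustrative names, not the SDK's real API.
typedef Caps = { ver: String, features: Array<String> };

class CapsFetcher {
	// ver hash -> callbacks waiting on the single in-flight query
	final pending = new Map<String, Array<Caps->Void>>();

	public function new() {}

	public function fetch(ver: String, onResult: Caps->Void) {
		final waiting = pending.get(ver);
		if (waiting != null) {
			// A query for this ver is already in flight: queue instead
			// of sending a redundant iq.
			waiting.push(onResult);
			return;
		}
		pending.set(ver, [onResult]);
		sendDiscoQuery(ver, (caps) -> {
			// One reply, one db write, then fan out to every waiter.
			for (cb in pending.get(ver)) cb(caps);
			pending.remove(ver);
		});
	}

	// Stub: the real client would send the iq and persist the reply.
	dynamic function sendDiscoQuery(ver: String, reply: Caps->Void) {}
}
```

This keeps the db-read-first flow but caps the initial flood at one iq and one db write per distinct ver hash.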
Furthermore, we persist the whole `Chat` on every presence update. The writes on my Chromebook are pretty slow. The following speeds it up significantly, at the expense of possible data loss:

```haxe
if (mucUser == null || mucUser?.allTags("status")?.find((status) -> status.attr.get("code") == "110") != null) persistence.storeChat(accountId(), chat);
```

Trying with this, and with caps fetching disabled, I am able to get everything synced in the demo on my Chromebook in a somewhat reasonable amount of time.
With a MUC we probably need to do a full leave-join anyway if we got cut off before we finished receiving all presences, so this loss only matters on a smacks resume, and it would get rectified at the next self-ping (which could detect that we don't have a 110 presence, meaning we are not fully joined, and do a rejoin to get the presences again).
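A rough sketch of that self-ping recovery; every name here is assumed for illustration, not the SDK's actual API:

```haxe
// Sketch of the self-ping recovery described above. MucChat's fields,
// sendPing and rejoin are illustrative names, not the SDK's API.
typedef MucChat = { jid: String, nick: String, sawSelfPresence: Bool };

class SelfPingCheck {
	// XEP-0410-style check: ping our own occupant JID. If the ping
	// errors, or we never saw our own code-110 presence (e.g. it was
	// lost across a smacks resume), do a full leave-join so the server
	// re-sends every presence.
	public static function check(chat: MucChat,
			sendPing: (to: String, cb: (ok: Bool) -> Void) -> Void,
			rejoin: MucChat->Void) {
		sendPing(chat.jid + "/" + chat.nick, (ok) -> {
			if (!ok || !chat.sawSelfPresence) rejoin(chat);
		});
	}
}
```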
Another option, less specific to MUC, with similar tradeoffs, but one that helps with a presence flood from any source, would be to throttle updates. This could be done at the SDK level, or, if we think the persistence layer knows best where to put the tradeoff for itself, inside implementations of `storeChat`. This is basically what Conversations does, so there is precedent for this approach.
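For illustration, a coalescing `storeChat` inside a persistence implementation might look like this; `haxe.Timer` is real, but the `Chat` shape and `reallyStoreChat` are assumptions:

```haxe
// Sketch of coalescing writes inside a persistence implementation.
// haxe.Timer is real; the Chat shape and reallyStoreChat are assumed.
typedef Chat = { chatId: String };

class ThrottledStore {
	// Latest unsaved state per chat; older pending copies are replaced.
	final dirty = new Map<String, Chat>();
	var flushTimer: haxe.Timer = null;

	public function new() {}

	public function storeChat(accountId: String, chat: Chat) {
		dirty.set(accountId + "/" + chat.chatId, chat);
		if (flushTimer == null) {
			// During a presence flood this turns hundreds of writes
			// into at most one write per chat every 500ms.
			flushTimer = haxe.Timer.delay(() -> {
				flushTimer = null;
				for (c in dirty) reallyStoreChat(c);
				dirty.clear();
			}, 500);
		}
	}

	// Stub for the actual db write, whatever the backend is.
	dynamic function reallyStoreChat(chat: Chat) {}
}
```

The tradeoff is the same as above: an update can be lost if we die inside the flush window, but only the most recent state per chat ever needs to be written.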