We now have the fastest-consumer (aka tokio-style) behavior from #229, but there seems to be demand from others, as mentioned in python-trio/trio#987, for a non-lossy variant which applies strict backpressure to laggers.
I had some brief notes in #229 (though not sure they're relevant any more after refining that patch):
If we wanted to also support a slowest-consumer style, I'm pretty sure there's an easy hack: inherit and proxy reads through to `trio._channel.MemoryChannelState.data` (presuming the queue size passed to `broadcast_receiver()` and the underlying mem chan's deque are the same size).
If `.data` here proxied to `lambda: max(self._subs.values())` and the order of these branches (shown below) were reversed, it might just work, no?
```python
elif len(self._state.data) >= self._state.max_buffer_size:
    # causes ``.send()`` to block
    raise trio.WouldBlock
else:
    self._state.data.append(value)
```
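Concretely, the hack might look something like the following (a rough sketch only: it pokes at trio internals, `_LagDeque` and `open_lag_gated_channel` are made-up names, and `subs` is assumed to be the broadcast state's subscriber-lag map, i.e. subscriber key -> how many values behind the latest `send()` that subscriber is):

```python
# Illustration only: this leans on trio implementation details, and the
# receive path (``popleft()`` / truthiness checks on ``.data``) would
# need the same treatment to be actually usable.
from collections import deque

import trio


class _LagDeque(deque):
    '''
    A real deque whose ``len()`` instead reports the slowest
    subscriber's lag, so the (reversed) branches above raise
    ``trio.WouldBlock`` once the biggest lagger is ``max_buffer_size``
    values behind.
    '''
    def __init__(self, subs: dict):
        super().__init__()
        # hypothetical: subscriber key -> current lag in values
        self._subs = subs

    def __len__(self) -> int:
        return max(self._subs.values(), default=0)


def open_lag_gated_channel(max_buffer_size: int, subs: dict):
    # hypothetical helper: swap the state's deque for the lag-reporting
    # proxy; ``._state`` is private trio API.
    tx, rx = trio.open_memory_channel(max_buffer_size)
    tx._state.data = _LagDeque(subs)
    return tx, rx
```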
On further thought I think this may require a deeper investment: hooking into the send side of the mem chan underlying the receive channel, to get this behavior without relying on implementation details of trio's builtin mem chans.
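For instance (a minimal sketch, not a proposed API; `StrictBroadcaster` and its methods are made-up names), fanning out over a per-subscriber mem chan and awaiting every send gives the strict-backpressure semantics without touching any internals:

```python
from __future__ import annotations

import trio


class StrictBroadcaster:
    '''
    Non-lossy broadcast: ``send()`` blocks until every subscriber has
    buffer room, i.e. strict backpressure from the biggest lagger.
    '''
    def __init__(self, max_buffer_size: int):
        self._max = max_buffer_size
        self._subs: list[trio.MemorySendChannel] = []

    def subscribe(self) -> trio.MemoryReceiveChannel:
        tx, rx = trio.open_memory_channel(self._max)
        self._subs.append(tx)
        return rx

    async def send(self, value) -> None:
        for tx in self._subs:
            # blocks when *this* subscriber's buffer is full, pinning
            # the producer's rate to the slowest consumer.
            await tx.send(value)
```

The obvious trade-off is that `send()` serializes on the laggiest subscriber, and a real version would need unsubscribe/close handling since `await tx.send()` raises `trio.BrokenResourceError` once a receiver has been closed.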
Ideas from lurkers welcome.