Bi streaming no debugger stuff #219
Conversation
force-pushed from e084925 to d130f2c
@chrizzFTD if you've got a second I would sure love to know why these tests hang 😂
force-pushed from 25938fe to 83dd8e4
This mostly adds the api described in #53 (comment). The first draft summary:

- formalize bidir streaming using the `trio.Channel` style interface, which we derive as a `MsgStream` type.
- add `Portal.open_context()` which provides a `trio.Nursery.start()` style remote task invocation for setting up and tearing down task contexts in remote actors.
- add a distinct `'started'` message to the ipc protocol to facilitate `Context.start()` with a first return value.
- for our `ReceiveMsgStream` type, don't cancel the remote task in `.aclose()`; this is now done explicitly by the surrounding `Context` usage: `Context.cancel()`.
- streams in either direction still use a `'yield'` message, keeping the proto mostly symmetric without having to worry about which side is the caller / portal opener.
- subtlety: only allow sending a `'stop'` message during a 2-way streaming context from `ReceiveStream.aclose()`; a detailed comment with explanation is included.

Relates to #53
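For illustration, a minimal callee-side sketch of how a context function might look under this draft api. The exact method names used here (`ctx.started()`, `ctx.open_stream()`) are assumptions inferred from the summary above, not confirmed spellings:

```python
import tractor


@tractor.context
async def stream_squares(
    ctx: tractor.Context,
    limit: int,
) -> None:
    # deliver a first value to the caller via the new 'started' msg
    # (method name assumed from the draft protocol description)
    await ctx.started('ready')

    # open the bidirectional `MsgStream` for this context
    # (`open_stream()` is an assumed accessor for the stream type)
    async with ctx.open_stream() as stream:
        for i in range(limit):
            # each value still travels as a 'yield' msg under the hood
            await stream.send(i ** 2)
```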
Add clear teardown semantics for `Context` such that the remote side cancellation propagation happens only on error or if client code explicitly requests it (either by exit flag to `Portal.open_context()` or by manually calling `Context.cancel()`). Add `Context.result()` to wait on and capture the final result from a remote context function; any lingering msg sequence will be consumed/discarded.

Changes in order to make this possible:

- pass the runtime msg loop's feeder receive channel in to the context on the calling (portal opening) side such that a final `'return'` msg can be waited upon using `Context.result()`, which delivers the final return value from the callee side `@tractor.context` async function.
- always await a final result from the target context function in `Portal.open_context()`'s `__aexit__()` if the context has not been (requested to be) cancelled by client code on block exit.
- add an internal `Context._cancel_called` flag for context "cancel requested" tracking (much like `trio`'s cancel scope).
- allow flagging a stream as terminated using an internal `._eoc` flag which will mark the stream as stopped for iteration.
- drop `StopAsyncIteration` catching in `.receive()`; it does nothing.
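A complementary caller-side sketch of the teardown contract just described, reusing the hypothetical `stream_squares` callee from above; spawn kwargs such as `enable_modules` and the exact `open_context()` signature are likewise assumptions:

```python
import trio
import tractor


async def main() -> None:
    async with tractor.open_nursery() as an:
        portal = await an.start_actor(
            'streamer',
            enable_modules=[__name__],  # assumed spawn kwarg
        )

        async with portal.open_context(
            stream_squares,
            limit=10,
        ) as (ctx, first):
            # `first` is the value the callee passed in its 'started' msg
            assert first == 'ready'

            async with ctx.open_stream() as stream:
                async for value in stream:
                    if value > 25:
                        # explicitly request remote cancellation; without
                        # this, block exit instead waits on the final
                        # 'return' msg via `Context.result()` in `__aexit__()`
                        await ctx.cancel()
                        break

        await portal.cancel_actor()


if __name__ == '__main__':
    trio.run(main)
```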
Adds a new hard kill routine for the `trio` spawning backend.
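Not the actual routine from this PR, but a hypothetical sketch of what a hard-kill fallback for a `trio` subprocess backend generally looks like: a graceful terminate first, then `SIGKILL` after a timeout.

```python
import trio


async def hard_kill(
    proc: trio.Process,
    terminate_timeout: float = 3.0,
) -> None:
    # ask the child to exit gracefully first
    proc.terminate()

    with trio.move_on_after(terminate_timeout) as cs:
        await proc.wait()

    if cs.cancelled_caught:
        # child ignored SIGTERM within the timeout window: force it down
        proc.kill()
        await proc.wait()
```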
Looking at this last run it appears as though it's the discovery tests causing the hang now? I think we need a manual tester to come in and check it out.
force-pushed from f5af130 to 4157d98
force-pushed from fbcd253 to 929b6dc
Ok going to increment in from #220 onto the
As per whatever dark magic is going on in this patch set, this should solve the windows CI hangs?
Lol true 🤯 right there.
Aight, gonna consider this a clean run since the one flaky test is just non-determinism that'll be removed in #220 😎 Last couple things I'm hoping to get in here:
force-pushed from 47b0d32 to a35b938
force-pushed from a35b938 to 240f591
    @@ -378,10 +473,24 @@ async def _stream_handler(
        try:
            await self._process_messages(chan)
        finally:
            # channel cleanup sequence
Ahh dang I guess we can drop this?
Oh wait, no this is a reminder comment of sorts.
This is a first draft of a bidirectional streaming api as discussed in #53.
This specific PR is a clone of the original #209 but with debugger changes factored out to see if we can get a clean CI run as well as a smaller patch set for reviewers. There is a follow up PR #220 which includes the debugger improvements that now use and rely on this new api.
Critique and suggestions very welcome from lurkers 😎.
continued task list from #209:

- `_debug.py` remote tty locking to context api
- `async for stream` while sender is running independently in another trio task
- [ ] should we add broadcasting support here per "We need broadcast channels, stat." #204? let's keep it separate.
- the following were deferred to #223:
  - `if not self._ctx._portal` checking inside `ReceiveMsgStream.aclose()`
  - a `SendMsgStream` / `ReceiveMsgStream` type set and staple them together using a channel/messaging equivalent of `trio.StapledStream`? I'm thinking we'll make all streams a `MsgStream` and just don't allow send on receive only (which should be the minority use case i'd imagine eventually). (a rough sketch of the stapling idea follows this list)
  - the `stream` arg to `@stream` funcs a `SendMsgStream` instead of a `Context`
    - `anyio` has something similar: [`StapledObjectStream`](https://anyio.readthedocs.io/en/stable/api.html#anyio.streams.stapled.StapledObjectStream) (obvs we won't use *Object* since we're `msgpack` type constrained). Actual code is here.
    - `faust` and `ray` which (i think) are unidirectional only?
- outstanding windows hangs CI work, leak-over from #209
  - [ ] comment from #209 suggests it might have been the multi-task producer task, but removing that on 83dd8e4 shows it's still not working... it was some wacky `logging` thing 🙄
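As referenced in the deferred list above, a rough sketch of the stapling idea using the existing `trio.abc.Channel` interface; `StapledMsgStream` and its two halves are hypothetical names for illustration, not part of tractor:

```python
from dataclasses import dataclass
from typing import Any

import trio


@dataclass
class StapledMsgStream(trio.abc.Channel):
    '''Hypothetical msg-stream analogue of `trio.StapledStream`:
    glue a send-only half and a receive-only half into a single
    bidirectional `trio.abc.Channel`-style object.
    '''
    send_half: trio.abc.SendChannel
    receive_half: trio.abc.ReceiveChannel

    async def send(self, msg: Any) -> None:
        await self.send_half.send(msg)

    async def receive(self) -> Any:
        return await self.receive_half.receive()

    async def aclose(self) -> None:
        # close both halves even if the first close raises
        try:
            await self.send_half.aclose()
        finally:
            await self.receive_half.aclose()
```

For a quick loopback test you could staple the two ends of a single `trio.open_memory_channel()` together; in the real use case each half would wrap the ipc transport's send and receive sides.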