I am referring to the part of the receive_output method that cancels the application on timeout. What I have experienced is that if, in a test, you call receive_output when no output is available, you not only get a TimeoutError: you cannot catch that exception and proceed, because the whole application has been cancelled, and any further attempt to receive output, or even to reconnect the communicator, will fail with CancelledError.
There is a question mark in the title because I certainly do not know if this is necessary in some other circumstance! But I don't see a reason to do this, hence raising an issue.
Assuming there is a reason for this, it may help to have some context on why I would want to call receive_output when none is available: it's handy when you don't actually know how much output you're expecting, because you can do something like:
outputs = []
while True:
    outputs.append(await communicator.receive_output())
    ...
Of course, there are checks one could perform to see whether output is available; but maybe there's no need to. If there is a need, my tests are actually synchronous, so performing them is more annoying and costly than it might seem :P
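To make the pattern above concrete: here is a minimal, self-contained sketch of the drain-until-timeout loop, using a plain asyncio.Queue as a stand-in for the communicator (the `drain` helper and the queue are illustrative only, not the Channels API). The point is that a catchable TimeoutError lets the caller stop cleanly, which is the behaviour the issue argues receive_output should allow:

```python
import asyncio


async def drain(queue: asyncio.Queue, timeout: float = 0.1) -> list:
    """Collect items until none arrive within the timeout, then stop cleanly."""
    outputs = []
    while True:
        try:
            # Like receive_output(), this raises TimeoutError when nothing
            # is available within the timeout.
            item = await asyncio.wait_for(queue.get(), timeout=timeout)
        except asyncio.TimeoutError:
            # The exception is catchable and the caller simply carries on --
            # nothing else gets cancelled.
            break
        outputs.append(item)
    return outputs


async def main():
    queue = asyncio.Queue()
    for i in range(3):
        queue.put_nowait(i)
    collected = await drain(queue)
    print(collected)  # every queued item; the loop exits on timeout


asyncio.run(main())
```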
At this point I don't recall why that's there; I can only presume my past self had some reasoning, and I'd have to go spend a few hours re-learning that specific code to see if there is something.
It's possible this was more necessary in prior Python versions, too; async in the 3.6 era needed a lot more workarounds.
Well, this was the commit which added it: daebab1. I don't know if you already checked that :P
I guess what I was hoping to understand was something a bit more general, though six years ago is a long time even for that: whether there was an underlying assumption that timing out while waiting for output ought to be fatal, and if so, any rationale around that.
I do notice, from looking at that commit a second time, that the same change was made in both wait() and receive_output(). In wait() it makes more sense to me, because there you were waiting for the task to finish anyway; so maybe the behaviour was simply carried over to the receive_output case. But my first assumption would still be that you can wait with a timeout, and if the timeout elapses, the task should be left alone.
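The "leave the task alone on timeout" alternative suggested here is how plain asyncio behaves if you shield the awaited task: asyncio.wait_for cancels whatever it awaits on timeout, but wrapping it in asyncio.shield means only the wait is abandoned, not the task. A small self-contained sketch of the distinction, using a toy producer rather than the Channels internals:

```python
import asyncio


async def slow_producer(queue: asyncio.Queue):
    # Takes longer than the timeout below, so the first wait gives up.
    await asyncio.sleep(0.2)
    await queue.put("late output")


async def main():
    queue = asyncio.Queue()
    task = asyncio.ensure_future(slow_producer(queue))

    try:
        # wait_for alone would cancel `task` on timeout; shielding it means
        # only the outer wait is cancelled and the task keeps running.
        await asyncio.wait_for(asyncio.shield(task), timeout=0.05)
    except asyncio.TimeoutError:
        pass

    assert not task.cancelled()  # the producer survived the timeout
    await task                   # and can still be awaited to completion
    print(await queue.get())


asyncio.run(main())
```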
Yes, I'm afraid a one-line commit message from six years ago does not restore my memory in this particular case. If you can prove it's not needed via some tests or something, I'd be happy to remove it.