✨ Migrated JSON RPC server to use asyncio and added cancellation support #624
Conversation
Closing for now, will re-open once we determine it's time to move to asyncio.
Force-pushed from bd19c4a to 8ad8666
With the removal of tracing, I'd feel comfortable if we at least had trace logs for important functions at the start and end. A few questions...
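For reference, a minimal sketch of the kind of start/end trace logging being asked for, assuming a standard `logging` logger and a decorator-based approach (the `traced` name is illustrative, not part of this PR):

```python
import functools
import logging

logger = logging.getLogger(__name__)


def traced(func):
    """Log entry and exit of an async function at debug level."""

    @functools.wraps(func)
    async def wrapper(*args, **kwargs):
        logger.debug("entering %s", func.__qualname__)
        try:
            return await func(*args, **kwargs)
        finally:
            logger.debug("exiting %s", func.__qualname__)

    return wrapper
```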
self.args.append("-depOpenSourceLabelsFile") | ||
self.args.append(str(dep_open_source_labels_path)) | ||
|
||
self.excluded_paths = excluded_paths |
What is going on with `excluded_paths` here?
I do believe that we are setting the defaults in `__init__`.
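For context, a minimal sketch of the pattern being described, with a purely illustrative class name and signature:

```python
class ServerConfig:
    def __init__(self, excluded_paths: list[str] | None = None) -> None:
        # Default assigned here rather than via a mutable default argument.
        self.excluded_paths = excluded_paths if excluded_paths is not None else []
```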
```python
# StreamReader's limit, it will just straight up not repopulate the
# buffer with fresh data. Thus, we need to read in chunks.
left = content_length
chunk_size = 1024
```
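A self-contained sketch of the chunked-read pattern shown in this hunk, assuming an `asyncio.StreamReader` and a known `Content-Length` (the function name is illustrative):

```python
import asyncio


async def read_body(reader: asyncio.StreamReader, content_length: int) -> bytes:
    """Read exactly content_length bytes by requesting small chunks."""
    chunks: list[bytes] = []
    left = content_length
    chunk_size = 1024
    while left > 0:
        data = await reader.read(min(chunk_size, left))
        if not data:
            # EOF arrived before the full body did.
            raise asyncio.IncompleteReadError(b"".join(chunks), content_length)
        chunks.append(data)
        left -= len(data)
    return b"".join(chunks)
```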
Isn't the limit something like 64 KiB? 1024 sounds too low.
Experimentally, I was having consistency issues with anything higher than 1024
```python
if self.recv_file.closed:
    while True:
        try:
            result += await reader.readuntil(b"\n")
```
Wouldn't this reader have the same problem as above?
No, because once a stream is exhausted, the StreamReader will reliably return EOF: https://docs.python.org/3/library/asyncio-stream.html#asyncio.StreamReader
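For illustration, a sketch of reading with `readuntil` until EOF, relying on `asyncio.IncompleteReadError` to signal exhaustion (the surrounding function is hypothetical, not code from this PR):

```python
import asyncio


async def drain_lines(reader: asyncio.StreamReader) -> bytes:
    """Accumulate newline-terminated data until the stream reaches EOF."""
    result = b""
    while True:
        try:
            result += await reader.readuntil(b"\n")
        except asyncio.IncompleteReadError as e:
            # EOF: readuntil hands back whatever partial data remained.
            result += e.partial
            break
        except asyncio.LimitOverrunError:
            # Separator lies beyond the buffer limit; drain what is buffered.
            result += await reader.read(64 * 1024)
    return result
```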
Unfortunately, this PR is pretty large because it has to be.

Why move to `asyncio`? We could only handle processing one incoming request or notification at a time. This was undesirable for many reasons, one of them being bi-directional communication with the IDE. Now we can! Take a look at `tests/test_jsonrpc.py` for an example of it succeeding. Hopefully the server can do more useful stuff while waiting around for the LLM calls.

Also added a `getCodeplanAgentSolution.cancel` to cancel the current request.

Todos:
- `getCodeplanAgentSolution` in one big lock.
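As a rough illustration of the cancellation idea (not the PR's actual implementation; class, method, and attribute names are assumptions), the server could keep each in-flight request's `asyncio.Task` keyed by request id and cancel it when a cancel request arrives:

```python
import asyncio
from typing import Any, Coroutine


class JsonRpcServer:
    def __init__(self) -> None:
        # In-flight request tasks, keyed by JSON-RPC request id.
        self.inflight: dict[int, asyncio.Task] = {}

    async def handle_request(self, request_id: int, coro: Coroutine[Any, Any, Any]) -> None:
        task = asyncio.create_task(coro)
        self.inflight[request_id] = task
        try:
            await task
        except asyncio.CancelledError:
            # The request was cancelled; a "cancelled" error response
            # would be sent back to the client here.
            pass
        finally:
            self.inflight.pop(request_id, None)

    def cancel_request(self, request_id: int) -> None:
        # Invoked when a cancel request arrives for request_id.
        task = self.inflight.get(request_id)
        if task is not None:
            task.cancel()
```

Because each request runs in its own task, a cancel arriving on the same connection can be dispatched while the original handler is still awaiting an LLM call.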