Constantly loses connection to LLM (both local and cloud) on long responses #4373
Labels: area:chat, ide:jetbrains, ide:vscode, kind:bug, needs-triage, priority:high
Description
It just stops receiving the streamed response from the LLM after about 400 lines of generated code, and there are no related errors in the plugin logs.
But in the LM Studio server logs I'm seeing this:
2025-02-26 13:48:11 [INFO]
Finished streaming response
2025-02-26 13:48:11 [INFO]
[LM STUDIO SERVER] Client disconnected. Stopping generation... (If the model is busy processing the prompt, it will finish first.)
It looks like the plugin just closes the connection for some reason.
I tried increasing the timeout, but it didn't help.
As a result, it returns only half of the file that was being generated at that moment.
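Since increasing the timeout didn't help, one thing worth checking is whether the timeout is applied per read or to the whole response: a per-read timeout that is shorter than a single stall in generation will abort the stream mid-response, producing exactly the "Client disconnected" message on the server side. Below is a minimal, self-contained sketch of that failure mode (the port number, chunk sizes, and timings are arbitrary, and the server is a stand-in for LM Studio, not its real implementation):

```python
import socket
import threading
import time

def slow_server(port, gap_seconds):
    """Accept one client and stream chunks with a pause between them,
    mimicking an LLM server that stalls while generating a long response."""
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    srv.listen(1)
    conn, _ = srv.accept()
    try:
        for _ in range(5):
            conn.sendall(b"chunk\n")
            time.sleep(gap_seconds)  # pause between chunks
    except (BrokenPipeError, ConnectionResetError):
        pass  # client hung up mid-stream, like "Client disconnected" in the log
    finally:
        conn.close()
        srv.close()

def read_stream(port, read_timeout):
    """Read chunks until the per-read timeout fires; return what arrived."""
    cli = socket.create_connection(("127.0.0.1", port))
    cli.settimeout(read_timeout)  # applies to EACH recv, not the whole response
    received = b""
    try:
        while True:
            data = cli.recv(64)
            if not data:
                break
            received += data
    except socket.timeout:
        pass  # a generation gap longer than the timeout aborts the stream early
    finally:
        cli.close()
    return received

threading.Thread(target=slow_server, args=(8765, 1.0), daemon=True).start()
time.sleep(0.2)  # give the server a moment to start listening
partial = read_stream(8765, read_timeout=0.3)  # shorter than the 1.0 s gap
print(len(partial))  # only the chunks sent before the first long gap arrive
```

If the plugin behaves like this client, raising a total-response timeout would change nothing; the fix would be to raise (or remove) the per-read/idle timeout instead.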
To reproduce
The problem is reproducible with both the local LLM and the cloud DeepSeek, so I don't think it's related to my local LLM or my internet connection. I also don't have any similar issues with any other tool I use.
Log output