
✨ Make Kai RPC Server fail initialize call when model provider cannot be queried #614

Merged · 6 commits into konveyor:main · Feb 5, 2025

Conversation

JonahSussman (Contributor):

Implements #609

As a follow-up, perhaps the IDE should show the reason the initialize call failed? Right now you have to look directly at the logs.

JonahSussman marked this pull request as ready for review on February 4, 2025 at 23:06.
djzager (Member) left a comment:

visack w/ a few questions.

```python
except Exception as e:
    app.log.error("unable to get model provider: %s", e)
    raise
server.shutdown_flag = True
```
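For context, a minimal sketch of how this error path might hang together; everything except `shutdown_flag` is an assumption inferred from the diff, and `get_model_provider` is a hypothetical stand-in, not Kai's actual API:

```python
import logging

log = logging.getLogger("kai.rpc")

def get_model_provider(params):
    # Hypothetical stand-in: raise if the configured provider can't be reached.
    raise RuntimeError(f"cannot reach provider {params.get('provider')!r}")

def initialize(server, params):
    try:
        # Fail fast: query the model provider during initialize so a bad
        # configuration surfaces immediately instead of mid-session.
        server.model_provider = get_model_provider(params)
    except Exception as e:
        log.error("unable to get model provider: %s", e)
        server.shutdown_flag = True  # current behavior: stop the server
        raise  # the RPC layer reports the failure to the caller
```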
djzager (Member):

Does shutdown flag mean we are shutting down the server?

JonahSussman (Contributor, Author):

Yes, though I'm open to having it stay on.

djzager (Member):

I would think the end goal would be:

  1. User tries to start the server with bad arguments via the IDE
  2. Initialize fails, returns an error, but kai keeps running.
  3. IDE tells the user what they did wrong.
  4. User fixes.
  5. User tries to start the server again. IDE recognizes kai is already started. IDE just sends 'initialize' request with updated params.
  6. Success.
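A rough sketch of that flow on the server side, keeping the process alive so the IDE can retry; the handler shape, helper, and error payload below are illustrative assumptions, not the actual Kai implementation:

```python
# Illustrative only: an initialize handler that reports failure as a
# JSON-RPC error instead of shutting down, so the client can fix its
# params and call initialize again on the same running process.
def build_model_provider(params):
    # Hypothetical stand-in for querying/validating the configured provider.
    if not params.get("model"):
        raise ValueError("no model configured")
    return object()

def handle_initialize(server, params):
    try:
        server.model_provider = build_model_provider(params)
    except Exception as e:
        # -32603 is JSON-RPC's generic "internal error" code; the IDE can
        # surface the message and re-send initialize after the user fixes it.
        return {"error": {"code": -32603,
                          "message": f"model provider setup failed: {e}"}}
    return {"result": {"status": "initialized"}}
```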

Review thread on kai/llm_interfacing/model_provider.py (resolved):

```diff
-if self.cache:
+if self.cache and cache_path_resolver:
```
Contributor:

@JonahSussman you cannot call cache_path() twice; this is a shortcoming of the task-based approach, because one task makes more than one request to the LLM. We keep a count of the requests, and it assumes that when you call cache_path(), you are requesting a new cache, so you get a new cache entry every time.
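A simplified sketch of the counter-based behavior being described (hypothetical, distilled from this comment rather than taken from the actual model_provider.py):

```python
# Hypothetical distillation: every cache_path() call bumps a request
# counter, so calling it twice for the same logical request yields two
# distinct cache entries instead of hitting the same one.
class CachePathResolver:
    def __init__(self, task_id: str):
        self.task_id = task_id
        self.request_count = 0

    def cache_path(self) -> str:
        self.request_count += 1  # each call is assumed to be a new LLM request
        return f"{self.task_id}/request_{self.request_count}.json"

resolver = CachePathResolver("task-42")
print(resolver.cache_path())  # task-42/request_1.json
print(resolver.cache_path())  # task-42/request_2.json -- a new entry!
```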

Contributor:

I should have added a comment cautioning about that.

JonahSussman (Contributor, Author):

All good. I'll revert the changes and add the comment.

Signed-off-by: JonahSussman <[email protected]>
JonahSussman merged commit 7cba2a0 into konveyor:main on Feb 5, 2025.
13 checks passed
Labels: none · Projects: none · 3 participants