llm metadata fix #1836
Conversation
Walkthrough

The changes modify `fetch_llm_metadata` in `kairon/shared/llm/processor.py` so that it builds a `final_metadata` dictionary containing only the LLM types whose models list is non-empty, and returns `final_metadata` instead of the full `metadata`. The integration test in `tests/integration_test/services_test.py` is updated accordingly to assert that "anthropic" is absent from the returned metadata when it has no models configured.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client as Caller
    participant LLMProcessor as LLMProcessor
    Client->>LLMProcessor: Call fetch_llm_metadata(bot)
    LLMProcessor->>LLMProcessor: Initialize metadata & final_metadata
    LLMProcessor->>LLMProcessor: For each LLM type, check if models list is non-empty
    LLMProcessor->>LLMProcessor: Add valid LLM type info to final_metadata
    LLMProcessor-->>Client: Return final_metadata
```
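A minimal sketch of the flow the diagram describes, assuming the metadata entries follow the JSON-schema-like shape the integration test asserts against (`properties.model.enum`); `fetch_all_llm_metadata` is a hypothetical stand-in for however the unfiltered metadata is actually assembled, not the project's real API:

```python
from typing import Any, Dict


def fetch_all_llm_metadata(bot: str) -> Dict[str, Any]:
    """Hypothetical helper: returns metadata for every configured llm_type."""
    return {
        "openai": {"properties": {"model": {"enum": ["model-a"]}}},
        "anthropic": {"properties": {"model": {"enum": []}}},
    }


def fetch_llm_metadata(bot: str) -> Dict[str, Any]:
    """
    Fetches the llm_type and corresponding models for a particular bot.

    :param bot: bot id
    :return: dictionary where each key is a llm_type with a non-empty
             models list and the value is the LLM metadata.
    """
    metadata = fetch_all_llm_metadata(bot)
    final_metadata = {}
    for llm_type, llm_info in metadata.items():
        # Keep only LLM types that actually have models configured.
        if llm_info.get("properties", {}).get("model", {}).get("enum"):
            final_metadata[llm_type] = llm_info
    return final_metadata
```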
Actionable comments posted: 0
🧹 Nitpick comments (2)
tests/integration_test/services_test.py (1)

1596-1598: Remove commented-out code.

The new assertion correctly verifies that "anthropic" is not included in the metadata, which aligns with the changes made to `fetch_llm_metadata`. However, the commented-out assertions should be removed rather than left in the codebase.

```diff
 assert "anthropic" not in actual["data"]
-#assert "model" in actual["data"]["anthropic"]["properties"]
-#assert actual["data"]["anthropic"]["properties"]["model"]["enum"] == []
```

kairon/shared/llm/processor.py (1)
412-417: Update the function's docstring to reflect the new behavior.

The docstring should be updated to clarify that the function now only returns metadata for LLM types that have associated models, not all LLM types.

```diff
 @staticmethod
 def fetch_llm_metadata(bot: str):
     """
     Fetches the llm_type and corresponding models for a particular bot.
     :param bot: bot id
-    :return: dictionary where each key is a llm_type and the value is a list of models.
+    :return: dictionary where each key is a llm_type with a non-empty models list and the value is the LLM metadata.
     """
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- kairon/shared/llm/processor.py (2 hunks)
- tests/integration_test/services_test.py (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (2)
- GitHub Check: Python CI
- GitHub Check: Analyze (python)
🔇 Additional comments (2)

kairon/shared/llm/processor.py (2)

420-434: Filtering out LLM types with empty models - good optimization!

This change ensures that only LLM types with available models are included in the returned metadata, avoiding unnecessary data in the response.
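For illustration, assuming the same metadata shape the integration test asserts against (`properties.model.enum`; the model names below are placeholders), the filtering behaves like this:

```python
metadata = {
    "openai": {"properties": {"model": {"enum": ["model-a", "model-b"]}}},
    "anthropic": {"properties": {"model": {"enum": []}}},  # no models configured
}

# Keep only LLM types whose models list is non-empty.
final_metadata = {
    llm_type: info
    for llm_type, info in metadata.items()
    if info["properties"]["model"]["enum"]
}

assert "anthropic" not in final_metadata  # mirrors the updated integration test
```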
435-435: Return value change looks good.

The change to return `final_metadata` instead of `metadata` is consistent with the optimization to only include LLM types with models.
Approved