
added support for custom body text for whatsapp converter button #1839

Merged
merged 3 commits into digiteinfotech:master on Mar 5, 2025

Conversation

@hasinaxp (Contributor) commented Mar 3, 2025

Summary by CodeRabbit

  • New Features

    • Modified WhatsApp message formatting so that when a label is available, its text is correctly displayed. This improvement ensures that responses reflect the intended message content more accurately.
    • Enhanced response structure in the prediction process to include additional context information alongside the generated answer.
  • Bug Fixes

    • Improved validation of the llm_response_log structure in test cases to ensure proper handling of the context.
  • Refactor

    • Cleaned up unused import statements in unit tests to streamline the codebase.

coderabbitai bot commented Mar 3, 2025

Walkthrough

The changes introduce a conditional check within the button_transformer method of the WhatsappResponseConverter class in kairon/chat/converters/channels/whatsapp.py. The method now checks if an item's type equals 'label' and, if true, updates the body_msg variable with a dictionary containing the item’s text. Additionally, the predict method in the LLMProcessor class in kairon/shared/llm/processor.py has been modified to include a new key, "context", in the response dictionary. The test files have been updated to handle the new response structure and clean up unused imports. No modifications were made to the exported or public entity declarations.
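The label handling described in the walkthrough can be sketched as a minimal, self-contained simplification. The function name below is illustrative only; the real logic lives inside the button_transformer method of WhatsappResponseConverter and additionally loads a channel template and builds the button payload:

```python
def resolve_body_msg(message_items, body_default):
    """Return the body-text dict, letting a 'label' item override the default.

    Simplified stand-in for the new conditional in button_transformer:
    each 'label' item encountered overwrites body_msg, so with multiple
    labels the last one wins.
    """
    body_msg = {"text": body_default}
    for item in message_items:
        if item.get("type") == "label":
            body_msg = {"text": item.get("text")}
    return body_msg
```

For example, `resolve_body_msg([{"type": "label", "text": "Pick one:"}], "default")` yields `{"text": "Pick one:"}`, while a message with no label item falls back to the default body text.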

Changes

File(s) Change Summary
kairon/chat/converters/channels/whatsapp.py Added a conditional check in the button_transformer method to update body_msg when item type is 'label'.
kairon/shared/llm/processor.py Modified the predict method to include a new key, "context", in the response dictionary.
tests/integration_test/action_service_test.py Added checks for llm_response_log in test cases to ensure proper handling of context.
tests/unit_test/llm_test.py Removed unused import statements to clean up the test file.

Possibly related PRs

  • Format text converter channels #1709: Both PRs modify message-handling logic in the WhatsappResponseConverter class; this PR changes the button_transformer method, while #1709 added the paragraph_transformer method.

Suggested reviewers

  • hiteshghuge

Poem

I'm a bunny with a curious mind,
Hopping through code where fixes we find.
A golden little check amid the flow,
Making message crafting beautifully glow.
Cheers to changes, so crisp and neat,
With every hop, our code feels complete!
🐇✨

coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (4)
kairon/chat/converters/channels/whatsapp.py (4)

74-75: Handle potential missing text in label items

The added code correctly allows for custom body text for WhatsApp converter buttons. However, it doesn't handle the case where item.get("text") might return None.

I recommend adding a check to ensure text exists before using it:

if item.get("type") == 'label':
-    body_msg = {"text": item.get("text")}
+    label_text = item.get("text")
+    if label_text is not None:
+        body_msg = {"text": label_text}

74-75: Add documentation about the new label feature

While the implementation is straightforward, there's no documentation or comments explaining this new feature, its purpose, or how to structure inputs to take advantage of it.

I recommend adding a comment above or within the method to explain how the label type works:

def button_transformer(self, message):
    try:
        message_template = ElementTransformerOps.getChannelConfig(self.channel, self.message_type)
        button_json_temp = json.loads(message_template)
        jsoniterator = ElementTransformerOps.json_generator(message)
        buttons = {"buttons": []}
        body_default = ElementTransformerOps.getChannelConfig(self.channel, "body_message")
        body_msg = {"text": body_default}
+        # Process message items, including custom labels for body text and button elements
        for item in jsoniterator:

74-83: Consider refactoring repeated conditional logic

The implementation pattern for 'label' and 'button' types follows a similar structure - checking for a specific type and then updating a related variable. As more item types are supported, this pattern might become unwieldy.

For better maintainability, consider using a strategy pattern or a dispatch dictionary:

def button_transformer(self, message):
    try:
        message_template = ElementTransformerOps.getChannelConfig(self.channel, self.message_type)
        button_json_temp = json.loads(message_template)
        jsoniterator = ElementTransformerOps.json_generator(message)
        buttons = {"buttons": []}
        body_default = ElementTransformerOps.getChannelConfig(self.channel, "body_message")
        body_msg = {"text": body_default}
+        # Process each item based on its type
        for item in jsoniterator:
-            if item.get("type") == 'label':
-                body_msg = {"text": item.get("text")}
-            if item.get("type") == ElementTypes.BUTTON.value:
-                title = ElementTransformerOps.json_generator(item.get("children"))
-                for titletext in title:
-                    button_text = titletext.get("text")
-                btn_body = {}
-                btn_body.update({"type": "reply"})
-                btn_body.update({"reply": {"id": item.get("value"), "title": button_text}})
-                buttons["buttons"].append(btn_body)
+            item_type = item.get("type")
+            if item_type == 'label':
+                label_text = item.get("text")
+                if label_text is not None:
+                    body_msg = {"text": label_text}
+            elif item_type == ElementTypes.BUTTON.value:
+                title = ElementTransformerOps.json_generator(item.get("children"))
+                button_text = next((titletext.get("text") for titletext in title), None)
+                if button_text:
+                    btn_body = {
+                        "type": "reply",
+                        "reply": {"id": item.get("value"), "title": button_text}
+                    }
+                    buttons["buttons"].append(btn_body)

74-75: Consider implementing unit tests for the new feature

The PR adds new functionality (customizable button body text) but doesn't appear to include tests.

I recommend adding unit tests that verify:

  1. Default body message is used when no label is present
  2. Custom label text is used when available
  3. The last label is used if multiple labels are present

Would you like me to help draft these tests?

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 56af53c and 566e338.

📒 Files selected for processing (1)
  • kairon/chat/converters/channels/whatsapp.py (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (3)
  • GitHub Check: Codacy Static Code Analysis
  • GitHub Check: Python CI
  • GitHub Check: Analyze (python)
🔇 Additional comments (1)
kairon/chat/converters/channels/whatsapp.py (1)

74-75:

❓ Verification inconclusive

Consider handling multiple label items

Currently, if multiple items have type 'label', only the last one's text will be used as the body message since each occurrence will overwrite the previous body_msg value.

Is this the intended behavior? If not, you might want to:

  1. Use only the first label found
  2. Combine multiple labels
  3. Add documentation specifying that only the last label is used

🏁 Script executed:

#!/bin/bash
# Check if there are any usage examples in the codebase that might help understand if 
# multiple labels in one message is a valid scenario

echo "Searching for examples of WhatsApp button message construction..."
rg -A 3 -B 3 "type.*label" --glob "*.py" --glob "*.json" 
echo "Searching for test cases related to WhatsappResponseConverter button_transformer..."
rg -A 3 -B 3 "WhatsappResponseConverter.*button_transformer" --glob "*test*.py"

Length of output: 1205


Multiple label handling in WhatsApp conversion logic
The code currently updates body_msg each time an item with type 'label' is encountered, meaning if there are multiple such items in the message, only the last one will be kept. Our search did not reveal any test cases or usage examples that clarify whether multiple labels should be combined or if only one label is expected. Please verify the intended design. Consider one of the following actions:

  • If only one label is expected:

    • Document the behavior clearly to indicate that only the last label is used, or
    • Update the logic to pick the first label and ignore subsequent ones if that meets requirements.
  • If multiple labels might occur:

    • Implement logic to combine them, or
    • Add test cases illustrating the desired behavior.

File: kairon/chat/converters/channels/whatsapp.py (Lines 74-75)
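If "first label wins" turns out to be the desired policy, one possible sketch (illustrative names, not the project's actual code) is to return as soon as a label is found, so later labels are ignored:

```python
def first_label_body(items, default_text):
    """Use the first 'label' item's text as the body; ignore later labels."""
    for item in items:
        if item.get("type") == "label":
            return {"text": item.get("text")}
    # No label present: fall back to the channel's default body message
    return {"text": default_text}
```

The current implementation instead keeps overwriting, so the last label wins; whichever policy is chosen should be documented and covered by a test.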

coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (1)
kairon/shared/llm/processor.py (1)

133-133: Consider potential API compatibility concerns.

You've added a new 'context' key to the response dictionary in the success case, which exposes the context data to consumers of this API. This change has two potential issues:

  1. API consumers might not be expecting this additional field, which could cause compatibility issues.
  2. The error response on line 141 doesn't include a 'context' key, creating an inconsistent response structure between success and error cases.

I'd recommend making one of these changes:

- response = {"content": answer}
+ response = {"content": answer, "context": context}

And update the error response for consistency:

- response = {"is_failure": True, "exception": str(e), "content": None}
+ response = {"is_failure": True, "exception": str(e), "content": None, "context": None}

Or document this API change clearly if it's an intentional feature to support the custom body text for WhatsApp converter buttons.
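The suggested consistent shape can be illustrated in isolation (a minimal sketch only; the real predict method in LLMProcessor does far more than build this dict):

```python
def build_response(answer=None, context=None, error=None):
    """Build a predict-style response dict.

    Both branches carry a 'context' key, so consumers can always read
    response.get('context') without special-casing the error path.
    """
    if error is not None:
        return {"is_failure": True, "exception": str(error),
                "content": None, "context": None}
    return {"content": answer, "context": context}
```

With this shape, a consumer that indexes `response["context"]` behaves identically on success and failure, which is the compatibility point the comment raises.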

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 566e338 and da5547f.

📒 Files selected for processing (1)
  • kairon/shared/llm/processor.py (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (2)
  • GitHub Check: Codacy Static Code Analysis
  • GitHub Check: Python CI
🔇 Additional comments (1)
kairon/shared/llm/processor.py (1)

119-145:

❓ Verification inconclusive

Verify the impact on API consumers.

The addition of the 'context' field to the response structure modifies the API contract of the predict method. While this change supports the WhatsApp converter button functionality, it's important to ensure all consumers of this API are prepared for this change.


🏁 Script executed:

#!/bin/bash
# Find all places where the predict method is called to verify they can handle the new response structure

echo "Searching for calls to the predict method..."
rg -A 5 "predict\s*\(" --type py | grep -v "def predict"

echo "Searching for places where the response from predict is processed..."
rg -A 5 "response.*predict" --type py

Length of output: 23692


Action: Confirm API consumer compatibility with the new "context" field in predict responses.

After verifying the call sites and tests, we can see that the majority of consumers (including those in unit tests, prompt handling, and message processing) retrieve the response from the predict method by accessing keys like "content" or checking for failure states. These components appear to handle the returned dict without adverse effects from the additional "context" field. However, since this change alters the API contract, please ensure that any external or downstream consumers (including those not covered by our tests) are aware of and prepared for the new field.

  • Files to review:
    • kairon/actions/definitions/prompt.py – Verify that only the necessary parts of the response are used.
    • kairon/chat/agent/message_processor.py – Confirm that the response is processed in a way that accommodates extra fields.
    • Unit tests in tests/unit_test/llm_test.py already validate the handling of the response structure.

Keep a close eye on any external integrations that might depend on the prior response structure.

coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (3)
tests/integration_test/action_service_test.py (3)

13626-13627: Consider safer context removal and add explanatory comment.

The code correctly removes the 'context' key from the llm_response_log, which aligns with the new structure returned by the LLMProcessor. However, consider:

  1. Using the safer pop('context', None) pattern to prevent potential KeyError if the key doesn't exist
  2. Adding a small comment explaining why context needs to be removed (likely for test assertion compatibility)
if event.get('llm_response_log'):
-    event['llm_response_log'].pop('context')
+    # Remove context to maintain compatibility with test assertions
+    event['llm_response_log'].pop('context', None)
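The difference between the two pop forms can be seen in a standalone illustration:

```python
# A log dict where 'context' happens to be absent
log = {"content": "an answer"}

# Safe form: returns the default instead of raising when the key is missing
assert log.pop("context", None) is None

# Plain pop raises KeyError for a missing key
try:
    log.pop("context")
    raised = False
except KeyError:
    raised = True
assert raised
```

Since the surrounding code already guards with `if event.get('llm_response_log')`, the key is normally present, but the two-argument form makes the test robust if the log shape changes again.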

13759-13760: Same suggestion as above: use the safer pop('context', None) pattern here as well, and add a brief comment explaining why context is removed.

13913-13914: Same suggestion as above: use the safer pop('context', None) pattern here as well, and add a brief comment explaining why context is removed.
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between da5547f and 2a628dd.

📒 Files selected for processing (2)
  • tests/integration_test/action_service_test.py (3 hunks)
  • tests/unit_test/llm_test.py (0 hunks)
💤 Files with no reviewable changes (1)
  • tests/unit_test/llm_test.py
⏰ Context from checks skipped due to timeout of 90000ms (3)
  • GitHub Check: Codacy Static Code Analysis
  • GitHub Check: Analyze (python)
  • GitHub Check: Python CI
🔇 Additional comments (2)
tests/integration_test/action_service_test.py (2)

13909-13909: Good validation of context presence.

The assertion confirms that the new 'context' field is properly populated in the response, which is crucial for validating the feature's implementation.


13909-13909:

✅ Verification successful

Verify all related tests handle the new context field consistently.

The changes ensure this test can handle the new 'context' field in the LLM response. The assertion confirms the field has content before removing it for comparison.

Also applies to: 13913-13914


🏁 Script executed:

#!/bin/bash
# Check if other test files might need similar updates to handle the new context field

echo "Searching for other test files that might need similar updates..."
rg -l "llm_response_log" --type py --glob "tests/**"

echo "Checking for other instances where context might need to be handled:"
rg "llm_response_log" --type py --glob "tests/**" -A 5 -B 5

Length of output: 6919


Consistent Handling of 'context' Field Verified

The search confirms that the only tests involving the llm_response_log (and its new context field) are in tests/integration_test/action_service_test.py. In this file, the assertion at line 13909 correctly verifies that context is non-empty before it is removed later (lines 13913–13914). No additional test files require similar updates.

sfahad1414 self-requested a review March 5, 2025 04:45
sfahad1414 merged commit 9b54712 into digiteinfotech:master on Mar 5, 2025
7 of 8 checks passed