Track token usage of iris requests #165
```diff
@@ -62,23 +62,26 @@ def convert_to_open_ai_messages(
     return openai_messages


-def convert_to_iris_message(message: ChatCompletionMessage, usage: Optional[CompletionUsage], model: str) -> PyrisMessage:
+def convert_to_iris_message(
+    message: ChatCompletionMessage, usage: Optional[CompletionUsage], model: str
+) -> PyrisMessage:
```
Comment on lines +72 to +74

💡 Codebase verification · 🔗 Analysis chain

Ensure all calls to `convert_to_iris_message` are updated to the new signature.

Since the function now takes three parameters (`message`, `usage`, and `model`), any call sites still using the old signature need to be updated. You can run the following script to identify calls that may need updating.

Scripts executed for the analysis:

```shell
#!/bin/bash
# Description: Find all usages of `convert_to_iris_message` and check if they match the new signature.
# Expected: All calls should include three parameters: message, usage, and model.
rg --type python $'convert_to_iris_message\s*\(' -A 1
```

Length of output: 85

```shell
#!/bin/bash
# Description: Find all usages of `convert_to_iris_message` in Python files and check if they match the new signature.
# Expected: All calls should include three parameters: message, usage, and model.
rg --glob "*.py" 'convert_to_iris_message\s*\(' -A 1
```

Length of output: 723
```diff
     """
     Convert a ChatCompletionMessage to a PyrisMessage
     """
-    num_input_tokens = getattr(usage, 'prompt_tokens', -1)
-    num_output_tokens = getattr(usage, 'completion_tokens', -1)
+    num_input_tokens = getattr(usage, "prompt_tokens", -1)
+    num_output_tokens = getattr(usage, "completion_tokens", -1)
```

Consider using 0 as the default value for token counts.

The current implementation uses -1 as the default value for `num_input_tokens` and `num_output_tokens` when `usage` is unavailable. A default of 0 would be more intuitive and consistent with other parts of the codebase.

📝 Committable suggestion:

```diff
-    num_input_tokens = getattr(usage, "prompt_tokens", -1)
-    num_output_tokens = getattr(usage, "completion_tokens", -1)
+    num_input_tokens = getattr(usage, "prompt_tokens", 0)
+    num_output_tokens = getattr(usage, "completion_tokens", 0)
```
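The suggestion above hinges on how `getattr` behaves when `usage` is missing. A minimal sketch of that behavior (the `SimpleNamespace` stand-in for OpenAI's `CompletionUsage` is hypothetical; the attribute names come from the diff):

```python
from types import SimpleNamespace

# Hypothetical stand-in for OpenAI's CompletionUsage object.
usage = SimpleNamespace(prompt_tokens=42, completion_tokens=7)

# When the attribute exists, getattr returns its value; when the object is
# None (no usage reported by the API), the default is returned instead.
present = getattr(usage, "prompt_tokens", 0)
missing = getattr(None, "prompt_tokens", 0)        # usage was None
sentinel = getattr(None, "completion_tokens", -1)  # current -1 default

print(present, missing, sentinel)  # 42 0 -1
```

The choice of default only matters on the `None` path, which is exactly the case the reviewer is flagging.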
```diff
     message = PyrisMessage(
         sender=map_str_to_role(message.role),
         contents=[TextMessageContentDTO(textContent=message.content)],
         send_at=datetime.now(),
         num_input_tokens=num_input_tokens,
         num_output_tokens=num_output_tokens,
-        model_info=model
+        model_info=model,
     )
     return message


 class OpenAIChatModel(ChatModel):
     model: str
     api_key: str
```
```diff
@@ -110,7 +113,9 @@ def chat(
                     temperature=arguments.temperature,
                     max_tokens=arguments.max_tokens,
                 )
-                return convert_to_iris_message(response.choices[0].message, response.usage, response.model)
+                return convert_to_iris_message(
+                    response.choices[0].message, response.usage, response.model
+                )
             except Exception as e:
                 wait_time = initial_delay * (backoff_factor**attempt)
                 logging.warning(f"Exception on attempt {attempt + 1}: {e}")
```
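The `except` branch above computes its wait time as `initial_delay * (backoff_factor ** attempt)`, i.e. exponential backoff. A self-contained sketch of that retry pattern (the `call_with_backoff` helper and its parameter names are illustrative, not the PR's actual function):

```python
import logging
import time


def call_with_backoff(fn, max_retries=3, initial_delay=1.0, backoff_factor=2.0):
    """Retry fn(), sleeping initial_delay * backoff_factor**attempt between tries."""
    for attempt in range(max_retries):
        try:
            return fn()
        except Exception as e:
            wait_time = initial_delay * (backoff_factor**attempt)
            logging.warning(f"Exception on attempt {attempt + 1}: {e}")
            if attempt < max_retries - 1:
                time.sleep(wait_time)
    raise RuntimeError("all retries exhausted")
```

With `initial_delay=1.0` and `backoff_factor=2.0`, the waits grow as 1s, 2s, 4s, ... — each retry doubles the delay of the previous one.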
Add error handling for missing keys in `response` dictionary

When accessing `response["message"]`, `response["prompt_eval_count"]`, `response["eval_count"]`, and `response["model"]`, there's a risk of a `KeyError` if any of these keys are missing. It's safer to use the `get` method with default values, or implement error handling, to manage potentially missing data.
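A minimal sketch of the `get`-with-defaults pattern this comment describes (the key names come from the comment; the sample `response` dict and fallback values are assumptions, not the reviewer's actual committable suggestion):

```python
# Hypothetical Ollama-style response dict; in a real response,
# the token-count keys may be absent.
response = {"message": {"role": "assistant", "content": "hi"}, "model": "llama3"}

# dict.get returns the second argument instead of raising KeyError
# when the key is missing.
message = response.get("message", {})
num_input_tokens = response.get("prompt_eval_count", 0)
num_output_tokens = response.get("eval_count", 0)
model = response.get("model", "unknown")

print(num_input_tokens, num_output_tokens, model)  # 0 0 llama3
```

Unlike `response["eval_count"]`, the `get` calls degrade gracefully when a field is not reported, at the cost of silently recording a zero count.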