[Fix] batch_gpt4.py parsing issue with openai's HttpxBinaryResponseContent #161
In the existing code, `batch_results` is processed with `json.loads()`. However, `batch_results` is of type `HttpxBinaryResponseContent` from openai and cannot be decoded directly; we should use `batch_results.content` or `batch_results.text` instead. Moreover, since the results are batched, the payload is in JSONL format, so calling `json.loads()` on the whole text raises `json.decoder.JSONDecodeError: Extra data: line 2 column 1`. It is therefore preferable to split the content by `\n` and decode each line separately.
This commit addresses this issue.
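The fix can be sketched roughly as follows (a minimal sketch, not the exact commit; `parse_batch_results` is a hypothetical helper name, and `raw_text` stands in for `batch_results.text`):

```python
import json

def parse_batch_results(raw_text: str) -> list[dict]:
    """Decode a JSONL payload: split into lines and json.loads() each one."""
    results = []
    for line in raw_text.splitlines():
        if line.strip():  # skip blank lines, e.g. a trailing newline
            results.append(json.loads(line))
    return results
```

Calling `json.loads(raw_text)` on the same input would fail with the "Extra data" error as soon as a second line is present, whereas decoding line by line handles any number of batched results.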
If there are better implementations, please let me know.