
fix: forcing response format to be JSON valid #7692

Merged
merged 5 commits into main from OpenAI-LLM-based-eval-force-valid-JSON on May 14, 2024

Conversation

davidsbatista
Contributor

Related Issues

  • LLM-based evaluators not always returning a valid JSON

How did you test it?

  • Run an evaluation over a dataset for 50 queries/questions
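For context: the linked issue reports that the LLM-based evaluators do not always return valid JSON, and the PR title and branch name suggest the fix is to request OpenAI's JSON mode explicitly. Below is a minimal sketch of that idea using the OpenAI Python client; the model name, prompt, and helper function are illustrative assumptions, not the actual change made to llm_evaluator.py.

```python
# Hypothetical sketch: force the chat completion to return valid JSON by
# requesting OpenAI's JSON mode. Not the actual diff in
# haystack/components/evaluators/llm_evaluator.py.
import json

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def evaluate(prompt: str) -> dict:
    """Ask the model for an evaluation and constrain the reply to a JSON object."""
    completion = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative; JSON mode requires a model that supports it
        # response_format={"type": "json_object"} constrains the output to valid JSON;
        # the prompt itself must also mention JSON, or the API rejects the request.
        response_format={"type": "json_object"},
        messages=[
            {"role": "user", "content": prompt + "\nRespond only with a JSON object."}
        ],
    )
    return json.loads(completion.choices[0].message.content)
```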

@github-actions github-actions bot added the type:documentation Improvements on the docs label May 13, 2024
@coveralls
Collaborator

coveralls commented May 13, 2024

Pull Request Test Coverage Report for Build 9077381133

Details

  • 0 of 0 changed or added relevant lines in 0 files are covered.
  • No unchanged relevant lines lost coverage.
  • Overall coverage remained the same at 90.408%

Totals:
  • Change from base Build 9074873557: 0.0%
  • Covered Lines: 6532
  • Relevant Lines: 7225

💛 - Coveralls

@davidsbatista davidsbatista marked this pull request as ready for review May 13, 2024 16:58
@davidsbatista davidsbatista requested review from a team as code owners May 13, 2024 16:58
@davidsbatista davidsbatista requested review from dfokina and shadeMe and removed request for a team May 13, 2024 16:58
Contributor

@shadeMe shadeMe left a comment


Looks like we already do some basic checks to validate the output, so LGTM! Just one minor change.

Review thread on haystack/components/evaluators/llm_evaluator.py (outdated, resolved)
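The "basic checks to validate the output" mentioned in the review presumably boil down to parsing the model reply and failing loudly when it is not the expected JSON. A minimal sketch of such a check follows; the helper name, signature, and error messages are assumptions, not the code actually in llm_evaluator.py.

```python
# Minimal sketch of a basic output-validity check; the helper name and the
# error messages are assumptions, not the implementation in llm_evaluator.py.
import json


def validate_json_output(raw_reply: str, expected_keys: list[str]) -> dict:
    """Parse the LLM reply and verify it is a JSON object with the expected keys."""
    try:
        parsed = json.loads(raw_reply)
    except json.JSONDecodeError as exc:
        raise ValueError(f"LLM reply is not valid JSON: {raw_reply!r}") from exc
    if not isinstance(parsed, dict):
        raise ValueError(f"Expected a JSON object, got {type(parsed).__name__}")
    missing = [key for key in expected_keys if key not in parsed]
    if missing:
        raise ValueError(f"LLM reply is missing expected keys: {missing}")
    return parsed
```

With JSON mode requested and a check like this in place, malformed replies surface as explicit errors instead of silently corrupting evaluation results.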
@davidsbatista davidsbatista enabled auto-merge (squash) May 14, 2024 10:08
@davidsbatista davidsbatista merged commit 75cf35c into main May 14, 2024
25 checks passed
@davidsbatista davidsbatista deleted the OpenAI-LLM-based-eval-force-valid-JSON branch May 14, 2024 10:22
davidsbatista added a commit that referenced this pull request May 16, 2024
* forcing response format to be JSON valid

* adding release notes

* cleaning up

* Update haystack/components/evaluators/llm_evaluator.py

Co-authored-by: Madeesh Kannan <[email protected]>

---------

Co-authored-by: Madeesh Kannan <[email protected]>
Labels
type:documentation Improvements on the docs
Projects
None yet
Development

Successfully merging this pull request may close these issues.

LLM-based evaluators not always returning a valid JSON
3 participants