LLMBasedFaithfulness does not work when using LLMs other than OpenAI #69
Comments
Great catch, thanks @kelvinchanwh! @ellipsis-dev, can you fix this and submit a PR?
The code change should be to the following instead, since the model is stored in self._llm (line 39):
kelvinchanwh added a commit to kelvinchanwh/continuous-eval that referenced this issue on Jun 11, 2024: "Fix issue relari-ai#69 where LLMs other than OpenAI APIs are not being called" (Merged)
In continuous_eval/metrics/generation/text/llm_based.py, line 39, the full model parameters are not passed to the LLMBasedContextCoverage class. As a result, the code defaults to calling the OpenAI API even when another LLM is passed as self.model.
Changes required
AS-IS:
context_coverage = LLMBasedContextCoverage(use_few_shot=self.use_few_shot)
TO-BE: