A more general solution to model answer extraction instead of output_regex #358

Open · sadra-barikbin wants to merge 4 commits into main
Conversation

sadra-barikbin (Contributor) commented on Oct 11, 2024:

Hi there!

Here is the PR for model answer extraction. Currently we have output_regex, and this PR attempts to replace it with a more general solution.

By the way, the current output_regex seems to be broken, as it is not fed into apply_generative_metric in Pipeline._compute_metrics.

Fixes #360
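
To make the proposal concrete, here is a minimal sketch of what a more general extraction hook could look like. The `answer_extractor` field and the helper names below are hypothetical illustrations of the idea, not lighteval's actual API.

```python
import re
from typing import Callable

# Hypothetical: a task carries an arbitrary extraction callable instead of a
# single output_regex string.
Extractor = Callable[[str], str]

def regex_extractor(pattern: str) -> Extractor:
    """Roughly what today's output_regex behaviour looks like as a callable."""
    compiled = re.compile(pattern)

    def extract(prediction: str) -> str:
        match = compiled.search(prediction)
        if match is None:
            return prediction
        return match.group(1) if match.groups() else match.group(0)

    return extract

def last_non_empty_line(prediction: str) -> str:
    """A non-regex strategy a single regex string cannot express cleanly:
    keep only the final non-empty line of the generation."""
    lines = [line.strip() for line in prediction.splitlines() if line.strip()]
    return lines[-1] if lines else prediction

# The metric pipeline would then call task.answer_extractor(prediction)
# (hypothetical field) instead of applying output_regex directly.
extract_answer = regex_extractor(r"answer is (\w+)")
assert extract_answer("The answer is 42.") == "42"
```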

@@ -504,7 +507,19 @@ def get_metric_method_from_category(self, metric_category):
         if not self.has_metric_category[metric_category]:
             raise ValueError(f"Requested a metric category {metric_category} absent from the task list.")

-        return LightevalTask._get_metric_method_from_category(metric_category)
+        metric_method = LightevalTask._get_metric_method_from_category(metric_category)
+        # Bad hack. I had no other way.
sadra-barikbin (Contributor, Author) commented on the change above:

As a workaround, I suggest making task an argument of the apply_*_metrics functions; a rough sketch follows.
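
For illustration only, this is one way threading the task through the metric application step could look. The signature and the `answer_extractor` attribute are assumptions for the sketch, not apply_generative_metric's actual parameters in lighteval.

```python
from typing import Any, Optional

def apply_generative_metric(results, formatted_doc, metrics, task: Optional[Any] = None):
    # Hypothetical sketch: with the task in scope, answer extraction no longer
    # needs a separate output_regex plumbing path through Pipeline._compute_metrics.
    predictions = [result.text for result in results]
    extractor = getattr(task, "answer_extractor", None) if task is not None else None
    if extractor is not None:
        predictions = [extractor(p) for p in predictions]
    outputs = {}
    for metric in metrics:
        outputs.update(metric.compute(predictions=predictions, formatted_doc=formatted_doc))
    return outputs
```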

clefourrier (Member) commented:
Hi Sadra,
Sorry if I missed it, but did you discuss this PR in an issue before opening it?

Successfully merging this pull request may close the following issue: [FT] More general approach than output_regex to model answer extraction (#360).

3 participants