Hey, is there any particular reasoning or motivation behind choosing BLEU as the evaluation metric? Was there a reason not to use exact match, or to leverage the fact that you'll be returning function signatures?
Hi there, thanks for your interest in the project. We compute the BLEU score between the retrieved needle function and the target needle function, and report accuracy at several BLEU thresholds (as you can see on our leaderboard: https://evalplus.github.io/repoqa.html). A score of 1.0 corresponds to an exact match between the retrieved and target functions. This makes the evaluation more comprehensive than only checking for an exact match or comparing just the function signature.
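To illustrate the idea, here is a minimal sketch (not RepoQA's exact implementation) of scoring a retrieved function against the ground-truth needle with BLEU and bucketing the result against a set of thresholds. It assumes plain whitespace tokenization and NLTK's `sentence_bleu`; the actual pipeline may tokenize code and smooth differently, and the threshold values below are placeholders.

```python
# Hedged sketch: BLEU-based needle-function scoring with pass thresholds.
# Assumptions (not from the original thread): whitespace tokenization,
# NLTK smoothing method1, and the example thresholds (0.2, 0.5, 0.8, 1.0).
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction


def bleu_score(retrieved: str, target: str) -> float:
    """Return a BLEU score in [0, 1] between two code snippets."""
    smooth = SmoothingFunction().method1  # avoid zero scores on short snippets
    return sentence_bleu(
        [target.split()], retrieved.split(), smoothing_function=smooth
    )


def pass_at_thresholds(
    retrieved: str, target: str, thresholds=(0.2, 0.5, 0.8, 1.0)
) -> dict:
    """Report whether the retrieval counts as correct at each BLEU threshold."""
    score = bleu_score(retrieved, target)
    return {t: score >= t for t in thresholds}


if __name__ == "__main__":
    target = "def add(a, b):\n    return a + b"
    retrieved_exact = "def add(a, b):\n    return a + b"
    retrieved_close = "def add(a, b):\n    return b + a"

    # Exact match: BLEU == 1.0, so it passes every threshold.
    print(pass_at_thresholds(retrieved_exact, target))
    # Near match: BLEU is roughly 0.47 here, so it passes only the 0.2 threshold.
    print(pass_at_thresholds(retrieved_close, target))
```

The usage example shows why thresholding is informative: an almost-correct retrieval still earns partial credit at the looser thresholds, while the strictest threshold (1.0) reduces to an exact-match check.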