
AveragePrecision in recsys.evaluation.ranking #29

Open
lisiqi opened this issue Jul 18, 2017 · 1 comment


lisiqi commented Jul 18, 2017

Hi Oscar,
The calculation of AveragePrecision in recsys.evaluation.ranking is not correct: the returned value should be sum(p_at_k) divided by the number of relevant items, rather than sum(p_at_k) divided by the number of hits.

The corresponding part of the documentation (the Evaluation section) also needs to be updated.

```python
from recsys.evaluation.ranking import AveragePrecision

ap = AveragePrecision()

GT = [1, 2, 3, 4, 5]
q = [1, 3, 5]
ap.load(GT, q)
ap.compute()  # returns 1.0, but should return 0.6

GT = [1, 2, 3, 4, 5]
q = [99, 3, 5]
ap.load(GT, q)
ap.compute()  # returns 0.5833335, but should return 0.23333
```
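
For reference, here is a minimal standalone sketch of the intended computation (independent of python-recsys; the function name is mine) that reproduces the expected values above:

```python
def average_precision(ground_truth, ranked_list):
    """Sum precision@k at each hit, then divide by the number of
    relevant items (len(ground_truth)), not by the number of hits."""
    hits = 0
    p_at_k = []
    for k, item in enumerate(ranked_list, start=1):
        if item in ground_truth:
            hits += 1
            p_at_k.append(hits / float(k))  # precision at this cutoff
    if not ground_truth:
        return 0.0
    return sum(p_at_k) / len(ground_truth)

average_precision([1, 2, 3, 4, 5], [1, 3, 5])   # 0.6
average_precision([1, 2, 3, 4, 5], [99, 3, 5])  # 0.23333...
```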

Kind Regards,
Siqi

ocelma (Owner) commented Aug 25, 2017

Hi Siqi,

Thanks!
I need to look into the code. It's been a while...

I think instead of (line 148):

```python
return sum(p_at_k) / hits
```

I should do something like:

```python
return sum(p_at_k) / len(self._ground_truth)
```
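
A quick hand check of that change against the two examples above (plain arithmetic, no library calls) gives the expected values:

```python
GT = [1, 2, 3, 4, 5]

# precision@k at each hit position
p_at_k_1 = [1 / 1.0, 2 / 2.0, 3 / 3.0]  # q = [1, 3, 5]: hits at ranks 1, 2, 3
p_at_k_2 = [1 / 2.0, 2 / 3.0]           # q = [99, 3, 5]: hits at ranks 2, 3

print(sum(p_at_k_1) / len(GT))  # 0.6
print(sum(p_at_k_2) / len(GT))  # 0.23333...
```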
