Hi,
I have recently been studying this work and am very interested in it. However, I noticed that for the code clone detection experiments, the paper states that, except for GraphCodeBERT's experiments on POJ-104, you took the results directly from CodeXGLUE.
As we know from microsoft/CodeXGLUE#63, however, the CodeXGLUE authors found a problem in the code they originally used to calculate those results and corrected it in their latest paper. The results quoted in ContraBERT appear to be the ones from before that update.
So I would like to know:
1. When you ran the GraphCodeBERT experiments, did you use the corrected calculation method to obtain your results?
2. Did you process the comment annotations in the dataset, and is the metric you report the Eval MAP or the Test MAP? The paper reports a MAP of 90.46, but I obtained the following results:
| Eval MAP | Test MAP | Comment |
| --- | --- | --- |
| 0.8451 | 0.8795 | with comment |
| 0.8463 | 0.8926 | without comment |
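For reference, below is a minimal sketch of how I computed MAP, assuming the standard POJ-104 retrieval setup (each snippet queries all others, and snippets sharing its problem label count as relevant). This is my own reimplementation, not the CodeXGLUE evaluation script, so please let me know if your calculation differs:

```python
# Minimal MAP sketch for POJ-104-style retrieval (my own reimplementation,
# not the CodeXGLUE evaluator). Assumes each snippet has an embedding and a
# problem label; snippets with the same label as the query are relevant.
import numpy as np

def mean_average_precision(embeddings: np.ndarray, labels: np.ndarray) -> float:
    """embeddings: (n, d) snippet vectors; labels: (n,) problem ids."""
    # Cosine similarity between every pair of snippets.
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = normed @ normed.T
    np.fill_diagonal(sims, -np.inf)  # a snippet never retrieves itself

    ap_scores = []
    for i in range(len(labels)):
        ranking = np.argsort(-sims[i])           # best match first
        relevant = labels[ranking] == labels[i]  # same problem id
        hits = np.cumsum(relevant)
        precision_at_k = hits / (np.arange(len(relevant)) + 1)
        # Average precision: mean of precision at each relevant rank.
        ap_scores.append(precision_at_k[relevant].mean())
    return float(np.mean(ap_scores))
```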
I am very much looking forward to your answers!