Hi, I am currently working with the paper that you published alongside this repository.
By looking at your code, I realized that you treat the search methods Win and Dynp differently from Binseg. For Win and Dynp, you simply scale the scores and aggregate them. For Binseg, on the other hand, you first take the opposite of the scores before scaling and aggregating them, and then take the opposite of the output again. This can be seen here in binsegensembling.py:
gain, bkp = max(np.array([(-1) * selected_aggregation(self.ensembling)(np.array(scores) * (-1)), np.array(gain_list)[:, 1]]).T, key=lambda x: x[0])
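Unpacking that one-liner: the per-member gains are negated so the aggregation operates on cost-like values, aggregated across the ensemble, negated back into a gain, and the candidate breakpoint with the largest aggregated gain is kept. Here is a self-contained paraphrase, under the assumption that scores has shape (n_members, n_candidates) and that gain_list stores the candidate breakpoints in its second column (aggregate stands in for selected_aggregation(self.ensembling); all names are illustrative, not taken from the repository):

```python
import numpy as np

def pick_breakpoint(scores, gain_list, aggregate=np.min):
    """Illustrative paraphrase of the Binseg one-liner above."""
    costs = np.array(scores) * (-1)             # 1. gains -> cost-like values
    agg_gain = (-1) * aggregate(costs, axis=0)  # 2. aggregate members, flip back to a gain
    bkps = np.array(gain_list)[:, 1]            # 3. candidate breakpoints
    return max(zip(agg_gain, bkps), key=lambda x: x[0])  # 4. best aggregated gain wins

# usage: gain, bkp = pick_breakpoint(scores, gain_list, aggregate=np.min)
```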
This is understandable, since the Binseg score is a gain while the Dynp score is a cost: by taking the opposite, you are dealing with the same kind of score in both cases. My problem is that the Win score is also a gain, yet the same "trick" as for Binseg is not applied. Indeed, in windowensembling.py, the opposite of the score is not taken.
I would personally suggest applying the same trick to Win as to Binseg. By not doing the "trick", it is as if you had used a different aggregation function: for example, with the min aggregation function, it is as if you had used the max aggregation function.
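To make that equivalence concrete, here is a small self-contained check in plain NumPy (not code from the repository): on gain-type scores, min under the Binseg convention -f(-g) behaves as max, so applying min directly, as Win does, corresponds to what the trick convention would call max.

```python
import numpy as np

gains = np.array([3.0, 1.0, 2.0])  # gain-type scores from three ensemble members

trick_min = -np.min(-gains)  # Binseg convention with f = min
plain_min = np.min(gains)    # Win convention with f = min

assert trick_min == np.max(gains)    # min under the trick behaves as max on gains
assert plain_min == -np.max(-gains)  # min without the trick == max under the trick
```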
I was curious to see what we could get with this idea, so I implemented the "trick" for the search method Win as well. I tried the max aggregation function for the three search methods. The results are available in the forked repository.
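For reference, here is a minimal sketch of what the Win aggregation could look like with the trick applied. It mirrors the Binseg snippet above; the function name and signature are my own illustration, not the actual windowensembling.py code.

```python
import numpy as np

def aggregate_window_scores(scores, aggregate=np.max):
    """Hypothetical Win aggregation with the Binseg 'trick':
    negate the gain-type window scores, aggregate across
    ensemble members, then negate back into a gain."""
    return (-1) * aggregate(np.array(scores) * (-1), axis=0)

# Two ensemble members, three window positions:
scores = [[0.2, 1.5, 0.7],
          [0.4, 1.1, 0.9]]
print(aggregate_window_scores(scores, np.max))  # [0.2 1.1 0.7] == element-wise min of the gains
```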
PS: I had a lot of fun working with your paper. Thanks a lot for your work and your interesting ideas :)