Hello, I have read your paper and have done some research building on it. When using your code directly, the default settings of some parameters are not the optimal parameters reported in the paper. Even after adjusting the parameters, I still could not reproduce the results reported in the paper (of course, results vary across machines). However, there is a bug in the code. In re_tamm_main.py, line 127, shouldn't "model.train()" be placed inside the for loop on the next line? After training for an epoch, you call the "evaluate" function to evaluate the result, but "evaluate" sets the model to eval mode (line 199). This causes all subsequent training to run in eval mode. I fixed it and obtained the best results reported in your paper.
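The pattern described above can be sketched as follows. This is a minimal, hypothetical reconstruction (toy model, stub evaluate function), not the actual re_tamm_main.py code; it only illustrates how a single "model.train()" call before the epoch loop leaves the model stuck in eval mode after the first evaluation, and how moving the call inside the loop fixes it.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the real model in re_tamm_main.py.
model = nn.Linear(4, 2)

def evaluate(model):
    # Mirrors the reported behavior at line 199: evaluation
    # switches the model to eval mode and never switches it back.
    model.eval()
    with torch.no_grad():
        pass  # ... validation pass would go here ...

def buggy_training(epochs=3):
    # Buggy pattern (line 127): train() is called once, OUTSIDE the
    # epoch loop, so after the first evaluate() every later epoch
    # trains with the model still in eval mode.
    model.train()
    modes = []
    for _ in range(epochs):
        modes.append(model.training)  # True only on the first epoch
        evaluate(model)               # flips the model to eval mode
    return modes

def fixed_training(epochs=3):
    # Fixed pattern: re-enable train mode at the top of every epoch,
    # undoing the eval() call made by the previous evaluation.
    modes = []
    for _ in range(epochs):
        model.train()
        modes.append(model.training)  # True on every epoch
        evaluate(model)
    return modes
```

Running `buggy_training()` returns `[True, False, False]`, while `fixed_training()` returns `[True, True, True]`, showing that only the fixed version keeps dropout and batch-norm layers in training behavior across epochs.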