
code bug #5

Open
SUIBIANDA opened this issue May 9, 2022 · 1 comment

Comments

@SUIBIANDA

Hello, I read your paper a while ago and have done some research building on it. When using your code directly, the default values of some parameters are not the optimal ones reported in the paper, and even after tuning them I could not reproduce the reported results (of course, results vary across machines). I believe there is a bug in the code: in re_tamm_main.py, line 127, shouldn't "model.train()" be moved inside the for loop on the next line? After each training epoch you call the "evaluate" function, which sets the model to eval mode (line 199), so all subsequent training runs in eval mode. After fixing this, I obtained the best result reported in your paper.
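To make the issue concrete, here is a minimal sketch of the buggy pattern and the fix. The model and the `evaluate` stub below are stand-ins, not the actual TaMM code; the point is only how PyTorch's train/eval flag behaves across epochs.

```python
import torch.nn as nn

# Hypothetical stand-in for the model built in re_tamm_main.py.
model = nn.Linear(4, 2)

def evaluate(model):
    # Mirrors the evaluate() described above: it switches the model to
    # eval mode (line 199 in the report) and leaves it in that state.
    model.eval()
    # ... run validation here ...

# Buggy pattern: model.train() called once, before the epoch loop.
# After the first evaluate(), every later epoch trains in eval mode
# (dropout disabled, batch-norm statistics frozen).

# Fixed pattern: re-enable training mode at the start of each epoch.
for epoch in range(3):
    model.train()          # must be inside the loop, not before it
    assert model.training  # training mode is active for this epoch
    # ... one epoch of training here ...
    evaluate(model)        # flips the model back to eval mode
```

With `model.train()` outside the loop, the assertion above would fail from the second epoch onward, which is exactly the silent degradation described in this issue.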

@yuanheTian
Contributor

Thanks for pointing it out!

The code is fixed accordingly.
