I ran your implementation with a slightly different configuration (a different dataset). With num_nodes: 87 and enc_input_dim: 1, your implementation has 223169 parameters, while the original implementation with the same configuration has 371392. Do you have any idea what the reason could be?
Hi @razvanc92,
The inconsistency occurred because I only implemented the filter_type="laplacian" case, while the original implementation handles all three cases: "laplacian", "random_walk", and "dual_random_walk". In the "laplacian" case, the length of self._support is 1, whereas in the "dual_random_walk" case it is 2, which changes the size of the diffusion-convolution weight matrices and therefore the total parameter count. A sketch of the difference is shown below.
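For reference, here is a minimal sketch (not the repo's actual code; the helper names are illustrative and lambda_max is approximated as 2) of how the support list depends on filter_type. Since each diffusion step contributes one weight term per support, roughly doubling the support length nearly doubles the diffusion-related parameters:

```python
import numpy as np

def build_supports(adj_mx: np.ndarray, filter_type: str):
    """Illustrative sketch: the length of the returned list drives the
    number of learnable weights in each diffusion-convolution layer."""
    def random_walk(adj):
        # D^-1 A: row-normalized transition matrix.
        d_inv = 1.0 / np.maximum(adj.sum(axis=1), 1e-10)
        return d_inv[:, None] * adj

    def scaled_laplacian(adj):
        # Symmetric normalized Laplacian, rescaled to roughly [-1, 1]
        # via 2L / lambda_max - I, assuming lambda_max ~= 2.
        d_inv_sqrt = 1.0 / np.sqrt(np.maximum(adj.sum(axis=1), 1e-10))
        lap = np.eye(adj.shape[0]) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
        return lap - np.eye(adj.shape[0])

    if filter_type == "laplacian":
        return [scaled_laplacian(adj_mx)]                           # length 1
    elif filter_type == "random_walk":
        return [random_walk(adj_mx).T]                              # length 1
    elif filter_type == "dual_random_walk":
        return [random_walk(adj_mx).T, random_walk(adj_mx.T).T]     # length 2
    raise ValueError(f"unknown filter_type: {filter_type}")
```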
Thank you for pointing this issue out. It actually took me a while to track down the cause. I'll make an update to address it.
Hi, @razvanc92
I have modified the code. Now, when testing with the METR-LA dataset, the model produces exactly the same number of parameters as the original implementation.
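For anyone who wants to reproduce the comparison, a common way to count trainable parameters in PyTorch:

```python
import torch.nn as nn

def count_parameters(model: nn.Module) -> int:
    # Sum the element counts of all trainable tensors.
    return sum(p.numel() for p in model.parameters() if p.requires_grad)
```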
@xlwang233 Thank you, I'll take a look ASAP. There are also a few other things that could be improved or fixed. At the moment you cannot run the code on CPU, since there are some hardcoded .cuda() calls; these could be .to(device) instead. Also, the dataset used by the data loader in train.py is hardcoded.
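A minimal sketch of both suggestions (the --dataset_dir flag and the nn.Linear placeholder are illustrative, not the repo's actual interface):

```python
import argparse

import torch
import torch.nn as nn

# Expose the dataset location as a CLI argument instead of hardcoding it in train.py.
parser = argparse.ArgumentParser()
parser.add_argument("--dataset_dir", default="data/METR-LA",
                    help="directory containing the preprocessed dataset")
args = parser.parse_args()

# Pick the GPU when available and fall back to CPU otherwise.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Linear(16, 2).to(device)         # instead of model.cuda()
batch = torch.randn(4, 16, device=device)   # instead of batch.cuda()
out = model(batch)
```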