Hi, I'm running your dataset on a single Tesla P40, using the parameter settings from the paper:

model_para = {
    'item_size': len(items),
    'dilated_channels': 256,
    'dilations': [1, 2, 4, 8, 1, 2, 4, 8, 1, 2, 4, 8, 1, 2, 4, 8],
    'kernel_size': 3,
    'learning_rate': 0.001,
    'batch_size': 32,
    'iterations': 400,
    'is_negsample': True
}

Training is very slow: one iteration takes 416 minutes, so the default 400 iterations would take forever to finish. I'm using the coldrec2_pre.csv dataset. Have you run into this slow-training problem before?

-------------------------------------------------------train1
LOSS: 5.77672100067 ITER: 0 BATCH_NO: 169 STEP:170 total_batches:23006
TIME FOR BATCH 1.08627700806
TIME FOR ITER (mins) 416.514814123
[1, 2, 4, 8, 1, 2, 4, 8, 1, 2, 4, 8, 1, 2, 4, 8] is 32 layers (an entry range of 1…4 means dilations 1, 2, 4, 8). Training usually converges within about 2 iterations, so you don't need 400 (please read the README; 400 is not the default setting). Try NextitNet_TF_Pretrain_topk.py instead. Also, you are using a large embedding and a deep stack, so some slowness is normal; if you change the data-feeding part of the code, it will be much faster. The network is much faster than a Transformer.
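To make the "32 layers" remark concrete, here is a minimal sketch of what the `dilations` setting implies. It assumes (consistent with the reply above, but not taken from the repo's code) that each dilation entry corresponds to a residual block with two dilated convolutions, and uses the standard receptive-field increment of (k−1)·d per stacked dilated convolution:

```python
# Sketch: count the conv layers and estimate the receptive field implied
# by the 'dilations' setting above. Assumption (not from the repo): each
# dilation entry is a residual block with two dilated convolutions,
# which matches the "16 entries -> 32 layers" count in the reply.
dilations = [1, 2, 4, 8] * 4
kernel_size = 3
convs_per_block = 2  # assumed, to reproduce the stated 32 layers

n_layers = convs_per_block * len(dilations)

# Each dilated conv with kernel k and dilation d widens the receptive
# field by (k - 1) * d; stacking sums these increments onto the initial 1.
receptive_field = 1 + sum((kernel_size - 1) * d
                          for d in dilations
                          for _ in range(convs_per_block))

print(n_layers)         # 32
print(receptive_field)  # 241
```

Under these assumptions the stack covers a context window of a few hundred positions, which is why the per-batch cost is dominated by depth and channel width rather than by the sequence loop.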
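As a sanity check on the reported numbers, the per-iteration time and the projected total follow directly from the per-batch time in the log (variable names are illustrative; the figures are copied from the log output above):

```python
# Back-of-the-envelope check of the timing figures in the log above.
time_per_batch_s = 1.08627700806   # "TIME FOR BATCH" from the log
total_batches = 23006              # "total_batches" from the log
iterations = 400                   # the paper setting the reporter used

minutes_per_iter = time_per_batch_s * total_batches / 60
total_days = minutes_per_iter * iterations / (60 * 24)

print(round(minutes_per_iter, 1))  # ~416.5, matching "TIME FOR ITER (mins)"
print(round(total_days, 1))        # ~115.7 days for the full 400 iterations
```

This is why the reply stresses that ~2 iterations suffice for convergence: at 2 iterations the same hardware finishes in under 14 hours instead of months.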