Some problem while running on GPU #17
Please check your
The result of `$ nvcc --version`:

```
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2018 NVIDIA Corporation
Built on Sat_Aug_25_21:08:01_CDT_2018
Cuda compilation tools, release 10.0, V10.0.130
```

And I am executing with 10.0.130.
How about setting a larger timeout? Just try to add
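The exact setting was cut off above, but a larger timeout generally means letting each candidate build run longer before it is killed. A generic sketch of enforcing a per-build deadline with a subprocess (hypothetical commands, not FlexTensor's actual option):

```python
import subprocess
import sys

def run_with_timeout(cmd, timeout_s):
    """Run cmd and return its stdout, or None if it exceeds timeout_s seconds."""
    try:
        done = subprocess.run(cmd, capture_output=True, text=True, timeout=timeout_s)
        return done.stdout.strip()
    except subprocess.TimeoutExpired:
        return None  # a too-slow build is treated like a failed one

# A fast "build" finishes; a slow one is killed after half a second.
fast = run_with_timeout([sys.executable, "-c", "print('built')"], timeout_s=30)
slow = run_with_timeout([sys.executable, "-c", "import time; time.sleep(30)"], timeout_s=0.5)
print(fast, slow)  # -> built None
```

Raising the deadline gives slow CUDA compilations a chance to finish instead of being counted as failures.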
This method does not seem to work...
Then I'd suggest that you uncomment the two
It outputs these messages:

```
Optimize yolo convolution layer 9 shape (1, 512, 28, 28, 512, 512, 1, 1, 1, 1, 0, 1, 1)
graph space size 2
op 0 space size: 25344000
[Warning] Directory lib is not empty, but reusing it
op build fail:module 'tvm.tir' has no attribute 'ir_pass'
(the line above repeats for every schedule candidate)
warm up [1599727687.361282] [ inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf ]
op build fail:module 'tvm.tir' has no attribute 'ir_pass'
......
```
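The repeated failure line suggests that FlexTensor still calls `tvm.tir.ir_pass`, which newer TVM releases removed, and that the builder catches the resulting AttributeError and logs its message. The mechanism can be reproduced with a stand-in module (hypothetical names, not TVM itself):

```python
import types

# Stand-in modules for two releases of a library whose API changed
# (hypothetical; not actual TVM modules).
old_api = types.ModuleType("old_api")
old_api.ir_pass = lambda stmt: stmt          # the old releases exposed this

new_api = types.ModuleType("new_api")        # newer releases dropped it

def build_op(lib, stmt):
    """Mimic a build step that swallows the exception and logs its text."""
    try:
        return lib.ir_pass(stmt)
    except AttributeError as exc:
        print(f"op build fail:{exc}")
        return None

build_op(old_api, "stmt")  # succeeds silently
build_op(new_api, "stmt")  # prints: op build fail:module 'new_api' has no attribute 'ir_pass'
```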
I see. TVM is under fast development and the API keeps changing. To use FlexTensor, you can try TVM at commit 89da63e228eae2b0b4fe39770031a042858c52a7.
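Assuming a standard source build of TVM, pinning that commit would look roughly like this (the repository URL and build steps are my assumption, not spelled out in the thread):

```shell
# Fetch TVM and pin it to the commit recommended above.
git clone --recursive https://github.com/apache/tvm.git
cd tvm
git checkout 89da63e228eae2b0b4fe39770031a042858c52a7
git submodule update --init --recursive
# Then build as usual (cmake + make) and point PYTHONPATH at tvm/python,
# or reinstall the tvm Python package, before rerunning FlexTensor.
```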
Thanks, I will try it!
A follow-up to this issue: I got the following errors when running the same example. I am using TVM v0.7 (not exactly the commit you recommended). What could be the reason for those empty error messages?

```
$ python optimize_conv2d.py --shapes yolo --from 8 --to 9 --parallel 16 --target cuda
Optimize yolo convolution layer 9 shape (1, 512, 28, 28, 512, 512, 1, 1, 1, 1, 0, 1, 1)
graph space size 2
op 0 space size: 25344000
[Warning] Directory lib is not empty, but reusing it
op build fail:
(the line above repeats for every schedule candidate)
warm up [1600224234.227920] [ inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf ]
```
Did you check your
I want to test the performance of C9 of yolo after FlexTensor's optimization, but there seem to be some problems when running `optimize_conv2d.py` on GPU. I have seen a previous issue, and the current code uses 'spawn' for multiprocessing. It seems that it will not stop running because it can't find a suitable schedule.
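The 'spawn' start method mentioned above can be selected explicitly. A minimal sketch (with a trivial stand-in task, not FlexTensor's actual worker): fork'ed children inherit the parent's CUDA context and typically crash or hang, whereas spawn starts a fresh interpreter per child.

```python
import multiprocessing as mp

def measure(x):
    # Trivial stand-in for per-process measurement work; in a real
    # tuner each child would initialize its own CUDA context here.
    return x * x

if __name__ == "__main__":
    # 'spawn' launches a fresh Python interpreter for each worker,
    # so no CUDA state leaks across a fork boundary.
    ctx = mp.get_context("spawn")
    with ctx.Pool(processes=2) as pool:
        print(pool.map(measure, [1, 2, 3, 4]))  # -> [1, 4, 9, 16]
```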