trt.create_inference_graph step in detection.ipynb stuck for a long time #77
Comments
I have a similar issue. I try to run detection.ipynb on a Jetson Nano (JetPack 4.3, Python 3.6, TensorFlow 1.15), but when it reaches trt.create_inference_graph() it gets stuck for several minutes and then the kernel restarts. Memory usage is 3.3/3.9 GB and swap is almost empty. The last terminal output is: 2020-06-05 23:51:45.473972: I tensorflow/compiler/tf2tensorrt/convert/convert_graph.cc:633] Number of TensorRT candidate segments: 2 Appreciate any help.
Hello, have you ever solved this problem? I have encountered the same issue.
The kernel gets restarted in my case too, with my current settings. I guess a lot of people are facing this issue when trying to optimize the frozen graph using TensorRT. Repository owners, please fix this bug.
Here is the solution to this issue. @dkatsios @roarjn @evil-potato Add one new parameter to the code below, i.e.
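The comment's code block did not survive extraction, so a minimal sketch follows. It assumes the added parameter is is_dynamic_op=True, which defers TensorRT engine construction to runtime and is a commonly cited workaround when create_inference_graph hangs or exhausts memory on Jetson Nano; the remaining arguments mirror the notebook's call and are not from the original comment.

```python
# Sketch only: the exact parameter named in the original comment is unknown.
# Assumption: is_dynamic_op=True, so TRT engines are built lazily at inference
# time instead of during conversion (which is where the hang occurs).
import tensorflow.contrib.tensorrt as trt  # TF 1.x TF-TRT API

trt_graph = trt.create_inference_graph(
    input_graph_def=frozen_graph,       # frozen TF graph from the earlier cells
    outputs=output_names,               # output node names of the detection model
    max_batch_size=1,
    max_workspace_size_bytes=1 << 25,   # ~32 MB TensorRT workspace
    precision_mode='FP16',
    minimum_segment_size=50,
    is_dynamic_op=True                  # assumed new parameter
)
```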
When I looked closely at the Jupyter terminal, the error pointed to something like this.
Hi,
In detection.ipynb I have set score_threshold=0.3 as recommended. The cells above run as expected; however, the trt.create_inference_graph cell never completes. When I run this cell I see a Python process at 100% CPU in the top command. That CPU utilization then drops to 0, but the cell keeps running. I have left it running for more than 30 minutes.
https://github.com/NVIDIA-AI-IOT/tf_trt_models/blob/master/examples/detection/detection.ipynb
Appreciate any help.
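For context, here is a sketch of the cell in question, assuming it matches the conversion call in the linked detection.ipynb; frozen_graph and output_names come from the notebook's earlier cells, and the exact argument values may differ locally.

```python
# TF 1.x TF-TRT conversion as in the linked detection.ipynb (sketch).
import tensorflow.contrib.tensorrt as trt

trt_graph = trt.create_inference_graph(
    input_graph_def=frozen_graph,       # frozen detection graph built above
    outputs=output_names,               # e.g. boxes/scores/classes node names
    max_batch_size=1,
    max_workspace_size_bytes=1 << 25,   # ~32 MB workspace for TensorRT
    precision_mode='FP16',
    minimum_segment_size=50             # only convert segments of >= 50 ops
)
```

This is the call that hangs on the Nano; the fix proposed earlier in the thread adds one parameter to it.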