nanosam: cannot run example 1, error "corrupted size vs. prev_size" #749
Hi @usernameXYS, it looks like it may have been built against a different version of TensorRT - which version of JetPack-L4T are you running? Can you try running a container of the same tag, or rebuilding the container / re-generating those TRT engines for nanosam?
Hi @dusty-nv, I was running dustynv/nanosam:r36.2.0 in a Docker container. I also tried dustynv/nanosam:r35.2.1, which had GLIBC version issues when running; after updating I could run example 1, but other problems remained: I couldn't install nvarguscamerasrc, and conflicts with some libraries prevented me from obtaining camera data, so example 4 would not run either. I am now trying to build nanosam r36.4.0. Still testing...
Hi @dusty-nv,
I can now use a locally built nanosam (jetson-containers build, version 36.4.0), and the issue mentioned above no longer occurs. Thanks!
The 36.4 environment is as follows; the earlier issue may have been due to differences between the local machine and the Docker environment:
nanosam 36.4
TensorRT: 10.4
CUDA: 12.6.77

Ok great, glad you got it working! I think it was because the 35.2 image was from JetPack 5, and 36.4 is JetPack 6.
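To make the compatibility point above concrete: L4T r35.x corresponds to JetPack 5 and r36.x to JetPack 6, so engines and containers from different major L4T releases are not expected to work together. Below is a minimal, illustrative sketch of a tag check; the helper names are my own, and note (as this issue shows) that matching the major release is necessary but not sufficient - the TensorRT build inside the container must also match the one the engines were generated with.

```python
# Illustrative sketch (hypothetical helpers): compare L4T major releases
# of a host and a container tag like "r36.2.0". L4T r35.x = JetPack 5,
# r36.x = JetPack 6, so a cross-major mix is expected to fail.

def l4t_major(tag: str) -> int:
    """Extract the L4T major release from a tag like 'r36.2.0'."""
    return int(tag.lstrip("r").split(".")[0])

def same_jetpack_family(host_tag: str, container_tag: str) -> bool:
    """First-pass check only: TRT engines may still need regenerating
    even within the same major release if TensorRT versions differ."""
    return l4t_major(host_tag) == l4t_major(container_tag)

print(same_jetpack_family("r36.4.0", "r36.2.0"))  # True (both JetPack 6)
print(same_jetpack_family("r36.4.0", "r35.2.1"))  # False (JetPack 6 vs 5)
```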
Hi,
I pulled the image directly with the jetson-containers run dustynv/nanosam:r36.2.0 command, and when I ran it the following problem occurred. I haven't made any changes. Has anyone encountered this problem? Thanks
/opt/nanosam# python3 examples/basic_usage.py --image_encoder="data/resnet18_image_encoder.engine" --mask_decoder="data/mobile_sam_mask_decoder.engine"
/usr/local/lib/python3.10/dist-packages/transformers/utils/hub.py:123: FutureWarning: Using TRANSFORMERS_CACHE is deprecated and will be removed in v5 of Transformers. Use HF_HOME instead.
  warnings.warn(
[12/17/2024-20:20:29] [TRT] [W] Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
[12/17/2024-20:20:29] [TRT] [W] Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
/opt/nanosam/nanosam/utils/predictor.py:84: UserWarning: The given NumPy array is not writable, and PyTorch does not support non-writable tensors. This means writing to this tensor will result in undefined behavior. You may want to copy the array to protect its data or make it writable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at /tmp/pytorch/torch/csrc/utils/tensor_numpy.cpp:206.)
image_torch_resized = torch.from_numpy(image_np_resized).permute(2, 0, 1)
/opt/nanosam/nanosam/utils/predictor.py:103: UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.array() before converting to a tensor. (Triggered internally at /tmp/pytorch/torch/csrc/utils/tensor_new.cpp:261.)
image_point_coords = torch.tensor([points]).float().cuda()
corrupted size vs. prev_size
Aborted (core dumped)
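For reference, the two UserWarnings in the log above are separate from the crash itself (the "corrupted size vs. prev_size" abort is a glibc heap-corruption report, consistent with an engine/TensorRT mismatch), and both come from how NumPy data is handed to PyTorch. A NumPy-only sketch of their causes - the torch calls from predictor.py are replaced here by plain array operations, since the fix in each case is on the NumPy side:

```python
import numpy as np

# First warning: torch.from_numpy() on a non-writable array (e.g. one
# backed by a read-only image buffer). Copying before conversion avoids it.
image_np = np.zeros((4, 4, 3), dtype=np.uint8)
image_np.flags.writeable = False          # simulate a read-only source array
writable = image_np.copy()                # .copy() yields a writable array
print(writable.flags.writeable)           # True

# Second warning: building a tensor from a Python list of ndarrays is slow.
# Stacking into a single ndarray first avoids it.
points = [np.array([10.0, 20.0]), np.array([30.0, 40.0])]
coords = np.asarray(points)[None]         # shape (1, 2, 2)
print(coords.shape)
```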