
nanosam: cannot run example 1, error: corrupted size vs. prev_size #749

Open
usernameXYS opened this issue Dec 17, 2024 · 4 comments

@usernameXYS

Hi,
I pulled the image directly with the jetson-containers run dustynv/nanosam:r36.2.0 command and ran it without any changes, but the following problem occurred. Has anyone else encountered this? Thanks.

/opt/nanosam# python3 examples/basic_usage.py --image_encoder="data/resnet18_image_encoder.engine" --mask_decoder="data/mobile_sam_mask_decoder.engine"
/usr/local/lib/python3.10/dist-packages/transformers/utils/hub.py:123: FutureWarning: Using TRANSFORMERS_CACHE is deprecated and will be removed in v5 of Transformers. Use HF_HOME instead.
warnings.warn(
[12/17/2024-20:20:29] [TRT] [W] Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
[12/17/2024-20:20:29] [TRT] [W] Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
/opt/nanosam/nanosam/utils/predictor.py:84: UserWarning: The given NumPy array is not writable, and PyTorch does not support non-writable tensors. This means writing to this tensor will result in undefined behavior. You may want to copy the array to protect its data or make it writable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at /tmp/pytorch/torch/csrc/utils/tensor_numpy.cpp:206.)
image_torch_resized = torch.from_numpy(image_np_resized).permute(2, 0, 1)
/opt/nanosam/nanosam/utils/predictor.py:103: UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.array() before converting to a tensor. (Triggered internally at /tmp/pytorch/torch/csrc/utils/tensor_new.cpp:261.)
image_point_coords = torch.tensor([points]).float().cuda()
corrupted size vs. prev_size
Aborted (core dumped)
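
For reference, one quick way to check which TensorRT version the container actually ships (assuming the standard Python bindings and Debian packages are present, as in the dustynv images):

python3 -c "import tensorrt; print(tensorrt.__version__)"
dpkg -l | grep -i tensorrt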

@dusty-nv
Owner

Hi @usernameXYS, it looks like the engines may have been built against a different version of TensorRT. Which version of JetPack-L4T are you running? Can you try running a container with a matching tag, or rebuilding the container / re-generating those TRT engines for nanosam?
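
As a rough sketch (assuming jetson-containers is checked out on the host and its autotag helper is on your PATH), something like this should pick or build a tag that matches your L4T release:

cat /etc/nv_tegra_release                 # shows the host L4T release
jetson-containers run $(autotag nanosam)  # runs/pulls a tag compatible with the host
jetson-containers build nanosam           # or rebuild the container locally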

@usernameXYS
Author

Hi @dusty-nv,
Thank you for your reply. Below is my environment.
Host environment:
L4T_VERSION=36.4.0
JETPACK_VERSION=6.1
CUDA_VERSION=12.6
PYTHON_VERSION=3.10
LSB_RELEASE=22.04 (jammy)

dustynv/nanosam:r36.2.0 Docker container environment:
TensorRT: 8.6.2.3-1+cuda12.2

In addition, I tried the dustynv/nanosam:r35.2.1 image, which had GLIBC version issues at runtime. After updating it I could run example 1, but there were other problems: I couldn't install nvarguscamerasrc, and library conflicts prevented me from getting camera data, so I cannot run example 4.

I am now trying to build nanosam for r36.4.0; still testing.
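
If the crash is an engine/TensorRT mismatch, the serialized .engine files would also need to be regenerated inside the new container. A rough sketch based on the NanoSAM build instructions (assuming the ONNX exports are already under data/; flags may need adjusting for your TensorRT version):

trtexec --onnx=data/resnet18_image_encoder.onnx --saveEngine=data/resnet18_image_encoder.engine --fp16
trtexec --onnx=data/mobile_sam_mask_decoder.onnx --saveEngine=data/mobile_sam_mask_decoder.engine --minShapes=point_coords:1x1x2,point_labels:1x1 --optShapes=point_coords:1x1x2,point_labels:1x1 --maxShapes=point_coords:1x10x2,point_labels:1x10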

@usernameXYS
Author

Hi @dusty-nv,
I can now run a locally built nanosam container (built with jetson-containers for r36.4.0), and the issue mentioned above no longer occurs. Thanks!
The r36.4.0 environment is as follows; the earlier issue was probably due to the version differences between the local machine and the Docker container environment.
nanosam r36.4.0 container:
TensorRT: 10.4
CUDA: 12.6.77

@dusty-nv
Owner

dusty-nv commented Dec 19, 2024 via email
