
TypeError: 'ValueError' object is not callable when using CUDAExecutionProvider #3

Open
libhot opened this issue Aug 22, 2024 · 2 comments

Comments


libhot commented Aug 22, 2024

The app runs successfully in CPUExecutionProvider mode, but fails in CUDAExecutionProvider mode.

Error log:

python app.py
/opt/miniconda3/envs/ReHiFace/lib/python3.10/site-packages/pandas/core/computation/expressions.py:21: UserWarning: Pandas requires version '2.8.4' or newer of 'numexpr' (version '2.8.3' currently installed).
from pandas.core.computation.check import NUMEXPR_INSTALLED
2024-08-22 10:06:06,271 - cv2box - INFO - Use default log level: info, or you can set env 'CV_LOG_LEVEL' to debug/info/warning/error/critical
Running on local URL: http://0.0.0.0:5000

To create a public link, set share=True in launch().
warning: det_size is already set in scrfd model, ignore
pfpld ['CUDAExecutionProvider', 'CPUExecutionProvider']
Model warm up done !
[[gpu] onnx 10 times fps: 42.141950004018966]
[[gpu] onnx 10 times fps: 42.45289932084333]
2024-08-22 10:06:25.617085482 [W:onnxruntime:Default, cuda_execution_provider.cc:2310 ConvTransposeNeedFallbackToCPU] Dropping the ConvTranspose node: conv2d_transpose to CPU because it requires asymmetric padding which the CUDA EP currently does not support
2024-08-22 10:06:25.617124996 [W:onnxruntime:Default, cuda_execution_provider.cc:2411 GetCapability] CUDA kernel not supported. Fallback to CPU execution provider for Op type: ConvTranspose node name: conv2d_transpose
2024-08-22 10:06:25.617650844 [W:onnxruntime:Default, cuda_execution_provider.cc:2310 ConvTransposeNeedFallbackToCPU] Dropping the ConvTranspose node: conv2d_transpose_1 to CPU because it requires asymmetric padding which the CUDA EP currently does not support
2024-08-22 10:06:25.617671885 [W:onnxruntime:Default, cuda_execution_provider.cc:2411 GetCapability] CUDA kernel not supported. Fallback to CPU execution provider for Op type: ConvTranspose node name: conv2d_transpose_1
2024-08-22 10:06:25.618188729 [W:onnxruntime:Default, cuda_execution_provider.cc:2310 ConvTransposeNeedFallbackToCPU] Dropping the ConvTranspose node: conv2d_transpose_2 to CPU because it requires asymmetric padding which the CUDA EP currently does not support
2024-08-22 10:06:25.618209622 [W:onnxruntime:Default, cuda_execution_provider.cc:2411 GetCapability] CUDA kernel not supported. Fallback to CPU execution provider for Op type: ConvTranspose node name: conv2d_transpose_2
2024-08-22 10:06:25.618725877 [W:onnxruntime:Default, cuda_execution_provider.cc:2310 ConvTransposeNeedFallbackToCPU] Dropping the ConvTranspose node: conv2d_transpose_3 to CPU because it requires asymmetric padding which the CUDA EP currently does not support
2024-08-22 10:06:25.618746728 [W:onnxruntime:Default, cuda_execution_provider.cc:2411 GetCapability] CUDA kernel not supported. Fallback to CPU execution provider for Op type: ConvTranspose node name: conv2d_transpose_3
2024-08-22 10:06:25.619162584 [W:onnxruntime:Default, cuda_execution_provider.cc:2310 ConvTransposeNeedFallbackToCPU] Dropping the ConvTranspose node: conv2d_transpose_4 to CPU because it requires asymmetric padding which the CUDA EP currently does not support
2024-08-22 10:06:25.619183908 [W:onnxruntime:Default, cuda_execution_provider.cc:2411 GetCapability] CUDA kernel not supported. Fallback to CPU execution provider for Op type: ConvTranspose node name: conv2d_transpose_4
2024-08-22 10:06:25.619586327 [W:onnxruntime:Default, cuda_execution_provider.cc:2310 ConvTransposeNeedFallbackToCPU] Dropping the ConvTranspose node: conv2d_transpose_5 to CPU because it requires asymmetric padding which the CUDA EP currently does not support
2024-08-22 10:06:25.619602145 [W:onnxruntime:Default, cuda_execution_provider.cc:2411 GetCapability] CUDA kernel not supported. Fallback to CPU execution provider for Op type: ConvTranspose node name: conv2d_transpose_5
xseg ['CUDAExecutionProvider', 'CPUExecutionProvider']
Model warm up done !
[[gpu] onnx 10 times fps: 3.398418237666232]
[[gpu] onnx 10 times fps: 3.8644535658839736]
9O ['CUDAExecutionProvider', 'CPUExecutionProvider']
Model warm up done !
[[gpu] onnx 10 times fps: 4.281553497165233]
[[gpu] onnx 10 times fps: 5.184034725818552]
warning: det_size is already set in scrfd model, ignore
Traceback (most recent call last):
File "/share/ai/ReHiFace-S-main/face_detect/face_align_68.py", line 200, in init
self.ort_session = ort.InferenceSession(onnx_path)
File "/opt/miniconda3/envs/ReHiFace/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 360, in init
self._create_inference_session(providers, provider_options, disabled_optimizers)
File "/opt/miniconda3/envs/ReHiFace/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 388, in _create_inference_session
raise ValueError(
ValueError: This ORT build has ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'] enabled. Since ORT 1.9, you are required to explicitly set the providers parameter when instantiating InferenceSession. For example, onnxruntime.InferenceSession(..., providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'], ...)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/opt/miniconda3/envs/ReHiFace/lib/python3.10/site-packages/gradio/queueing.py", line 536, in process_events
response = await route_utils.call_process_api(
File "/opt/miniconda3/envs/ReHiFace/lib/python3.10/site-packages/gradio/route_utils.py", line 288, in call_process_api
output = await app.get_blocks().process_api(
File "/opt/miniconda3/envs/ReHiFace/lib/python3.10/site-packages/gradio/blocks.py", line 1931, in process_api
result = await self.call_function(
File "/opt/miniconda3/envs/ReHiFace/lib/python3.10/site-packages/gradio/blocks.py", line 1516, in call_function
prediction = await anyio.to_thread.run_sync( # type: ignore
File "/opt/miniconda3/envs/ReHiFace/lib/python3.10/site-packages/anyio/to_thread.py", line 56, in run_sync
return await get_async_backend().run_sync_in_worker_thread(
File "/opt/miniconda3/envs/ReHiFace/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 2177, in run_sync_in_worker_thread
return await future
File "/opt/miniconda3/envs/ReHiFace/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 859, in run
result = context.run(func, *args)
File "/opt/miniconda3/envs/ReHiFace/lib/python3.10/site-packages/gradio/utils.py", line 826, in wrapper
response = f(*args, **kwargs)
File "/share/ai/ReHiFace-S-main/app.py", line 106, in process_image_video
initialize_processor()
File "/share/ai/ReHiFace-S-main/app.py", line 24, in initialize_processor
processor = FaceSwapProcessor(crop_size=opt.input_size)
File "/share/ai/ReHiFace-S-main/app.py", line 32, in init
self.face_alignment = face_alignment_landmark(lm_type=68)
File "/share/ai/ReHiFace-S-main/face_detect/face_align_68.py", line 226, in init
self.fan = pfpld(cpu=False)
File "/share/ai/ReHiFace-S-main/face_detect/face_align_68.py", line 202, in init
raise e("load onnx failed")
TypeError: 'ValueError' object is not callable
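The TypeError at the bottom of this trace is a secondary bug in face_align_68.py: the except block calls the caught exception instance (raise e("load onnx failed")), and exception instances are not callable. A minimal sketch of the usual fix, chaining a new exception instead; load_session is an illustrative name, and the ValueError stand-in replaces the real ort.InferenceSession call:

```python
def load_session(onnx_path):
    try:
        # Stand-in for ort.InferenceSession(onnx_path), which raises the
        # ValueError shown above when no providers argument is given.
        raise ValueError("explicit providers required since ORT 1.9")
    except ValueError as e:
        # Buggy original: raise e("load onnx failed")
        # -> TypeError: 'ValueError' object is not callable
        raise RuntimeError("load onnx failed") from e
```

With this pattern, the original ValueError survives as __cause__, so the real reason (the missing providers argument) still appears in the traceback.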


libhot commented Aug 22, 2024

Resolved by forcing CPUExecutionProvider:

ort.InferenceSession(onnx_path, providers=['CPUExecutionProvider'])
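Forcing the CPU provider works around the crash, but the underlying requirement (since ORT 1.9, per the ValueError above) is only an explicit providers list, so the GPU can still be used. A sketch of a provider-selection helper; pick_providers is a made-up name, and in practice available would come from onnxruntime.get_available_providers():

```python
def pick_providers(available, prefer_gpu=True):
    """Build an explicit provider list, preferring CUDA with CPU fallback."""
    order = (["CUDAExecutionProvider", "CPUExecutionProvider"]
             if prefer_gpu else ["CPUExecutionProvider"])
    chosen = [p for p in order if p in available]
    # Always fall back to CPU so InferenceSession gets a non-empty list.
    return chosen or ["CPUExecutionProvider"]
```

Usage would look like ort.InferenceSession(onnx_path, providers=pick_providers(ort.get_available_providers())), keeping GPU inference where the build supports it.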


QUTGXX commented Aug 22, 2024

> Resolved by forcing CPUExecutionProvider: ort.InferenceSession(onnx_path, providers=['CPUExecutionProvider'])

What are your onnxruntime-gpu and CUDA versions? Make sure they are compatible with each other, or match the versions we mentioned. @libhot
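A quick way to answer the version question is to print what the installed build actually reports. ort_build_info is an illustrative helper; onnxruntime.__version__ and onnxruntime.get_available_providers() are part of the public API:

```python
def ort_build_info():
    """Report the installed onnxruntime version and its execution providers."""
    try:
        import onnxruntime as ort
    except ImportError:
        return None  # neither onnxruntime nor onnxruntime-gpu is installed
    return {
        "version": ort.__version__,
        "providers": ort.get_available_providers(),
    }
```

If "CUDAExecutionProvider" is missing from the providers list, the installed onnxruntime build does not match the CUDA/cuDNN libraries on the machine, which is the usual cause of silent CPU fallback.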
