CUDA Onnx models #9727
-
Hi - using CUDA ONNX models with the ONNX connector results in an exception. Here's the message: `CUDA execution provider is not enabled in this build.`
-
This error message suggests you have a model optimized for GPU but are using the CPU package. Running ONNX with GPU locally is not so straightforward.
I've tested this on my machine and it works for Python.
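For the Python case mentioned above, a minimal sketch of the fix: the CUDA execution provider only exists in the `onnxruntime-gpu` package, so the session should request it with a CPU fallback. The helper `pick_providers` and the model path `model.onnx` are hypothetical names for illustration, not part of the original thread.

```python
def pick_providers(available):
    """Prefer the CUDA execution provider when the GPU build of
    onnxruntime is installed; fall back to CPU otherwise.
    `available` is the list returned by
    onnxruntime.get_available_providers()."""
    preferred = ["CUDAExecutionProvider", "CPUExecutionProvider"]
    return [p for p in preferred if p in available]

# Hedged usage sketch -- assumes the `onnxruntime-gpu` package is
# installed (the CPU-only `onnxruntime` package would raise the
# "CUDA execution provider is not enabled in this build" error):
#
#   import onnxruntime as ort
#   providers = pick_providers(ort.get_available_providers())
#   session = ort.InferenceSession("model.onnx", providers=providers)
```

The fallback ordering matters: passing `["CUDAExecutionProvider", "CPUExecutionProvider"]` lets ONNX Runtime place unsupported operators on the CPU instead of failing outright.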
-
Looks like this issue/pull request should resolve the problem:
How about function calling? Is that something that could work with ONNX models?
-
@nmoeller Currently the only version we have for our C# connectors is the CPU-enabled one.
We are currently working on this issue.
-
@deepinderdeol The GenAI library doesn't support function calling as of now. We currently have an open discussion in their repository asking for this feature for integration.