When I run the ONNX to NNP conversion command without quantization it works; however, if I quantize the model I get the following opset error:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/nnabla/utils/cli/cli.py", line 147, in cli_main
    return_value = args.func(args)
  File "/usr/local/lib/python3.8/dist-packages/nnabla/utils/cli/convert.py", line 111, in convert_command
    nnabla.utils.converter.convert_files(args, args.files, output)
  File "/usr/local/lib/python3.8/dist-packages/nnabla/utils/converter/commands.py", line 255, in convert_files
    nnp = _import_file(args, ifiles)
  File "/usr/local/lib/python3.8/dist-packages/nnabla/utils/converter/commands.py", line 41, in _import_file
    return OnnxImporter(*ifiles).execute()
  File "/usr/local/lib/python3.8/dist-packages/nnabla/utils/converter/onnx/importer.py", line 4449, in execute
    return self.onnx_model_to_nnp_protobuf()
  File "/usr/local/lib/python3.8/dist-packages/nnabla/utils/converter/onnx/importer.py", line 4426, in onnx_model_to_nnp_protobuf
    raise ValueError(
ValueError: Unsupported opset from domain ai.onnx.m
Thank you for reporting!
Currently we do not support importing quantized ONNX models into NNP, because some of the ONNX layers cannot be converted.
But please let us look into which layers are causing this error.
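As a first step, it may help to list which opset domains the quantized model imports and which operators live outside the default domain. A minimal sketch using the onnx Python package (the file name is a placeholder):

```python
import onnx

# Placeholder path to the quantized, opset-converted model.
model = onnx.load("unet_int8_op13.onnx")

# Every opset domain the model declares (empty string = default ai.onnx).
for opset in model.opset_import:
    print("domain:", opset.domain or "ai.onnx", "version:", opset.version)

# Operators that live in a non-default domain; these are the usual
# suspects for an "Unsupported opset from domain ..." import error.
for node in model.graph.node:
    if node.domain:
        print(node.domain, node.op_type)
```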
Hi,
I have been trying to build a downsized U-Net model to deploy on the Spresense board, and I have been using PyTorch. Deploying on the Spresense board requires the model file to be in .nnb format, so I use the following steps to generate a .nnb model file.
PyTorch code to generate the .onnx file
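Since the exact network is beside the point here, a minimal sketch with a toy stand-in for the U-Net and an assumed input shape:

```python
import torch
import torch.nn as nn

# Toy stand-in for the downsized U-Net (the real architecture is larger).
class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Conv2d(1, 8, 3, padding=1)
        self.dec = nn.Conv2d(8, 1, 3, padding=1)

    def forward(self, x):
        return self.dec(torch.relu(self.enc(x)))

model = TinyUNet().eval()
dummy = torch.randn(1, 1, 64, 64)  # assumed input shape

# Export to ONNX; the opset is bumped to 13 in a later step.
torch.onnx.export(model, dummy, "unet.onnx",
                  input_names=["input"], output_names=["output"],
                  opset_version=11)
```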
ONNX code to quantize the model to int8
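A sketch using onnxruntime's dynamic quantization (whether dynamic or static quantization was used is an assumption; static quantization with calibration data would also produce int8 operators):

```python
from onnxruntime.quantization import quantize_dynamic, QuantType

# Quantize weights to int8; the output model gains quantization operators,
# some of which may come from non-default opset domains.
quantize_dynamic("unet.onnx", "unet_int8.onnx", weight_type=QuantType.QInt8)
```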
Convert the opset version of the ONNX model to 13
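A sketch with the onnx version converter (note that it only rewrites operators in the default ai.onnx domain):

```python
import onnx
from onnx import version_converter

model = onnx.load("unet_int8.onnx")
# Upgrade the default-domain opset to 13.
converted = version_converter.convert_version(model, 13)
onnx.save(converted, "unet_int8_op13.onnx")
```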
CLI to convert the ONNX file to an NNP file
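The conversion commands, with the placeholder file names from the sketches above (the second line is the eventual NNP-to-NNB step for the Spresense board):

```
nnabla_cli convert unet_int8_op13.onnx unet.nnp
nnabla_cli convert unet.nnp unet.nnb
```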
ISSUE
When I run the ONNX to NNP conversion command without quantization it works; however, if I quantize the model I get the opset error shown in the traceback at the top of this issue.