ONNX Runtime quantization is under active development. Please use onnxruntime 1.6.0 or newer for broader quantization support.
This example loads an image classification model converted from TensorFlow Lite and confirms its accuracy and speed on the ILSVRC2012 (ImageNet) validation dataset. You need to download this dataset yourself.
onnx: 1.9.0
onnxruntime: 1.10.0
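A quick way to confirm your environment matches is to print the installed versions (a trivial sanity check; any script name works):

```python
# check_env.py - print installed versions to confirm they match the
# versions this example was validated against.
import onnx
import onnxruntime

print("onnx:", onnx.__version__)
print("onnxruntime:", onnxruntime.__version__)
```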
Use the tf2onnx tool to convert the TFLite model to an ONNX model:
wget https://github.com/mlcommons/mobile_models/raw/main/v0_7/tflite/mobilenet_edgetpu_224_1.0_float.tflite
python -m tf2onnx.convert --opset 11 --tflite mobilenet_edgetpu_224_1.0_float.tflite --output mobilenet_v3.onnx
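Before quantizing, it can help to confirm the converted model is valid and runs. A minimal sketch, assuming the TFLite-style NHWC input layout of 1x224x224x3 (the actual input name and shape can be read from the session):

```python
# verify_convert.py - sanity-check the converted model with a random input.
import numpy as np
import onnx
import onnxruntime as ort

model = onnx.load("mobilenet_v3.onnx")
onnx.checker.check_model(model)                 # structural validation

sess = ort.InferenceSession("mobilenet_v3.onnx",
                            providers=["CPUExecutionProvider"])
inp = sess.get_inputs()[0]
print("input:", inp.name, inp.shape)            # NHWC expected for a TFLite-converted model

x = np.random.rand(1, 224, 224, 3).astype(np.float32)  # assumed 1x224x224x3 NHWC
logits = sess.run(None, {inp.name: x})[0]
print("output shape:", logits.shape)
```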
Quantize the model with QLinearOps:
# --input_model: the float model path (*.onnx)
bash run_tuning.sh --input_model=path/to/model \
                   --config=mobilenet_v3.yaml \
                   --output_model=path/to/save
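The shell script presumably wraps a Python entry point; a minimal sketch of what that likely looks like with the Neural Compressor 1.x experimental API (an assumption; the example's actual script may differ):

```python
# quantize.py - sketch of accuracy-driven quantization with Neural Compressor.
from neural_compressor.experimental import Quantization, common

quantizer = Quantization("mobilenet_v3.yaml")    # calibration/tuning settings
quantizer.model = common.Model("path/to/model")  # the float ONNX model
q_model = quantizer.fit()                        # calibrate, quantize, tune for accuracy
q_model.save("path/to/save")                     # write the quantized *.onnx
```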
Quantize the model in QDQ mode:
# --input_model: the float model path (*.onnx)
bash run_tuning.sh --input_model=path/to/model \
                   --config=mobilenet_v3_qdq.yaml \
                   --output_model=path/to/save
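The two configs differ only in how quantization is represented in the graph: QLinearOps mode emits fused QLinear* operators, while QDQ mode inserts QuantizeLinear/DequantizeLinear pairs around float operators. For illustration, onnxruntime's own static quantization API exposes the same choice via QuantFormat. This is not this example's code path, and the input tensor name "input" plus the random calibration data are placeholders:

```python
# qdq_vs_qoperator.py - illustrate the two quantization formats with
# onnxruntime's static quantization API (placeholder calibration data).
import numpy as np
from onnxruntime.quantization import (CalibrationDataReader, QuantFormat,
                                      quantize_static)

class RandomReader(CalibrationDataReader):
    """Feeds a few random NHWC batches; real calibration uses ImageNet images."""
    def __init__(self, input_name, batches=8):
        self._it = iter(
            {input_name: np.random.rand(1, 224, 224, 3).astype(np.float32)}
            for _ in range(batches))

    def get_next(self):
        return next(self._it, None)

# QuantFormat.QDQ       -> QuantizeLinear/DequantizeLinear pairs
# QuantFormat.QOperator -> fused QLinear* ops (the QLinearOps mode above)
quantize_static("mobilenet_v3.onnx", "mobilenet_v3_qdq.onnx",
                RandomReader("input"),       # "input" is an assumed tensor name
                quant_format=QuantFormat.QDQ)
```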
Benchmark the model:
# --input_model: the (quantized) model path (*.onnx)
bash run_benchmark.sh --input_model=path/to/model \
                      --config=mobilenet_v3.yaml \
                      --mode=performance  # or --mode=accuracy
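For a quick standalone latency check outside run_benchmark.sh, a plain onnxruntime timing loop works. A minimal sketch, again assuming the NHWC 1x224x224x3 input from above:

```python
# latency.py - rough latency measurement of an ONNX model on CPU.
import time
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("path/to/model",
                            providers=["CPUExecutionProvider"])
name = sess.get_inputs()[0].name
x = np.random.rand(1, 224, 224, 3).astype(np.float32)  # assumed NHWC input

for _ in range(10):                                    # warm-up runs
    sess.run(None, {name: x})

runs = 100
start = time.perf_counter()
for _ in range(runs):
    sess.run(None, {name: x})
print(f"avg latency: {(time.perf_counter() - start) / runs * 1e3:.2f} ms")
```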