# Convert a Model to Optimized Formats for Production

This module focuses on automatically converting research models written in Python into serialized, optimized formats.

We currently support two research frameworks:

- TensorFlow
- PyTorch

MLModelCI supports the following conversions:

- XGBoost, LightGBM, scikit-learn -> PyTorch
- PyTorch -> TorchScript
- XGBoost, LightGBM, scikit-learn, PyTorch -> ONNX
- TensorFlow -> TensorFlow Serving format (TensorFlow SavedModel)
- TensorFlow -> TensorRT format
- ONNX -> TensorRT format

Upon receiving your model registration, MLModelCI converts your model to as many optimized formats as possible. You can also use the following APIs to convert models manually.

These APIs save the converted model at the given `saved_path` and return the success or failure status of the conversion. The `saved_path` must either be the standard path generated by `modelci.hub.utils.generate_path(...)` or follow the same rules (see Tricks with Model Saved Path); sticking to this path format can make your life easier.

## 1. PyTorch Conversion

```python
from modelci.hub.converter import PyTorchConverter

xgboost_model = ...
xgboost_model_inputs = ...
torch_xgboost = PyTorchConverter.from_xgboost(xgboost_model, inputs=xgboost_model_inputs)

lgbm_model = ...
torch_lgbm = PyTorchConverter.from_lightgbm(lgbm_model)

sklearn_model = ...
torch_sklearn = PyTorchConverter.from_sklearn(sklearn_model)

onnx_model = ...
torch_onnx = PyTorchConverter.from_onnx(onnx_model)
```
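As an illustration of the inputs these converters expect, the sketch below trains a small scikit-learn model on toy data; it assumes only scikit-learn and NumPy are installed, and the converter call itself is left as a comment since it requires an MLModelCI installation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Toy data and a small tree-ensemble model standing in for sklearn_model above.
X = np.random.rand(100, 4).astype(np.float32)
y = (X[:, 0] > 0.5).astype(int)
sklearn_model = RandomForestClassifier(n_estimators=10).fit(X, y)

# Tree-ensemble converters typically use sample inputs to infer tensor shapes.
sklearn_inputs = X[:1]

# Requires MLModelCI:
# from modelci.hub.converter import PyTorchConverter
# torch_sklearn = PyTorchConverter.from_sklearn(sklearn_model)
```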

## 2. TorchScript Conversion

```python
from modelci.hub.converter import TorchScriptConverter

torch_module = ...
saved_path = ...

TorchScriptConverter.from_torch_module(torch_module, saved_path)
```
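TorchScript conversion is presumably built on PyTorch's own `torch.jit` tracing machinery. The standalone sketch below (assuming only `torch` is installed; `TinyNet` and the `/tmp` path are made up for illustration) shows the equivalent trace, save, and reload round trip:

```python
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    """A toy module standing in for torch_module above."""
    def forward(self, x):
        return torch.relu(x) * 2

module = TinyNet().eval()
example = torch.zeros(1, 3)

# Trace the module into a serialized TorchScript program and reload it.
scripted = torch.jit.trace(module, example)
scripted.save("/tmp/tiny_net.pt")
reloaded = torch.jit.load("/tmp/tiny_net.pt")
```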

## 3. TensorFlow Serving Conversion

```python
from modelci.hub.converter import TFSConverter

tf_module = ...
saved_path = ...

TFSConverter.from_tf_model(tf_module, saved_path)
```
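The TensorFlow Serving format is the standard SavedModel layout, which TensorFlow can produce directly with `tf.saved_model.save`. A minimal sketch, assuming TensorFlow is installed (`Doubler` and the `/tmp` path are hypothetical stand-ins):

```python
import tensorflow as tf

class Doubler(tf.Module):
    """A toy stand-in for tf_module above."""
    @tf.function(input_signature=[tf.TensorSpec([None, 3], tf.float32)])
    def __call__(self, x):
        return x * 2.0

tf_module = Doubler()
# Export in the SavedModel layout that TensorFlow Serving loads.
tf.saved_model.save(tf_module, "/tmp/doubler_savedmodel")
reloaded = tf.saved_model.load("/tmp/doubler_savedmodel")
```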

## 4. ONNX Conversion

```python
from modelci.hub.converter import ONNXConverter

saved_path = ...

torch_module = ...
torch_inputs = ...
ONNXConverter.from_torch_module(torch_module, inputs=torch_inputs, save_path=saved_path)

xgboost_model = ...
xgboost_inputs = ...
ONNXConverter.from_xgboost(xgboost_model, inputs=xgboost_inputs, save_path=saved_path)

lgbm_model = ...
lgbm_inputs = ...
ONNXConverter.from_lightgbm(lgbm_model, inputs=lgbm_inputs, save_path=saved_path)

sklearn_model = ...
sklearn_inputs = ...
ONNXConverter.from_sklearn(sklearn_model, inputs=sklearn_inputs, save_path=saved_path)
```

## 5. TRT Conversion

Unlike the converters above, the TRT converter accepts both the TensorFlow SavedModel and ONNX formats for further optimization. In practice, the pipeline PyTorch -> ONNX -> TRT gives the best performance; in most cases, however, a PyTorch model cannot be converted further along this path.

### From TF SavedModel to TF-TRT

```python
from modelci.hub.converter import TRTConverter
from modelci.types.bo import IOShape

tf_path = ...
trt_path = ...
inputs = [IOShape([...], dtype=..., format=...), ...]
outputs = [IOShape([...], dtype=..., format=...), ...]

TRTConverter.from_saved_model(tf_path, trt_path, inputs=inputs, outputs=outputs)
```

### From ONNX to TRT

```python
from modelci.hub.converter import TRTConverter
from modelci.types.bo import IOShape

onnx_path = ...
save_path = ...
inputs = [IOShape([...], dtype=...), ...]
outputs = [IOShape([...], dtype=...), ...]

TRTConverter.from_onnx(onnx_path, save_path, inputs=inputs, outputs=outputs)
```