From 1ff31a4b2960e9b7f125e96e7681ef372aad0211 Mon Sep 17 00:00:00 2001
From: Roman Bredehoft
Date: Wed, 17 Jan 2024 17:44:57 +0100
Subject: [PATCH] chore: move torch_to_numpy_with_onnx.svg to gitbook assets

---
 .../assets}/torch_to_numpy_with_onnx.svg | 0
 docs/developer-guide/onnx_pipeline.md     | 2 +-
 2 files changed, 1 insertion(+), 1 deletion(-)
 rename docs/{_static/compilation-pipeline => .gitbook/assets}/torch_to_numpy_with_onnx.svg (100%)

diff --git a/docs/_static/compilation-pipeline/torch_to_numpy_with_onnx.svg b/docs/.gitbook/assets/torch_to_numpy_with_onnx.svg
similarity index 100%
rename from docs/_static/compilation-pipeline/torch_to_numpy_with_onnx.svg
rename to docs/.gitbook/assets/torch_to_numpy_with_onnx.svg
diff --git a/docs/developer-guide/onnx_pipeline.md b/docs/developer-guide/onnx_pipeline.md
index efc848427..91fb21076 100644
--- a/docs/developer-guide/onnx_pipeline.md
+++ b/docs/developer-guide/onnx_pipeline.md
@@ -19,7 +19,7 @@ All Concrete ML built-in models follow the same pattern for FHE conversion:
 
 Moreover, by passing a user provided `nn.Module` to step 2 of the above process, Concrete ML supports custom user models. See the associated [FHE-friendly model documentation](../deep-learning/fhe_friendly_models.md) for instructions about working with such models.
 
-![Torch compilation flow with ONNX](../_static/compilation-pipeline/torch_to_numpy_with_onnx.svg)
+![Torch compilation flow with ONNX](../.gitbook/assets/torch_to_numpy_with_onnx.svg)
 
 Once an ONNX model is imported, it is converted to a `NumpyModule`, then to a `QuantizedModule` and, finally, to an FHE circuit. However, as the diagram shows, it is perfectly possible to stop at the `NumpyModule` level if you just want to run the PyTorch model as NumPy code without doing quantization.
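
Not part of the patch itself: for context on the pipeline that the relocated diagram illustrates, below is a minimal sketch of driving that flow through Concrete ML's `compile_torch_model` API. The `TinyMLP` architecture, the random calibration inputset, and `n_bits=3` are illustrative assumptions, not anything introduced by this change.

```python
# Illustrative only: exercises the Torch -> ONNX -> NumpyModule -> QuantizedModule
# -> FHE circuit flow depicted in torch_to_numpy_with_onnx.svg. The model, inputset,
# and bit-width below are assumptions made for this sketch.
import numpy
import torch

from concrete.ml.torch.compile import compile_torch_model


class TinyMLP(torch.nn.Module):
    """A small fully-connected network used only to demonstrate the pipeline."""

    def __init__(self):
        super().__init__()
        self.fc1 = torch.nn.Linear(10, 16)
        self.fc2 = torch.nn.Linear(16, 2)

    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x)))


# Representative inputs, used for tracing the model and calibrating quantization
inputset = numpy.random.uniform(-1, 1, size=(100, 10)).astype(numpy.float32)

# A low bit-width keeps accumulator sizes small for this post-training-quantized toy model
quantized_module = compile_torch_model(TinyMLP(), inputset, n_bits=3)

# fhe="simulate" runs the compiled circuit in simulation, without actual encryption
predictions = quantized_module.forward(inputset[:1], fhe="simulate")
```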