From d8e9e648055346a888eb53cd7b986141b9aae093 Mon Sep 17 00:00:00 2001 From: Luis Montero Date: Wed, 27 Dec 2023 13:11:59 +0100 Subject: [PATCH] docs: update api doc --- docs/developer-guide/api/README.md | 21 +- ...oncrete.ml.deployment.fhe_client_server.md | 16 +- .../api/concrete.ml.onnx.convert.md | 59 ++- .../api/concrete.ml.onnx.onnx_impl_utils.md | 44 +- ...ncrete.ml.onnx.onnx_model_manipulations.md | 18 +- .../api/concrete.ml.onnx.onnx_utils.md | 35 +- .../api/concrete.ml.onnx.ops_impl.md | 249 +++++++--- .../api/concrete.ml.pytest.torch_models.md | 458 ++++++++++++------ .../api/concrete.ml.pytest.utils.md | 25 + ...crete.ml.quantization.base_quantized_op.md | 9 +- .../concrete.ml.quantization.post_training.md | 10 +- ...ncrete.ml.quantization.quantized_module.md | 48 +- ...ml.quantization.quantized_module_passes.md | 8 +- .../concrete.ml.quantization.quantized_ops.md | 311 +++++++----- .../concrete.ml.quantization.quantizers.md | 106 ++-- .../api/concrete.ml.sklearn.base.md | 414 ++++++++-------- .../api/concrete.ml.sklearn.rf.md | 8 +- .../api/concrete.ml.sklearn.tree.md | 8 +- .../api/concrete.ml.sklearn.tree_to_numpy.md | 20 +- .../api/concrete.ml.sklearn.xgb.md | 16 +- 20 files changed, 1202 insertions(+), 681 deletions(-) diff --git a/docs/developer-guide/api/README.md b/docs/developer-guide/api/README.md index 3d4f97733..55cea26c5 100644 --- a/docs/developer-guide/api/README.md +++ b/docs/developer-guide/api/README.md @@ -67,6 +67,7 @@ - [`fhe_client_server.FHEModelServer`](./concrete.ml.deployment.fhe_client_server.md#class-fhemodelserver): Server API to load and run the FHE circuit. - [`ops_impl.ONNXMixedFunction`](./concrete.ml.onnx.ops_impl.md#class-onnxmixedfunction): A mixed quantized-raw valued onnx function. - [`ops_impl.RawOpOutput`](./concrete.ml.onnx.ops_impl.md#class-rawopoutput): Type construct that marks an ndarray as a raw output of a quantized op. +- [`torch_models.AddNet`](./concrete.ml.pytest.torch_models.md#class-addnet): Torch model that performs a simple addition between two inputs. - [`torch_models.BranchingGemmModule`](./concrete.ml.pytest.torch_models.md#class-branchinggemmmodule): Torch model with some branching and skip connections. - [`torch_models.BranchingModule`](./concrete.ml.pytest.torch_models.md#class-branchingmodule): Torch model with some branching and skip connections. - [`torch_models.CNN`](./concrete.ml.pytest.torch_models.md#class-cnn): Torch CNN model for the tests. @@ -76,14 +77,18 @@ - [`torch_models.CNNOther`](./concrete.ml.pytest.torch_models.md#class-cnnother): Torch CNN model for the tests. - [`torch_models.ConcatFancyIndexing`](./concrete.ml.pytest.torch_models.md#class-concatfancyindexing): Concat with fancy indexing. - [`torch_models.DoubleQuantQATMixNet`](./concrete.ml.pytest.torch_models.md#class-doublequantqatmixnet): Torch model that with two different quantizers on the input. +- [`torch_models.EncryptedMatrixMultiplicationModel`](./concrete.ml.pytest.torch_models.md#class-encryptedmatrixmultiplicationmodel): PyTorch module for performing matrix multiplication between two encrypted values. +- [`torch_models.ExpandModel`](./concrete.ml.pytest.torch_models.md#class-expandmodel): Minimalist network that expands the input tensor to a larger size. - [`torch_models.FC`](./concrete.ml.pytest.torch_models.md#class-fc): Torch model for the tests. - [`torch_models.FCSeq`](./concrete.ml.pytest.torch_models.md#class-fcseq): Torch model that should generate MatMul->Add ONNX patterns. 
- [`torch_models.FCSeqAddBiasVec`](./concrete.ml.pytest.torch_models.md#class-fcseqaddbiasvec): Torch model that should generate MatMul->Add ONNX patterns. - [`torch_models.FCSmall`](./concrete.ml.pytest.torch_models.md#class-fcsmall): Torch model for the tests. +- [`torch_models.ManualLogisticRegressionTraining`](./concrete.ml.pytest.torch_models.md#class-manuallogisticregressiontraining): PyTorch module for performing SGD training. - [`torch_models.MultiInputNN`](./concrete.ml.pytest.torch_models.md#class-multiinputnn): Torch model to test multiple inputs forward. - [`torch_models.MultiInputNNConfigurable`](./concrete.ml.pytest.torch_models.md#class-multiinputnnconfigurable): Torch model to test multiple inputs forward. - [`torch_models.MultiInputNNDifferentSize`](./concrete.ml.pytest.torch_models.md#class-multiinputnndifferentsize): Torch model to test multiple inputs with different shape in the forward pass. - [`torch_models.MultiOpOnSingleInputConvNN`](./concrete.ml.pytest.torch_models.md#class-multioponsingleinputconvnn): Network that applies two quantized operations on a single input. +- [`torch_models.MultiOutputModel`](./concrete.ml.pytest.torch_models.md#class-multioutputmodel): Multi-output model. - [`torch_models.NetWithConcatUnsqueeze`](./concrete.ml.pytest.torch_models.md#class-netwithconcatunsqueeze): Torch model to test the concat and unsqueeze operators. - [`torch_models.NetWithConstantsFoldedBeforeOps`](./concrete.ml.pytest.torch_models.md#class-netwithconstantsfoldedbeforeops): Torch QAT model that does not quantize the inputs. - [`torch_models.NetWithLoops`](./concrete.ml.pytest.torch_models.md#class-netwithloops): Torch model, where we reuse some elements in a loop. @@ -100,7 +105,6 @@ - [`torch_models.TinyQATCNN`](./concrete.ml.pytest.torch_models.md#class-tinyqatcnn): A very small QAT CNN to classify the sklearn digits data-set. - [`torch_models.TorchCustomModel`](./concrete.ml.pytest.torch_models.md#class-torchcustommodel): A small network with Brevitas, trained on make_classification. - [`torch_models.TorchSum`](./concrete.ml.pytest.torch_models.md#class-torchsum): Torch model to test the ReduceSum ONNX operator in a leveled circuit. -- [`torch_models.TorchSumMod`](./concrete.ml.pytest.torch_models.md#class-torchsummod): Torch model to test the ReduceSum ONNX operator in a circuit containing a PBS. - [`torch_models.UnivariateModule`](./concrete.ml.pytest.torch_models.md#class-univariatemodule): Torch model that calls univariate and shape functions of torch. - [`base_quantized_op.QuantizedMixingOp`](./concrete.ml.quantization.base_quantized_op.md#class-quantizedmixingop): An operator that mixes (adds or multiplies) together encrypted inputs. - [`base_quantized_op.QuantizedOp`](./concrete.ml.quantization.base_quantized_op.md#class-quantizedop): Base class for quantized ONNX ops implemented in numpy. @@ -126,8 +130,10 @@ - [`quantized_ops.QuantizedConv`](./concrete.ml.quantization.quantized_ops.md#class-quantizedconv): Quantized Conv op. - [`quantized_ops.QuantizedDiv`](./concrete.ml.quantization.quantized_ops.md#class-quantizeddiv): Div operator /. - [`quantized_ops.QuantizedElu`](./concrete.ml.quantization.quantized_ops.md#class-quantizedelu): Quantized Elu op. +- [`quantized_ops.QuantizedEqual`](./concrete.ml.quantization.quantized_ops.md#class-quantizedequal): Comparison operator ==. - [`quantized_ops.QuantizedErf`](./concrete.ml.quantization.quantized_ops.md#class-quantizederf): Quantized erf op. 
- [`quantized_ops.QuantizedExp`](./concrete.ml.quantization.quantized_ops.md#class-quantizedexp): Quantized Exp op. +- [`quantized_ops.QuantizedExpand`](./concrete.ml.quantization.quantized_ops.md#class-quantizedexpand): Expand operator for quantized tensors. - [`quantized_ops.QuantizedFlatten`](./concrete.ml.quantization.quantized_ops.md#class-quantizedflatten): Quantized flatten for encrypted inputs. - [`quantized_ops.QuantizedFloor`](./concrete.ml.quantization.quantized_ops.md#class-quantizedfloor): Quantized Floor op. - [`quantized_ops.QuantizedGemm`](./concrete.ml.quantization.quantized_ops.md#class-quantizedgemm): Quantized Gemm op. @@ -262,19 +268,23 @@ - [`utils.wait_for_connection_to_be_available`](./concrete.ml.deployment.utils.md#function-wait_for_connection_to_be_available): Wait for connection to be available. - [`convert.fuse_matmul_bias_to_gemm`](./concrete.ml.onnx.convert.md#function-fuse_matmul_bias_to_gemm): Fuse sequence of matmul -> add into a gemm node. - [`convert.get_equivalent_numpy_forward_from_onnx`](./concrete.ml.onnx.convert.md#function-get_equivalent_numpy_forward_from_onnx): Get the numpy equivalent forward of the provided ONNX model. +- [`convert.get_equivalent_numpy_forward_from_onnx_tree`](./concrete.ml.onnx.convert.md#function-get_equivalent_numpy_forward_from_onnx_tree): Get the numpy equivalent forward of the provided ONNX model for tree-based models only. - [`convert.get_equivalent_numpy_forward_from_torch`](./concrete.ml.onnx.convert.md#function-get_equivalent_numpy_forward_from_torch): Get the numpy equivalent forward of the provided torch Module. +- [`convert.preprocess_onnx_model`](./concrete.ml.onnx.convert.md#function-preprocess_onnx_model): Get the numpy equivalent forward of the provided ONNX model. - [`onnx_impl_utils.compute_conv_output_dims`](./concrete.ml.onnx.onnx_impl_utils.md#function-compute_conv_output_dims): Compute the output shape of a pool or conv operation. - [`onnx_impl_utils.compute_onnx_pool_padding`](./concrete.ml.onnx.onnx_impl_utils.md#function-compute_onnx_pool_padding): Compute any additional padding needed to compute pooling layers. - [`onnx_impl_utils.numpy_onnx_pad`](./concrete.ml.onnx.onnx_impl_utils.md#function-numpy_onnx_pad): Pad a tensor according to ONNX spec, using an optional custom pad value. - [`onnx_impl_utils.onnx_avgpool_compute_norm_const`](./concrete.ml.onnx.onnx_impl_utils.md#function-onnx_avgpool_compute_norm_const): Compute the average pooling normalization constant. -- [`onnx_model_manipulations.clean_graph_after_node_op_type`](./concrete.ml.onnx.onnx_model_manipulations.md#function-clean_graph_after_node_op_type): Clean the graph of the onnx model by removing nodes after the given node type. -- [`onnx_model_manipulations.clean_graph_at_node_op_type`](./concrete.ml.onnx.onnx_model_manipulations.md#function-clean_graph_at_node_op_type): Clean the graph of the onnx model by removing nodes at the given node type. +- [`onnx_impl_utils.rounded_comparison`](./concrete.ml.onnx.onnx_impl_utils.md#function-rounded_comparison): Comparison operation using `round_bit_pattern` function. +- [`onnx_model_manipulations.clean_graph_after_node_op_type`](./concrete.ml.onnx.onnx_model_manipulations.md#function-clean_graph_after_node_op_type): Remove the nodes following first node matching node_op_type from the ONNX graph. 
+- [`onnx_model_manipulations.clean_graph_at_node_op_type`](./concrete.ml.onnx.onnx_model_manipulations.md#function-clean_graph_at_node_op_type): Remove the first node matching node_op_type and its following nodes from the ONNX graph. - [`onnx_model_manipulations.keep_following_outputs_discard_others`](./concrete.ml.onnx.onnx_model_manipulations.md#function-keep_following_outputs_discard_others): Keep the outputs given in outputs_to_keep and remove the others from the model. - [`onnx_model_manipulations.remove_identity_nodes`](./concrete.ml.onnx.onnx_model_manipulations.md#function-remove_identity_nodes): Remove identity nodes from a model. - [`onnx_model_manipulations.remove_node_types`](./concrete.ml.onnx.onnx_model_manipulations.md#function-remove_node_types): Remove unnecessary nodes from the ONNX graph. - [`onnx_model_manipulations.remove_unused_constant_nodes`](./concrete.ml.onnx.onnx_model_manipulations.md#function-remove_unused_constant_nodes): Remove unused Constant nodes in the provided onnx model. - [`onnx_model_manipulations.simplify_onnx_model`](./concrete.ml.onnx.onnx_model_manipulations.md#function-simplify_onnx_model): Simplify an ONNX model, removes unused Constant nodes and Identity nodes. - [`onnx_utils.execute_onnx_with_numpy`](./concrete.ml.onnx.onnx_utils.md#function-execute_onnx_with_numpy): Execute the provided ONNX graph on the given inputs. +- [`onnx_utils.execute_onnx_with_numpy_trees`](./concrete.ml.onnx.onnx_utils.md#function-execute_onnx_with_numpy_trees): Execute the provided ONNX graph on the given inputs for tree-based models only. - [`onnx_utils.get_attribute`](./concrete.ml.onnx.onnx_utils.md#function-get_attribute): Get the attribute from an ONNX AttributeProto. - [`onnx_utils.get_op_type`](./concrete.ml.onnx.onnx_utils.md#function-get_op_type): Construct the qualified type name of the ONNX operator. - [`onnx_utils.remove_initializer_from_input`](./concrete.ml.onnx.onnx_utils.md#function-remove_initializer_from_input): Remove initializers from model inputs. @@ -299,6 +309,7 @@ - [`ops_impl.numpy_div`](./concrete.ml.onnx.ops_impl.md#function-numpy_div): Compute div in numpy according to ONNX spec. - [`ops_impl.numpy_elu`](./concrete.ml.onnx.ops_impl.md#function-numpy_elu): Compute elu in numpy according to ONNX spec. - [`ops_impl.numpy_equal`](./concrete.ml.onnx.ops_impl.md#function-numpy_equal): Compute equal in numpy according to ONNX spec. +- [`ops_impl.numpy_equal_float`](./concrete.ml.onnx.ops_impl.md#function-numpy_equal_float): Compute equal in numpy according to ONNX spec and cast outputs to floats. - [`ops_impl.numpy_erf`](./concrete.ml.onnx.ops_impl.md#function-numpy_erf): Compute erf in numpy according to ONNX spec. - [`ops_impl.numpy_exp`](./concrete.ml.onnx.ops_impl.md#function-numpy_exp): Compute exponential in numpy according to ONNX spec. - [`ops_impl.numpy_flatten`](./concrete.ml.onnx.ops_impl.md#function-numpy_flatten): Flatten a tensor into a 2d array. @@ -345,8 +356,12 @@ - [`ops_impl.numpy_where`](./concrete.ml.onnx.ops_impl.md#function-numpy_where): Compute the equivalent of numpy.where. - [`ops_impl.numpy_where_body`](./concrete.ml.onnx.ops_impl.md#function-numpy_where_body): Compute the equivalent of numpy.where. - [`ops_impl.onnx_func_raw_args`](./concrete.ml.onnx.ops_impl.md#function-onnx_func_raw_args): Decorate a numpy onnx function to flag the raw/non quantized inputs. 
+- [`ops_impl.rounded_numpy_equal_for_trees`](./concrete.ml.onnx.ops_impl.md#function-rounded_numpy_equal_for_trees): Compute rounded equal in numpy according to ONNX spec for tree-based models only. +- [`ops_impl.rounded_numpy_less_for_trees`](./concrete.ml.onnx.ops_impl.md#function-rounded_numpy_less_for_trees): Compute rounded less in numpy according to ONNX spec for tree-based models only. +- [`ops_impl.rounded_numpy_less_or_equal_for_trees`](./concrete.ml.onnx.ops_impl.md#function-rounded_numpy_less_or_equal_for_trees): Compute rounded less or equal in numpy according to ONNX spec for tree-based models only. - [`utils.check_serialization`](./concrete.ml.pytest.utils.md#function-check_serialization): Check that the given object can properly be serialized. - [`utils.data_calibration_processing`](./concrete.ml.pytest.utils.md#function-data_calibration_processing): Reduce size of the given data-set. +- [`utils.get_random_samples`](./concrete.ml.pytest.utils.md#function-get_random_samples): Select `n_sample` random elements from a 2D NumPy array. - [`utils.get_sklearn_all_models_and_datasets`](./concrete.ml.pytest.utils.md#function-get_sklearn_all_models_and_datasets): Get the pytest parameters to use for testing all models available in Concrete ML. - [`utils.get_sklearn_linear_models_and_datasets`](./concrete.ml.pytest.utils.md#function-get_sklearn_linear_models_and_datasets): Get the pytest parameters to use for testing linear models. - [`utils.get_sklearn_neighbors_models_and_datasets`](./concrete.ml.pytest.utils.md#function-get_sklearn_neighbors_models_and_datasets): Get the pytest parameters to use for testing neighbor models. diff --git a/docs/developer-guide/api/concrete.ml.deployment.fhe_client_server.md b/docs/developer-guide/api/concrete.ml.deployment.fhe_client_server.md index 65f2aa8d8..812acc191 100644 --- a/docs/developer-guide/api/concrete.ml.deployment.fhe_client_server.md +++ b/docs/developer-guide/api/concrete.ml.deployment.fhe_client_server.md @@ -135,13 +135,13 @@ Export all needed artifacts for the client and server. ______________________________________________________________________ - + ## class `FHEModelClient` Client API to encrypt and decrypt FHE data. - + ### method `__init__` @@ -158,7 +158,7 @@ Initialize the FHE API. ______________________________________________________________________ - + ### method `deserialize_decrypt` @@ -178,7 +178,7 @@ Deserialize and decrypt the values. ______________________________________________________________________ - + ### method `deserialize_decrypt_dequantize` @@ -200,7 +200,7 @@ Deserialize, decrypt and de-quantize the values. ______________________________________________________________________ - + ### method `generate_private_and_evaluation_keys` @@ -216,7 +216,7 @@ Generate the private and evaluation keys. ______________________________________________________________________ - + ### method `get_serialized_evaluation_keys` @@ -232,7 +232,7 @@ Get the serialized evaluation keys. ______________________________________________________________________ - + ### method `load` @@ -244,7 +244,7 @@ Load the quantizers along with the FHE specs. 
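The client methods documented in this class, together with the `quantize_encrypt_serialize` method below and the `FHEModelServer` API, cover a complete encrypted inference round trip. The following sketch shows one plausible way to chain them; the artifact directory, key directory, input shape and the exact `FHEModelServer.run` argument order are illustrative assumptions rather than excerpts from this page.

```python
import numpy

from concrete.ml.deployment.fhe_client_server import FHEModelClient, FHEModelServer

# Assumption: deployment artifacts were previously exported to this directory.
fhe_directory = "./fhe_artifacts"

# Client side: load the quantizers/specs, generate keys and encrypt one input.
client = FHEModelClient(fhe_directory, key_dir="./keys")
client.generate_private_and_evaluation_keys()
serialized_evaluation_keys = client.get_serialized_evaluation_keys()
x = numpy.random.rand(1, 10)  # Illustrative input shape
serialized_encrypted_input = client.quantize_encrypt_serialize(x)

# Server side: load the FHE circuit and run it on the encrypted payload.
server = FHEModelServer(fhe_directory)
server.load()
serialized_encrypted_result = server.run(
    serialized_encrypted_input, serialized_evaluation_keys
)

# Back on the client: decrypt and de-quantize the result.
y_pred = client.deserialize_decrypt_dequantize(serialized_encrypted_result)
```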
______________________________________________________________________ - + ### method `quantize_encrypt_serialize` diff --git a/docs/developer-guide/api/concrete.ml.onnx.convert.md b/docs/developer-guide/api/concrete.ml.onnx.convert.md index 53a0c820a..a6e46074d 100644 --- a/docs/developer-guide/api/concrete.ml.onnx.convert.md +++ b/docs/developer-guide/api/concrete.ml.onnx.convert.md @@ -13,7 +13,7 @@ ONNX conversion related code. ______________________________________________________________________ - + ## function `fuse_matmul_bias_to_gemm` @@ -33,7 +33,7 @@ Fuse sequence of matmul -> add into a gemm node. ______________________________________________________________________ - + ## function `get_equivalent_numpy_forward_from_torch` @@ -59,7 +59,32 @@ Get the numpy equivalent forward of the provided torch Module. ______________________________________________________________________ - + + +## function `preprocess_onnx_model` + +```python +preprocess_onnx_model(onnx_model: ModelProto, check_model: bool) → ModelProto +``` + +Get the numpy equivalent forward of the provided ONNX model. + +**Args:** + +- `onnx_model` (onnx.ModelProto): the ONNX model for which to get the equivalent numpy forward. +- `check_model` (bool): set to True to run the onnx checker on the model. Defaults to True. + +**Raises:** + +- `ValueError`: Raised if there is an unsupported ONNX operator required to convert the torch model to numpy. + +**Returns:** + +- `onnx.ModelProto`: The preprocessed ONNX model. + +______________________________________________________________________ + + ## function `get_equivalent_numpy_forward_from_onnx` @@ -77,10 +102,32 @@ Get the numpy equivalent forward of the provided ONNX model. - `onnx_model` (onnx.ModelProto): the ONNX model for which to get the equivalent numpy forward. - `check_model` (bool): set to True to run the onnx checker on the model. Defaults to True. -**Raises:** +**Returns:** -- `ValueError`: Raised if there is an unsupported ONNX operator required to convert the torch model to numpy. +- `Callable[..., Tuple[numpy.ndarray, ...]]`: The function that will execute the equivalent numpy function. + +______________________________________________________________________ + + + +## function `get_equivalent_numpy_forward_from_onnx_tree` + +```python +get_equivalent_numpy_forward_from_onnx_tree( + onnx_model: ModelProto, + check_model: bool = True, + lsbs_to_remove_for_trees: Optional[Tuple[int, int]] = None +) → Tuple[Callable[, Tuple[ndarray, ]], ModelProto] +``` + +Get the numpy equivalent forward of the provided ONNX model for tree-based models only. + +**Args:** + +- `onnx_model` (onnx.ModelProto): the ONNX model for which to get the equivalent numpy forward. +- `check_model` (bool): set to True to run the onnx checker on the model. Defaults to True. +- `lsbs_to_remove_for_trees` (Optional\[Tuple\[int, int\]\]): This parameter is exclusively used for optimizing tree-based models. It contains the values of the least significant bits to remove during the tree traversal, where the first value refers to the first comparison (either "less" or "less_or_equal"), while the second value refers to the "Equal" comparison operation. Default to None, as it is not applicable to other types of models. **Returns:** -- `Callable[..., Tuple[numpy.ndarray, ...]]`: The function that will execute the equivalent numpy function. +- `Tuple[Callable[..., Tuple[numpy.ndarray, ...]], onnx.ModelProto]`: The function that will execute the equivalent numpy function. 
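For orientation, here is a hedged sketch of how the conversion entry points documented above might be called. The ONNX file name, the input shape and the `lsbs_to_remove_for_trees` values are illustrative assumptions; the return values follow the docstrings on this page.

```python
import numpy
import onnx

from concrete.ml.onnx.convert import (
    get_equivalent_numpy_forward_from_onnx,
    get_equivalent_numpy_forward_from_onnx_tree,
)

onnx_model = onnx.load("model.onnx")  # Illustrative file name

# Generic path: build a callable that evaluates the graph with numpy.
numpy_forward = get_equivalent_numpy_forward_from_onnx(onnx_model)

# Tree-specific path: also pass the number of LSBs to remove for the
# (less / less_or_equal, equal) comparisons used during tree traversal.
numpy_forward_tree, preprocessed_model = get_equivalent_numpy_forward_from_onnx_tree(
    onnx_model, lsbs_to_remove_for_trees=(4, 2)  # Illustrative rounding values
)

x = numpy.random.rand(1, 10).astype(numpy.float32)  # Illustrative input
outputs = numpy_forward(x)
```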
diff --git a/docs/developer-guide/api/concrete.ml.onnx.onnx_impl_utils.md b/docs/developer-guide/api/concrete.ml.onnx.onnx_impl_utils.md index e9defca19..77d914870 100644 --- a/docs/developer-guide/api/concrete.ml.onnx.onnx_impl_utils.md +++ b/docs/developer-guide/api/concrete.ml.onnx.onnx_impl_utils.md @@ -8,7 +8,7 @@ Utility functions for onnx operator implementations. ______________________________________________________________________ - + ## function `numpy_onnx_pad` @@ -18,7 +18,7 @@ numpy_onnx_pad( pads: Tuple[int, ], pad_value: Union[float, int, ndarray] = 0, int_only: bool = False -) → ndarray +) → Union[ndarray, Tracer] ``` Pad a tensor according to ONNX spec, using an optional custom pad value. @@ -36,7 +36,7 @@ Pad a tensor according to ONNX spec, using an optional custom pad value. ______________________________________________________________________ - + ## function `compute_conv_output_dims` @@ -68,7 +68,7 @@ See https://pytorch.org/docs/stable/generated/torch.nn.AvgPool2d.html for detail ______________________________________________________________________ - + ## function `compute_onnx_pool_padding` @@ -100,7 +100,7 @@ The ONNX standard uses ceil_mode=1 to match TensorFlow style pooling output comp ______________________________________________________________________ - + ## function `onnx_avgpool_compute_norm_const` @@ -111,7 +111,7 @@ onnx_avgpool_compute_norm_const( pads: Tuple[int, ], strides: Tuple[int, ], ceil_mode: int -) → Union[ndarray, float] +) → Union[ndarray, float, Tracer] ``` Compute the average pooling normalization constant. @@ -129,3 +129,35 @@ This constant can be a tensor of the same shape as the input or a scalar. **Returns:** - `res` (float): tensor or scalar, corresponding to normalization factors to apply for the average pool computation for each valid kernel position + +______________________________________________________________________ + + + +## function `rounded_comparison` + +```python +rounded_comparison( + x: ndarray, + y: ndarray, + lsbs_to_remove: int, + operation: Callable[[int], bool] +) → Tuple[bool] +``` + +Comparison operation using `round_bit_pattern` function. + +`round_bit_pattern` rounds the bit pattern of an integer to the closer It also checks for any potential overflow. If so, it readjusts the LSBs accordingly. + +The parameter `lsbs_to_remove` in `round_bit_pattern` can either be an integer specifying the number of LSBS to remove, or an `AutoRounder` object that determines the required number of LSBs based on the specified number of MSBs to retain. But in our case, we choose to compute the LSBs manually. + +**Args:** + +- `x` (numpy.ndarray): Input tensor +- `y` (numpy.ndarray): Input tensor +- `lsbs_to_remove` (int): Number of the least significant bits to remove +- `operation` (ComparisonOperationType): Comparison operation, which can `<`, `<=` and `==` + +**Returns:** + +- `Tuple[bool]`: If x and y satisfy the comparison operator. diff --git a/docs/developer-guide/api/concrete.ml.onnx.onnx_model_manipulations.md b/docs/developer-guide/api/concrete.ml.onnx.onnx_model_manipulations.md index e759d391e..cae6838ef 100644 --- a/docs/developer-guide/api/concrete.ml.onnx.onnx_model_manipulations.md +++ b/docs/developer-guide/api/concrete.ml.onnx.onnx_model_manipulations.md @@ -109,23 +109,21 @@ clean_graph_at_node_op_type( ) ``` -Clean the graph of the onnx model by removing nodes at the given node type. - -Note: the specified node_type is also removed. 
+Remove the first node matching node_op_type and its following nodes from the ONNX graph. **Args:** - `onnx_model` (onnx.ModelProto): The onnx model. -- `node_op_type` (str): The node's op_type whose following nodes will be removed. -- `fail_if_not_found` (bool): If true, abort if the node op_type is not found +- `node_op_type` (str): The node's op_type whose following nodes as well as itself will be removed. +- `fail_if_not_found` (bool): If true, raise an error if no node matched the given op_type. **Raises:** -- `ValueError`: if fail_if_not_found is set +- `ValueError`: If no node matched the given op_type and fail_if_not_found is set to True ______________________________________________________________________ - + ## function `clean_graph_after_node_op_type` @@ -137,14 +135,14 @@ clean_graph_after_node_op_type( ) ``` -Clean the graph of the onnx model by removing nodes after the given node type. +Remove the nodes following first node matching node_op_type from the ONNX graph. **Args:** - `onnx_model` (onnx.ModelProto): The onnx model. - `node_op_type` (str): The node's op_type whose following nodes will be removed. -- `fail_if_not_found` (bool): If true, abort if the node op_type is not found +- `fail_if_not_found` (bool): If true, raise an error if no node matched the given op_type. **Raises:** -- `ValueError`: if the node op_type is not found and if fail_if_not_found is set +- `ValueError`: If no node matched the given op_type and fail_if_not_found is set to True diff --git a/docs/developer-guide/api/concrete.ml.onnx.onnx_utils.md b/docs/developer-guide/api/concrete.ml.onnx.onnx_utils.md index c9666fee7..81b8872e3 100644 --- a/docs/developer-guide/api/concrete.ml.onnx.onnx_utils.md +++ b/docs/developer-guide/api/concrete.ml.onnx.onnx_utils.md @@ -13,12 +13,13 @@ Utils to interpret an ONNX model with numpy. - **ONNX_OPS_TO_NUMPY_IMPL** - **ONNX_COMPARISON_OPS_TO_NUMPY_IMPL_FLOAT** - **ONNX_COMPARISON_OPS_TO_NUMPY_IMPL_BOOL** +- **ONNX_COMPARISON_OPS_TO_ROUNDED_TREES_NUMPY_IMPL_BOOL** - **ONNX_OPS_TO_NUMPY_IMPL_BOOL** - **IMPLEMENTED_ONNX_OPS** ______________________________________________________________________ - + ## function `get_attribute` @@ -38,7 +39,7 @@ Get the attribute from an ONNX AttributeProto. ______________________________________________________________________ - + ## function `get_op_type` @@ -58,7 +59,7 @@ Construct the qualified type name of the ONNX operator. ______________________________________________________________________ - + ## function `execute_onnx_with_numpy` @@ -79,7 +80,33 @@ Execute the provided ONNX graph on the given inputs. ______________________________________________________________________ - + + +## function `execute_onnx_with_numpy_trees` + +```python +execute_onnx_with_numpy_trees( + graph: GraphProto, + lsbs_to_remove_for_trees: Optional[Tuple[int, int]], + *inputs: ndarray +) → Tuple[ndarray, ] +``` + +Execute the provided ONNX graph on the given inputs for tree-based models only. + +**Args:** + +- `graph` (onnx.GraphProto): The ONNX graph to execute. +- `lsbs_to_remove_for_trees` (Optional\[Tuple\[int, int\]\]): This parameter is exclusively used for optimizing tree-based models. It contains the values of the least significant bits to remove during the tree traversal, where the first value refers to the first comparison (either "less" or "less_or_equal"), while the second value refers to the "Equal" comparison operation. Default to None. +- `*inputs`: The inputs of the graph. 
+ +**Returns:** + +- `Tuple[numpy.ndarray]`: The result of the graph's execution. + +______________________________________________________________________ + + ## function `remove_initializer_from_input` diff --git a/docs/developer-guide/api/concrete.ml.onnx.ops_impl.md b/docs/developer-guide/api/concrete.ml.onnx.ops_impl.md index b1e003e0e..2278ee8d7 100644 --- a/docs/developer-guide/api/concrete.ml.onnx.ops_impl.md +++ b/docs/developer-guide/api/concrete.ml.onnx.ops_impl.md @@ -8,7 +8,7 @@ ONNX ops implementation in Python + NumPy. ______________________________________________________________________ - + ## function `cast_to_float` @@ -28,7 +28,7 @@ Cast values to floating points. ______________________________________________________________________ - + ## function `onnx_func_raw_args` @@ -49,7 +49,7 @@ Decorate a numpy onnx function to flag the raw/non quantized inputs. ______________________________________________________________________ - + ## function `numpy_where_body` @@ -73,7 +73,7 @@ This function is not mapped to any ONNX operator (as opposed to numpy_where). It ______________________________________________________________________ - + ## function `numpy_where` @@ -95,7 +95,7 @@ Compute the equivalent of numpy.where. ______________________________________________________________________ - + ## function `numpy_add` @@ -118,7 +118,7 @@ See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Add-13 ______________________________________________________________________ - + ## function `numpy_constant` @@ -140,7 +140,7 @@ See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Constant-13 ______________________________________________________________________ - + ## function `numpy_gemm` @@ -176,7 +176,7 @@ See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Gemm-13 ______________________________________________________________________ - + ## function `numpy_matmul` @@ -199,7 +199,7 @@ See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#MatMul-13 ______________________________________________________________________ - + ## function `numpy_relu` @@ -221,7 +221,7 @@ See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Relu-14 ______________________________________________________________________ - + ## function `numpy_sigmoid` @@ -243,7 +243,7 @@ See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Sigmoid-13 ______________________________________________________________________ - + ## function `numpy_softmax` @@ -269,7 +269,7 @@ See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#softmax-13 ______________________________________________________________________ - + ## function `numpy_cos` @@ -291,7 +291,7 @@ See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Cos-7 ______________________________________________________________________ - + ## function `numpy_cosh` @@ -313,7 +313,7 @@ See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Cosh-9 ______________________________________________________________________ - + ## function `numpy_sin` @@ -335,7 +335,7 @@ See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Sin-7 ______________________________________________________________________ - + ## function `numpy_sinh` @@ -357,7 +357,7 @@ See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Sinh-9 ______________________________________________________________________ - + ## function `numpy_tan` @@ -379,7 +379,7 @@ See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Tan-7 
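A general note on how the `numpy_*` implementations in this file are consumed: the graph executors documented in `concrete.ml.onnx.onnx_utils` (`execute_onnx_with_numpy` and `execute_onnx_with_numpy_trees`) walk the ONNX graph node by node and dispatch each node to one of these functions. The sketch below illustrates that dispatch under simplifying assumptions (node attributes are ignored and `op_table` stands in for the `ONNX_OPS_TO_NUMPY_IMPL` mapping); it is not the library implementation.

```python
import onnx
from onnx import numpy_helper


def execute_graph_sketch(graph: onnx.GraphProto, op_table: dict, *inputs):
    """Minimal sketch of node-by-node graph execution with numpy implementations."""
    # Seed the value map with constants (initializers) and the graph inputs.
    values = {init.name: numpy_helper.to_array(init) for init in graph.initializer}
    values.update(zip((inp.name for inp in graph.input), inputs))

    # Evaluate nodes in order, looking up an implementation such as
    # numpy_gemm or numpy_relu for each op type (attributes omitted here).
    for node in graph.node:
        args = [values[name] for name in node.input]
        outputs = op_table[node.op_type](*args)
        values.update(zip(node.output, outputs))

    return tuple(values[output.name] for output in graph.output)
```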
______________________________________________________________________ - + ## function `numpy_tanh` @@ -401,7 +401,7 @@ See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Tanh-13 ______________________________________________________________________ - + ## function `numpy_acos` @@ -423,7 +423,7 @@ See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Acos-7 ______________________________________________________________________ - + ## function `numpy_acosh` @@ -445,7 +445,7 @@ See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Acosh-9 ______________________________________________________________________ - + ## function `numpy_asin` @@ -467,7 +467,7 @@ See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Asin-7 ______________________________________________________________________ - + ## function `numpy_asinh` @@ -489,7 +489,7 @@ See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Asinh-9 ______________________________________________________________________ - + ## function `numpy_atan` @@ -511,7 +511,7 @@ See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Atan-7 ______________________________________________________________________ - + ## function `numpy_atanh` @@ -533,7 +533,7 @@ See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Atanh-9 ______________________________________________________________________ - + ## function `numpy_elu` @@ -556,7 +556,7 @@ See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Elu-6 ______________________________________________________________________ - + ## function `numpy_selu` @@ -584,7 +584,7 @@ See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Selu-6 ______________________________________________________________________ - + ## function `numpy_celu` @@ -607,7 +607,7 @@ See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Celu-12 ______________________________________________________________________ - + ## function `numpy_leakyrelu` @@ -630,7 +630,7 @@ See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#LeakyRelu-6 ______________________________________________________________________ - + ## function `numpy_thresholdedrelu` @@ -653,7 +653,7 @@ See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#ThresholdedRelu-10 ______________________________________________________________________ - + ## function `numpy_hardsigmoid` @@ -681,7 +681,7 @@ See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#HardSigmoid-6 ______________________________________________________________________ - + ## function `numpy_softplus` @@ -703,7 +703,7 @@ See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Softplus-1 ______________________________________________________________________ - + ## function `numpy_abs` @@ -725,7 +725,7 @@ See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Abs-13 ______________________________________________________________________ - + ## function `numpy_div` @@ -748,7 +748,7 @@ See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Div-14 ______________________________________________________________________ - + ## function `numpy_mul` @@ -771,7 +771,7 @@ See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Mul-14 ______________________________________________________________________ - + ## function `numpy_sub` @@ -794,7 +794,7 @@ See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Sub-14 ______________________________________________________________________ - + ## function `numpy_log` @@ -816,7 
+816,7 @@ See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Log-13 ______________________________________________________________________ - + ## function `numpy_erf` @@ -838,7 +838,7 @@ See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Erf-13 ______________________________________________________________________ - + ## function `numpy_hardswish` @@ -860,7 +860,7 @@ See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#hardswish-14 ______________________________________________________________________ - + ## function `numpy_exp` @@ -882,7 +882,7 @@ See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Exp-13 ______________________________________________________________________ - + ## function `numpy_equal` @@ -905,7 +905,58 @@ See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Equal-11 ______________________________________________________________________ - + + +## function `rounded_numpy_equal_for_trees` + +```python +rounded_numpy_equal_for_trees( + x: ndarray, + y: ndarray, + lsbs_to_remove_for_trees: Optional[int] = None +) → Tuple[ndarray] +``` + +Compute rounded equal in numpy according to ONNX spec for tree-based models only. + +See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Equal-11 + +**Args:** + +- `x` (numpy.ndarray): Input tensor +- `y` (numpy.ndarray): Input tensor +- `lsbs_to_remove_for_trees` (Optional\[int\]): Number of the least significant bits to remove for tree-based models only. + +**Returns:** + +- `Tuple[numpy.ndarray]`: Output tensor + +______________________________________________________________________ + + + +## function `numpy_equal_float` + +```python +numpy_equal_float(x: ndarray, y: ndarray) → Tuple[ndarray] +``` + +Compute equal in numpy according to ONNX spec and cast outputs to floats. 
+ +See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Equal-13 + +**Args:** + +- `x` (numpy.ndarray): Input tensor +- `y` (numpy.ndarray): Input tensor + +**Returns:** + +- `Tuple[numpy.ndarray]`: Output tensor + +______________________________________________________________________ + + ## function `numpy_not` @@ -927,7 +978,7 @@ See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Not-1 ______________________________________________________________________ - + ## function `numpy_not_float` @@ -949,7 +1000,7 @@ See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Not-1 ______________________________________________________________________ - + ## function `numpy_greater` @@ -972,7 +1023,7 @@ See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Greater-13 ______________________________________________________________________ - + ## function `numpy_greater_float` @@ -995,7 +1046,7 @@ See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Greater-13 ______________________________________________________________________ - + ## function `numpy_greater_or_equal` @@ -1018,7 +1069,7 @@ See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#GreaterOrEqual-12 ______________________________________________________________________ - + ## function `numpy_greater_or_equal_float` @@ -1041,7 +1092,7 @@ See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#GreaterOrEqual-12 ______________________________________________________________________ - + ## function `numpy_less` @@ -1064,7 +1115,35 @@ See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Less-13 ______________________________________________________________________ - + + +## function `rounded_numpy_less_for_trees` + +```python +rounded_numpy_less_for_trees( + x: ndarray, + y: ndarray, + lsbs_to_remove_for_trees: Optional[int] = None +) → Tuple[ndarray] +``` + +Compute rounded less in numpy according to ONNX spec for tree-based models only. + +See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Less-13 + +**Args:** + +- `x` (numpy.ndarray): Input tensor +- `y` (numpy.ndarray): Input tensor +- `lsbs_to_remove_for_trees` (Optional\[int\]): Number of the least significant bits to remove for tree-based models only. + +**Returns:** + +- `Tuple[numpy.ndarray]`: Output tensor + +______________________________________________________________________ + + ## function `numpy_less_float` @@ -1087,7 +1166,7 @@ See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Less-13 ______________________________________________________________________ - + ## function `numpy_less_or_equal` @@ -1110,7 +1189,35 @@ See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#LessOrEqual-12 ______________________________________________________________________ - + + +## function `rounded_numpy_less_or_equal_for_trees` + +```python +rounded_numpy_less_or_equal_for_trees( + x: ndarray, + y: ndarray, + lsbs_to_remove_for_trees: Optional[int] = None +) → Tuple[ndarray] +``` + +Compute rounded less or equal in numpy according to ONNX spec for tree-based models only. + +See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#LessOrEqual-12 + +**Args:** + +- `x` (numpy.ndarray): Input tensor +- `y` (numpy.ndarray): Input tensor +- `lsbs_to_remove_for_trees` (Optional\[int\]): Number of the least significant bits to remove for tree-based models only. 
+ +**Returns:** + +- `Tuple[numpy.ndarray]`: Output tensor + +______________________________________________________________________ + + ## function `numpy_less_or_equal_float` @@ -1133,7 +1240,7 @@ See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#LessOrEqual-12 ______________________________________________________________________ - + ## function `numpy_identity` @@ -1155,7 +1262,7 @@ See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Identity-14 ______________________________________________________________________ - + ## function `numpy_transpose` @@ -1178,7 +1285,7 @@ See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Transpose-13 ______________________________________________________________________ - + ## function `numpy_conv` @@ -1219,7 +1326,7 @@ See: https://github.com/onnx/onnx/blob/main/docs/Operators.md#Conv ______________________________________________________________________ - + ## function `numpy_avgpool` @@ -1258,7 +1365,7 @@ See: https://github.com/onnx/onnx/blob/main/docs/Operators.md#AveragePool ______________________________________________________________________ - + ## function `numpy_maxpool` @@ -1299,7 +1406,7 @@ See: https://github.com/onnx/onnx/blob/main/docs/Operators.md#MaxPool ______________________________________________________________________ - + ## function `numpy_cast` @@ -1324,7 +1431,7 @@ See: https://github.com/onnx/onnx/blob/main/docs/Operators.md#Cast ______________________________________________________________________ - + ## function `numpy_batchnorm` @@ -1362,11 +1469,11 @@ See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#BatchNormalization- **Returns:** -- `numpy.ndarray`: Normalized tensor +- `numpy.ndarray`: Normalized tenso ______________________________________________________________________ - + ## function `numpy_flatten` @@ -1389,7 +1496,7 @@ See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Flatten-13. 
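Stepping back from the individual operators: the `rounded_numpy_equal_for_trees`, `rounded_numpy_less_for_trees` and `rounded_numpy_less_or_equal_for_trees` helpers documented in this file rely on the same idea as `rounded_comparison` in `concrete.ml.onnx.onnx_impl_utils`: drop a number of least significant bits before comparing, so that tree traversal can run on smaller integers. The clear-text sketch below emulates that rounding in plain numpy to illustrate the idea; it is an assumption about the mechanics, not the library's `round_bit_pattern`-based implementation.

```python
import numpy


def rounded_less_sketch(x, y, lsbs_to_remove_for_trees=None):
    """Illustrative clear-text analogue of a rounded 'less' comparison."""
    # Without rounding, fall back to the plain comparison.
    if not lsbs_to_remove_for_trees:
        return (numpy.less(x, y),)

    # Emulate removing the least significant bits of the difference before
    # testing its sign (in FHE this is done with round_bit_pattern).
    scale = 2 ** lsbs_to_remove_for_trees
    rounded_diff = numpy.rint((x - y) / scale) * scale
    return (rounded_diff < 0,)
```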
______________________________________________________________________ - + ## function `numpy_or` @@ -1412,7 +1519,7 @@ See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Or-7 ______________________________________________________________________ - + ## function `numpy_or_float` @@ -1435,7 +1542,7 @@ See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Or-7 ______________________________________________________________________ - + ## function `numpy_round` @@ -1457,7 +1564,7 @@ See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Round-11 Remark tha ______________________________________________________________________ - + ## function `numpy_pow` @@ -1480,7 +1587,7 @@ See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Pow-13 ______________________________________________________________________ - + ## function `numpy_floor` @@ -1502,7 +1609,7 @@ See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Floor-1 ______________________________________________________________________ - + ## function `numpy_max` @@ -1527,7 +1634,7 @@ See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Max-1 ______________________________________________________________________ - + ## function `numpy_min` @@ -1552,7 +1659,7 @@ See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Max-1 ______________________________________________________________________ - + ## function `numpy_sign` @@ -1574,7 +1681,7 @@ See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Sign-9 ______________________________________________________________________ - + ## function `numpy_neg` @@ -1596,7 +1703,7 @@ See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Sign-9 ______________________________________________________________________ - + ## function `numpy_concatenate` @@ -1619,7 +1726,7 @@ See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#concat-13 ______________________________________________________________________ - + ## class `RawOpOutput` @@ -1627,7 +1734,7 @@ Type construct that marks an ndarray as a raw output of a quantized op. ______________________________________________________________________ - + ## class `ONNXMixedFunction` @@ -1635,7 +1742,7 @@ A mixed quantized-raw valued onnx function. ONNX functions will take inputs which can be either quantized or float. Some functions only take quantized inputs, but some functions take both types. For mixed functions we need to tag the parameters that do not need quantization. Thus quantized ops can know which inputs are not QuantizedArray and we avoid unnecessary wrapping of float values as QuantizedArrays. - + ### method `__init__` diff --git a/docs/developer-guide/api/concrete.ml.pytest.torch_models.md b/docs/developer-guide/api/concrete.ml.pytest.torch_models.md index 1c39891ad..1d1687e92 100644 --- a/docs/developer-guide/api/concrete.ml.pytest.torch_models.md +++ b/docs/developer-guide/api/concrete.ml.pytest.torch_models.md @@ -10,11 +10,50 @@ ______________________________________________________________________ +## class `MultiOutputModel` + +Multi-output model. + + + +### method `__init__` + +```python +__init__() → None +``` + +Torch Model. + +______________________________________________________________________ + + + +### method `forward` + +```python +forward(x, y) +``` + +Forward pass. + +**Args:** + +- `x` (torch.Tensor): The input of the model. +- `y` (torch.Tensor): The input of the model. + +**Returns:** + +- `Tuple[torch.Tensor. torch.Tensor]`: Output of the network. 
+ +______________________________________________________________________ + + + ## class `SimpleNet` Fake torch model used to generate some onnx. - + ### method `__init__` @@ -24,7 +63,7 @@ __init__() → None ______________________________________________________________________ - + ### method `forward` @@ -44,13 +83,13 @@ Forward function. ______________________________________________________________________ - + ## class `FCSmall` Torch model for the tests. - + ### method `__init__` @@ -60,7 +99,7 @@ __init__(input_output, activation_function) ______________________________________________________________________ - + ### method `forward` @@ -79,13 +118,13 @@ the output of the NN ______________________________________________________________________ - + ## class `FC` Torch model for the tests. - + ### method `__init__` @@ -95,7 +134,7 @@ __init__(activation_function, input_output=3072) ______________________________________________________________________ - + ### method `forward` @@ -114,13 +153,13 @@ the output of the NN ______________________________________________________________________ - + ## class `CNN` Torch CNN model for the tests. - + ### method `__init__` @@ -130,7 +169,7 @@ __init__(input_output, activation_function) ______________________________________________________________________ - + ### method `forward` @@ -149,13 +188,13 @@ the output of the NN ______________________________________________________________________ - + ## class `CNNMaxPool` Torch CNN model for the tests with a max pool. - + ### method `__init__` @@ -165,7 +204,7 @@ __init__(input_output, activation_function) ______________________________________________________________________ - + ### method `forward` @@ -184,13 +223,13 @@ the output of the NN ______________________________________________________________________ - + ## class `CNNOther` Torch CNN model for the tests. - + ### method `__init__` @@ -200,7 +239,7 @@ __init__(input_output, activation_function) ______________________________________________________________________ - + ### method `forward` @@ -219,13 +258,13 @@ the output of the NN ______________________________________________________________________ - + ## class `CNNInvalid` Torch CNN model for the tests. - + ### method `__init__` @@ -235,7 +274,7 @@ __init__(activation_function, groups) ______________________________________________________________________ - + ### method `forward` @@ -254,13 +293,13 @@ the output of the NN ______________________________________________________________________ - + ## class `CNNGrouped` Torch CNN model with grouped convolution for compile torch tests. - + ### method `__init__` @@ -270,7 +309,7 @@ __init__(input_output, activation_function, groups) ______________________________________________________________________ - + ### method `forward` @@ -289,7 +328,7 @@ the output of the NN ______________________________________________________________________ - + ## class `NetWithLoops` @@ -297,7 +336,7 @@ Torch model, where we reuse some elements in a loop. Torch model, where we reuse some elements in a loop in the forward and don't expect the user to define these elements in a particular order. 
- + ### method `__init__` @@ -307,7 +346,7 @@ __init__(activation_function, input_output, n_fc_layers) ______________________________________________________________________ - + ### method `forward` @@ -326,13 +365,13 @@ the output of the NN ______________________________________________________________________ - + ## class `MultiInputNN` Torch model to test multiple inputs forward. - + ### method `__init__` @@ -342,7 +381,7 @@ __init__(input_output, activation_function) ______________________________________________________________________ - + ### method `forward` @@ -362,13 +401,13 @@ the output of the NN ______________________________________________________________________ - + ## class `MultiInputNNConfigurable` Torch model to test multiple inputs forward. - + ### method `__init__` @@ -378,7 +417,7 @@ __init__(use_conv, use_qat, input_output, n_bits) ______________________________________________________________________ - + ### method `forward` @@ -398,13 +437,13 @@ the output of the NN ______________________________________________________________________ - + ## class `MultiInputNNDifferentSize` Torch model to test multiple inputs with different shape in the forward pass. - + ### method `__init__` @@ -419,7 +458,7 @@ __init__( ______________________________________________________________________ - + ### method `forward` @@ -439,13 +478,13 @@ The output of the NN. ______________________________________________________________________ - + ## class `BranchingModule` Torch model with some branching and skip connections. - + ### method `__init__` @@ -455,7 +494,7 @@ __init__(input_output, activation_function) ______________________________________________________________________ - + ### method `forward` @@ -474,13 +513,13 @@ the output of the NN ______________________________________________________________________ - + ## class `BranchingGemmModule` Torch model with some branching and skip connections. - + ### method `__init__` @@ -490,7 +529,7 @@ __init__(input_output, activation_function) ______________________________________________________________________ - + ### method `forward` @@ -509,13 +548,13 @@ the output of the NN ______________________________________________________________________ - + ## class `UnivariateModule` Torch model that calls univariate and shape functions of torch. - + ### method `__init__` @@ -525,7 +564,7 @@ __init__(input_output, activation_function) ______________________________________________________________________ - + ### method `forward` @@ -544,13 +583,13 @@ the output of the NN ______________________________________________________________________ - + ## class `StepActivationModule` Torch model implements a step function that needs Greater, Cast and Where. - + ### method `__init__` @@ -560,7 +599,7 @@ __init__(input_output, activation_function) ______________________________________________________________________ - + ### method `forward` @@ -579,13 +618,13 @@ the output of the NN ______________________________________________________________________ - + ## class `NetWithConcatUnsqueeze` Torch model to test the concat and unsqueeze operators. 
- + ### method `__init__` @@ -595,7 +634,7 @@ __init__(activation_function, input_output, n_fc_layers) ______________________________________________________________________ - + ### method `forward` @@ -614,13 +653,13 @@ the output of the NN ______________________________________________________________________ - + ## class `MultiOpOnSingleInputConvNN` Network that applies two quantized operations on a single input. - + ### method `__init__` @@ -630,7 +669,7 @@ __init__(can_remove_input_tlu: bool) ______________________________________________________________________ - + ### method `forward` @@ -649,7 +688,7 @@ the output of the NN ______________________________________________________________________ - + ## class `FCSeq` @@ -657,7 +696,7 @@ Torch model that should generate MatMul->Add ONNX patterns. This network generates additions with a constant scalar - + ### method `__init__` @@ -667,7 +706,7 @@ __init__(input_output, act) ______________________________________________________________________ - + ### method `forward` @@ -686,7 +725,7 @@ the output of the NN ______________________________________________________________________ - + ## class `FCSeqAddBiasVec` @@ -694,7 +733,7 @@ Torch model that should generate MatMul->Add ONNX patterns. This network tests the addition with a constant vector - + ### method `__init__` @@ -704,7 +743,7 @@ __init__(input_output, act) ______________________________________________________________________ - + ### method `forward` @@ -723,13 +762,13 @@ the output of the NN ______________________________________________________________________ - + ## class `TinyCNN` A very small CNN. - + ### method `__init__` @@ -746,7 +785,7 @@ Create the tiny CNN with two conv layers. ______________________________________________________________________ - + ### method `forward` @@ -765,7 +804,7 @@ the output of the NN ______________________________________________________________________ - + ## class `TinyQATCNN` @@ -773,7 +812,7 @@ A very small QAT CNN to classify the sklearn digits data-set. This class also allows pruning to a maximum of 10 active neurons, which should help keep the accumulator bit-width low. - + ### method `__init__` @@ -801,7 +840,7 @@ Construct the CNN with a configurable number of classes. ______________________________________________________________________ - + ### method `forward` @@ -820,7 +859,7 @@ the output of the NN ______________________________________________________________________ - + ### method `toggle_pruning` @@ -836,13 +875,13 @@ Enable or remove pruning. ______________________________________________________________________ - + ## class `SimpleQAT` Torch model implements a step function that needs Greater, Cast and Where. - + ### method `__init__` @@ -852,7 +891,7 @@ __init__(input_output, activation_function, n_bits=2, disable_bit_check=False) ______________________________________________________________________ - + ### method `forward` @@ -871,13 +910,13 @@ the output of the NN ______________________________________________________________________ - + ## class `QATTestModule` Torch model that implements a simple non-uniform quantizer. - + ### method `__init__` @@ -887,7 +926,7 @@ __init__(activation_function) ______________________________________________________________________ - + ### method `forward` @@ -906,13 +945,13 @@ the output of the NN ______________________________________________________________________ - + ## class `SingleMixNet` Torch model that with a single conv layer that produces the output, e.g., a blur filter. 
- + ### method `__init__` @@ -922,7 +961,7 @@ __init__(use_conv, use_qat, inp_size, n_bits) ______________________________________________________________________ - + ### method `forward` @@ -941,7 +980,7 @@ the output of the NN ______________________________________________________________________ - + ## class `DoubleQuantQATMixNet` @@ -949,7 +988,7 @@ Torch model that with two different quantizers on the input. Used to test that it keeps the input TLU. - + ### method `__init__` @@ -959,7 +998,7 @@ __init__(use_conv, use_qat, inp_size, n_bits) ______________________________________________________________________ - + ### method `forward` @@ -978,61 +1017,18 @@ the output of the NN ______________________________________________________________________ - + ## class `TorchSum` Torch model to test the ReduceSum ONNX operator in a leveled circuit. - - -### method `__init__` - -```python -__init__(dim=(0,), keepdim=True) -``` - -Initialize the module. - -**Args:** - -- `dim` (Tuple\[int\]): The axis along which the sum should be executed -- `keepdim` (bool): If the output should keep the same dimension as the input or not - -______________________________________________________________________ - - - -### method `forward` - -```python -forward(x) -``` - -Forward pass. - -**Args:** - -- `x` (torch.tensor): The input of the model - -**Returns:** - -- `torch_sum` (torch.tensor): The sum of the input's tensor elements along the given axis - -______________________________________________________________________ - - - -## class `TorchSumMod` - -Torch model to test the ReduceSum ONNX operator in a circuit containing a PBS. - - + ### method `__init__` ```python -__init__(dim=(0,), keepdim=True) +__init__(dim=(0,), keepdim=True, with_pbs=False) ``` Initialize the module. @@ -1041,10 +1037,11 @@ Initialize the module. - `dim` (Tuple\[int\]): The axis along which the sum should be executed - `keepdim` (bool): If the output should keep the same dimension as the input or not +- `with_pbs` (bool): If the forward function should be forced to consider at least one PBS ______________________________________________________________________ - + ### method `forward` @@ -1064,13 +1061,13 @@ Forward pass. ______________________________________________________________________ - + ## class `NetWithConstantsFoldedBeforeOps` Torch QAT model that does not quantize the inputs. - + ### method `__init__` @@ -1085,7 +1082,7 @@ __init__( ______________________________________________________________________ - + ### method `forward` @@ -1105,13 +1102,13 @@ Forward pass. ______________________________________________________________________ - + ## class `ShapeOperationsNet` Torch QAT model that reshapes the input. - + ### method `__init__` @@ -1121,7 +1118,7 @@ __init__(is_qat) ______________________________________________________________________ - + ### method `forward` @@ -1141,13 +1138,13 @@ Forward pass. ______________________________________________________________________ - + ## class `PaddingNet` Torch QAT model that applies various padding patterns. - + ### method `__init__` @@ -1157,7 +1154,7 @@ __init__() ______________________________________________________________________ - + ### method `forward` @@ -1177,13 +1174,13 @@ Forward pass. ______________________________________________________________________ - + ## class `QuantCustomModel` A small quantized network with Brevitas, trained on make_classification. - + ### method `__init__` @@ -1213,7 +1210,7 @@ Quantized Torch Model with Brevitas. 
______________________________________________________________________ - + ### method `forward` @@ -1233,13 +1230,13 @@ Forward pass. ______________________________________________________________________ - + ## class `TorchCustomModel` A small network with Brevitas, trained on make_classification. - + ### method `__init__` @@ -1257,7 +1254,7 @@ Torch Model. ______________________________________________________________________ - + ### method `forward` @@ -1277,13 +1274,13 @@ Forward pass. ______________________________________________________________________ - + ## class `ConcatFancyIndexing` Concat with fancy indexing. - + ### method `__init__` @@ -1309,7 +1306,7 @@ Torch Model. ______________________________________________________________________ - + ### method `forward` @@ -1329,13 +1326,13 @@ Forward pass. ______________________________________________________________________ - + ## class `PartialQATModel` A model with a QAT Module. - + ### method `__init__` @@ -1345,7 +1342,7 @@ __init__(input_shape: int, output_shape: int, n_bits: int) ______________________________________________________________________ - + ### method `forward` @@ -1362,3 +1359,172 @@ Forward pass. **Returns:** - `torch.tensor`: Output of the network. + +______________________________________________________________________ + + + +## class `EncryptedMatrixMultiplicationModel` + +PyTorch module for performing matrix multiplication between two encrypted values. + + + +### method `__init__` + +```python +__init__(input_output, activation_function) +``` + +______________________________________________________________________ + + + +### method `forward` + +```python +forward(input1) +``` + +Forward function for matrix multiplication. + +**Args:** + +- `input1` (torch.Tensor): The first input tensor. + +**Returns:** + +- `torch.Tensor`: The result of the matrix multiplication. + +______________________________________________________________________ + + + +## class `ManualLogisticRegressionTraining` + +PyTorch module for performing SGD training. + + + +### method `__init__` + +```python +__init__(learning_rate=0.1) +``` + +______________________________________________________________________ + + + +### method `forward` + +```python +forward(x, y, weights, bias) +``` + +Forward function for matrix multiplication. + +**Args:** + +- `x` (torch.Tensor): The training data tensor. +- `y` (torch.Tensor): The target tensor. +- `weights` (torch.Tensor): The weights to be learned. +- `bias` (torch.Tensor): The bias to be learned. + +**Returns:** + +- `torch.Tensor`: The updated weights after performing a training step. + +______________________________________________________________________ + + + +### method `predict` + +```python +predict(x, weights, bias) +``` + +Predicts based on weights and bias as inputs. + +**Args:** + +- `x` (torch.Tensor): Input data tensor. +- `weights` (torch.Tensor): Weights tensor. +- `bias` (torch.Tensor): Bias tensor. + +**Returns:** + +- `torch.Tensor`: The predicted outputs for the given inputs. + +______________________________________________________________________ + + + +## class `AddNet` + +Torch model that performs a simple addition between two inputs. + + + +### method `__init__` + +```python +__init__(use_conv, use_qat, input_output, n_bits) +``` + +______________________________________________________________________ + + + +### method `forward` + +```python +forward(x, y) +``` + +Forward pass. + +**Args:** + +- `x`: First input tensor. +- `y`: Second input tensor. 
+ +**Returns:** +Result of adding x and y. + +______________________________________________________________________ + + + +## class `ExpandModel` + +Minimalist network that expands the input tensor to a larger size. + + + +### method `__init__` + +```python +__init__(is_qat) +``` + +______________________________________________________________________ + + + +### method `forward` + +```python +forward(x) +``` + +Expand the input tensor to the target size. + +**Args:** + +- `x` (torch.Tensor): Input tensor. + +**Returns:** + +- `torch.Tensor`: Expanded tensor. diff --git a/docs/developer-guide/api/concrete.ml.pytest.utils.md b/docs/developer-guide/api/concrete.ml.pytest.utils.md index e0222ef76..ed78f3104 100644 --- a/docs/developer-guide/api/concrete.ml.pytest.utils.md +++ b/docs/developer-guide/api/concrete.ml.pytest.utils.md @@ -286,3 +286,28 @@ This function serializes all objects using the `dump`, `dumps`, `load` and `load - `expected_type` (Type): The object's expected type. - `equal_method` (Optional\[Callable\]): The function to use to compare the two loaded objects. Default to `values_are_equal`. - `check_str` (bool): If the JSON strings should also be checked. Default to True. + +______________________________________________________________________ + + + +## function `get_random_samples` + +```python +get_random_samples(x: ndarray, n_sample: int) → ndarray +``` + +Select `n_sample` random elements from a 2D NumPy array. + +**Args:** + +- `x` (numpy.ndarray): The 2D NumPy array from which random rows will be selected. +- `n_sample` (int): The number of rows to randomly select. + +**Returns:** + +- `numpy.ndarray`: A new 2D NumPy array containing the randomly selected rows. + +**Raises:** + +- `AssertionError`: If `n_sample` is not within the range (0, x.shape\[0\]) or if `x` is not a 2D array. diff --git a/docs/developer-guide/api/concrete.ml.quantization.base_quantized_op.md b/docs/developer-guide/api/concrete.ml.quantization.base_quantized_op.md index 23ab81597..b82818135 100644 --- a/docs/developer-guide/api/concrete.ml.quantization.base_quantized_op.md +++ b/docs/developer-guide/api/concrete.ml.quantization.base_quantized_op.md @@ -613,7 +613,11 @@ ______________________________________________________________________ ### method `cnp_round` ```python -cnp_round(x: Union[ndarray, Tracer], calibrate_rounding: bool) → ndarray +cnp_round( + x: Union[ndarray, Tracer], + calibrate_rounding: bool, + rounding_operation_id: Optional[str] = 'single_rounding_op' +) → ndarray ``` Round the input array to the specified number of bits. @@ -621,7 +625,8 @@ Round the input array to the specified number of bits. **Args:** - `x` (Union\[numpy.ndarray, fhe.tracing.Tracer\]): The input array to be rounded. -- `calibrate_rounding` (bool): Whether to calibrate the rounding (compute the lsbs_to_remove) +- `calibrate_rounding` (bool): Whether to calibrate the rounding (compute the lsbs_to_remove). +- `rounding_operation_id` (Optional\[str\]): The identifier for a specific rounding operation in a quantized operation. Used to create and access the lsbs_to_remove value in the dictionary. Defaults to "single_rounding_op" if not provided. 
**Returns:** diff --git a/docs/developer-guide/api/concrete.ml.quantization.post_training.md b/docs/developer-guide/api/concrete.ml.quantization.post_training.md index 8dbee4aae..12d91cb17 100644 --- a/docs/developer-guide/api/concrete.ml.quantization.post_training.md +++ b/docs/developer-guide/api/concrete.ml.quantization.post_training.md @@ -108,7 +108,7 @@ Get the number of bits to use for the quantization of any constants (usually wei ______________________________________________________________________ - + ### method `quantize_module` @@ -130,7 +130,7 @@ Following https://arxiv.org/abs/1712.05877 guidelines. ______________________________________________________________________ - + ## class `PostTrainingAffineQuantization` @@ -207,7 +207,7 @@ Get the number of bits to use for the quantization of any constants (usually wei ______________________________________________________________________ - + ### method `quantize_module` @@ -229,7 +229,7 @@ Following https://arxiv.org/abs/1712.05877 guidelines. ______________________________________________________________________ - + ## class `PostTrainingQATImporter` @@ -291,7 +291,7 @@ Get the number of bits to use for the quantization of any constants (usually wei ______________________________________________________________________ - + ### method `quantize_module` diff --git a/docs/developer-guide/api/concrete.ml.quantization.quantized_module.md b/docs/developer-guide/api/concrete.ml.quantization.quantized_module.md index 74920a642..881e636db 100644 --- a/docs/developer-guide/api/concrete.ml.quantization.quantized_module.md +++ b/docs/developer-guide/api/concrete.ml.quantization.quantized_module.md @@ -26,10 +26,10 @@ Inference for a quantized model. ```python __init__( - ordered_module_input_names: Iterable[str] = None, - ordered_module_output_names: Iterable[str] = None, - quant_layers_dict: Dict[str, Tuple[Tuple[str, ], QuantizedOp]] = None, - onnx_model: ModelProto = None + ordered_module_input_names: Optional[Iterable[str]] = None, + ordered_module_output_names: Optional[Iterable[str]] = None, + quant_layers_dict: Optional[Dict[str, Tuple[Tuple[str, ], QuantizedOp]]] = None, + onnx_model: Optional[ModelProto] = None ) ``` @@ -67,12 +67,12 @@ Get the post-processing parameters. ______________________________________________________________________ - + ### method `bitwidth_and_range_report` ```python -bitwidth_and_range_report() → Union[Dict[str, Dict[str, Union[Tuple[int, ], int]]], NoneType] +bitwidth_and_range_report() → Optional[Dict[str, Dict[str, Union[Tuple[int, ], int]]]] ``` Report the ranges and bit-widths for layers that mix encrypted integer values. @@ -83,7 +83,7 @@ Report the ranges and bit-widths for layers that mix encrypted integer values. ______________________________________________________________________ - + ### method `check_model_is_compiled` @@ -99,7 +99,7 @@ Check if the quantized module is compiled. ______________________________________________________________________ - + ### method `compile` @@ -139,27 +139,27 @@ Compile the module's forward function. ______________________________________________________________________ - + ### method `dequantize_output` ```python -dequantize_output(q_y_preds: ndarray) → ndarray +dequantize_output(*q_y_preds: ndarray) → Union[ndarray, Tuple[ndarray, ]] ``` Take the last layer q_out and use its de-quant function. **Args:** -- `q_y_preds` (numpy.ndarray): Quantized output values of the last layer. +- `q_y_preds` (numpy.ndarray): Quantized outputs values. 
**Returns:** -- `numpy.ndarray`: De-quantized output values of the last layer. +- `Union[numpy.ndarray, Tuple[numpy.ndarray, ...]]`: De-quantized output values of the last layer. ______________________________________________________________________ - + ### method `dump` @@ -175,7 +175,7 @@ Dump itself to a file. ______________________________________________________________________ - + ### method `dump_dict` @@ -191,7 +191,7 @@ Dump itself to a dict. ______________________________________________________________________ - + ### method `dumps` @@ -207,7 +207,7 @@ Dump itself to a string. ______________________________________________________________________ - + ### method `forward` @@ -216,7 +216,7 @@ forward( *x: ndarray, fhe: Union[FheMode, str] = , debug: bool = False -) → Union[ndarray, Tuple[ndarray, Union[Dict[Any, Any], NoneType]]] +) → Union[ndarray, Tuple[ndarray, ], Tuple[Union[Tuple[ndarray, ], ndarray], Dict[str, Dict[Union[int, str], Union[ndarray, QuantizedArray, NoneType]]]]] ``` Forward pass with numpy function only on floating points. @@ -235,7 +235,7 @@ This method executes the forward pass in the clear, with simulation or in FHE. I ______________________________________________________________________ - + ### method `load_dict` @@ -255,7 +255,7 @@ Load itself from a string. ______________________________________________________________________ - + ### method `post_processing` @@ -277,7 +277,7 @@ For quantized modules, there is no post-processing step but the method is kept t ______________________________________________________________________ - + ### method `quantize_input` @@ -297,7 +297,7 @@ Take the inputs in fp32 and quantize it using the learned quantization parameter ______________________________________________________________________ - + ### method `quantized_forward` @@ -305,7 +305,7 @@ ______________________________________________________________________ quantized_forward( *q_x: ndarray, fhe: Union[FheMode, str] = -) → ndarray +) → Union[Tuple[ndarray, ], ndarray] ``` Forward function for the FHE circuit. @@ -317,11 +317,11 @@ Forward function for the FHE circuit. **Returns:** -- `(numpy.ndarray)`: Predictions of the quantized model, with integer values. +- `(Union[numpy.ndarray, Tuple[numpy.ndarray, ...]])`: Predictions of the quantized model, with integer values. ______________________________________________________________________ - + ### method `set_inputs_quantization_parameters` diff --git a/docs/developer-guide/api/concrete.ml.quantization.quantized_module_passes.md b/docs/developer-guide/api/concrete.ml.quantization.quantized_module_passes.md index 205e213a6..56b77818a 100644 --- a/docs/developer-guide/api/concrete.ml.quantization.quantized_module_passes.md +++ b/docs/developer-guide/api/concrete.ml.quantization.quantized_module_passes.md @@ -41,7 +41,7 @@ ______________________________________________________________________ ### method `compute_op_predecessors` ```python -compute_op_predecessors() → DefaultDict[Union[QuantizedOp, NoneType], List[Tuple[Union[QuantizedOp, NoneType], str]]] +compute_op_predecessors() → DefaultDict[Optional[QuantizedOp], List[Tuple[Optional[QuantizedOp], str]]] ``` Compute the predecessors for each QuantizedOp in a QuantizedModule. 
@@ -61,7 +61,7 @@ ______________________________________________________________________ ```python detect_patterns( predecessors: DefaultDict[Optional[QuantizedOp], List[Tuple[Optional[QuantizedOp], str]]] -) → Dict[QuantizedMixingOp, Tuple[List[Union[QuantizedOp, NoneType]], Union[QuantizedOp, NoneType]]] +) → Dict[QuantizedMixingOp, Tuple[List[Optional[QuantizedOp]], Optional[QuantizedOp]]] ``` Detect the patterns that can be optimized with roundPBS in the QuantizedModule. @@ -107,7 +107,7 @@ ______________________________________________________________________ ### method `process` ```python -process() → Dict[QuantizedMixingOp, Tuple[List[Union[QuantizedOp, NoneType]], Union[QuantizedOp, NoneType]]] +process() → Dict[QuantizedMixingOp, Tuple[List[Optional[QuantizedOp]], Optional[QuantizedOp]]] ``` Analyze an ONNX graph and detect Gemm/Conv patterns that can use RoundPBS. @@ -129,7 +129,7 @@ ______________________________________________________________________ ```python process_patterns( valid_paths: Dict[QuantizedMixingOp, Tuple[List[Optional[QuantizedOp]], Optional[QuantizedOp]]] -) → Dict[QuantizedMixingOp, Tuple[List[Union[QuantizedOp, NoneType]], Union[QuantizedOp, NoneType]]] +) → Dict[QuantizedMixingOp, Tuple[List[Optional[QuantizedOp]], Optional[QuantizedOp]]] ``` Configure the rounding bits of roundPBS for the optimizable operations. diff --git a/docs/developer-guide/api/concrete.ml.quantization.quantized_ops.md b/docs/developer-guide/api/concrete.ml.quantization.quantized_ops.md index 5f9031e49..4a40e1798 100644 --- a/docs/developer-guide/api/concrete.ml.quantization.quantized_ops.md +++ b/docs/developer-guide/api/concrete.ml.quantization.quantized_ops.md @@ -8,7 +8,7 @@ Quantized versions of the ONNX operators for post training quantization. ______________________________________________________________________ - + ## class `QuantizedSigmoid` @@ -26,7 +26,7 @@ Get the names of encrypted integer tensors that are used by this op. ______________________________________________________________________ - + ## class `QuantizedHardSigmoid` @@ -44,7 +44,7 @@ Get the names of encrypted integer tensors that are used by this op. ______________________________________________________________________ - + ## class `QuantizedRelu` @@ -62,7 +62,7 @@ Get the names of encrypted integer tensors that are used by this op. ______________________________________________________________________ - + ## class `QuantizedPRelu` @@ -80,7 +80,7 @@ Get the names of encrypted integer tensors that are used by this op. ______________________________________________________________________ - + ## class `QuantizedLeakyRelu` @@ -98,7 +98,7 @@ Get the names of encrypted integer tensors that are used by this op. ______________________________________________________________________ - + ## class `QuantizedHardSwish` @@ -116,7 +116,7 @@ Get the names of encrypted integer tensors that are used by this op. ______________________________________________________________________ - + ## class `QuantizedElu` @@ -134,7 +134,7 @@ Get the names of encrypted integer tensors that are used by this op. ______________________________________________________________________ - + ## class `QuantizedSelu` @@ -152,7 +152,7 @@ Get the names of encrypted integer tensors that are used by this op. ______________________________________________________________________ - + ## class `QuantizedCelu` @@ -170,7 +170,7 @@ Get the names of encrypted integer tensors that are used by this op. 
______________________________________________________________________ - + ## class `QuantizedClip` @@ -188,7 +188,7 @@ Get the names of encrypted integer tensors that are used by this op. ______________________________________________________________________ - + ## class `QuantizedRound` @@ -206,7 +206,7 @@ Get the names of encrypted integer tensors that are used by this op. ______________________________________________________________________ - + ## class `QuantizedPow` @@ -226,13 +226,13 @@ Get the names of encrypted integer tensors that are used by this op. ______________________________________________________________________ - + ## class `QuantizedGemm` Quantized Gemm op. - + ### method `__init__` @@ -240,9 +240,9 @@ Quantized Gemm op. __init__( n_bits_output: int, op_instance_name: str, - int_input_names: Set[str] = None, + int_input_names: Optional[Set[str]] = None, constant_inputs: Optional[Dict[str, Any], Dict[int, Any]] = None, - input_quant_opts: QuantizationOptions = None, + input_quant_opts: Optional[QuantizationOptions] = None, **attrs ) → None ``` @@ -259,7 +259,7 @@ Get the names of encrypted integer tensors that are used by this op. ______________________________________________________________________ - + ### method `q_impl` @@ -273,13 +273,13 @@ q_impl( ______________________________________________________________________ - + ## class `QuantizedMatMul` Quantized MatMul op. - + ### method `__init__` @@ -287,9 +287,9 @@ Quantized MatMul op. __init__( n_bits_output: int, op_instance_name: str, - int_input_names: Set[str] = None, + int_input_names: Optional[Set[str]] = None, constant_inputs: Optional[Dict[str, Any], Dict[int, Any]] = None, - input_quant_opts: QuantizationOptions = None, + input_quant_opts: Optional[QuantizationOptions] = None, **attrs ) → None ``` @@ -306,7 +306,7 @@ Get the names of encrypted integer tensors that are used by this op. ______________________________________________________________________ - + ### method `q_impl` @@ -320,7 +320,7 @@ q_impl( ______________________________________________________________________ - + ## class `QuantizedAdd` @@ -340,7 +340,7 @@ Get the names of encrypted integer tensors that are used by this op. ______________________________________________________________________ - + ### method `can_fuse` @@ -358,7 +358,7 @@ Add operation can be computed in float and fused if it operates over inputs prod ______________________________________________________________________ - + ### method `q_impl` @@ -371,7 +371,7 @@ q_impl( ______________________________________________________________________ - + ## class `QuantizedTanh` @@ -389,7 +389,7 @@ Get the names of encrypted integer tensors that are used by this op. ______________________________________________________________________ - + ## class `QuantizedSoftplus` @@ -407,7 +407,7 @@ Get the names of encrypted integer tensors that are used by this op. ______________________________________________________________________ - + ## class `QuantizedExp` @@ -425,7 +425,7 @@ Get the names of encrypted integer tensors that are used by this op. ______________________________________________________________________ - + ## class `QuantizedLog` @@ -443,7 +443,7 @@ Get the names of encrypted integer tensors that are used by this op. ______________________________________________________________________ - + ## class `QuantizedAbs` @@ -461,7 +461,7 @@ Get the names of encrypted integer tensors that are used by this op. 
______________________________________________________________________ - + ## class `QuantizedIdentity` @@ -479,7 +479,7 @@ Get the names of encrypted integer tensors that are used by this op. ______________________________________________________________________ - + ### method `q_impl` @@ -492,7 +492,7 @@ q_impl( ______________________________________________________________________ - + ## class `QuantizedReshape` @@ -510,7 +510,7 @@ Get the names of encrypted integer tensors that are used by this op. ______________________________________________________________________ - + ### method `can_fuse` @@ -528,7 +528,7 @@ Max Pooling operation can not be fused since it must be performed over integer t ______________________________________________________________________ - + ### method `q_impl` @@ -552,13 +552,13 @@ Reshape the input integer encrypted tensor. ______________________________________________________________________ - + ## class `QuantizedConv` Quantized Conv op. - + ### method `__init__` @@ -601,7 +601,7 @@ Get the names of encrypted integer tensors that are used by this op. ______________________________________________________________________ - + ### method `q_impl` @@ -632,13 +632,13 @@ Allows an optional quantized bias. ______________________________________________________________________ - + ## class `QuantizedAvgPool` Quantized Average Pooling op. - + ### method `__init__` @@ -665,7 +665,7 @@ Get the names of encrypted integer tensors that are used by this op. ______________________________________________________________________ - + ### method `q_impl` @@ -679,13 +679,13 @@ q_impl( ______________________________________________________________________ - + ## class `QuantizedMaxPool` Quantized Max Pooling op. - + ### method `__init__` @@ -712,7 +712,7 @@ Get the names of encrypted integer tensors that are used by this op. ______________________________________________________________________ - + ### method `can_fuse` @@ -730,7 +730,7 @@ Max Pooling operation can not be fused since it must be performed over integer t ______________________________________________________________________ - + ### method `q_impl` @@ -743,13 +743,13 @@ q_impl( ______________________________________________________________________ - + ## class `QuantizedPad` Quantized Padding op. - + ### method `__init__` @@ -776,7 +776,7 @@ Get the names of encrypted integer tensors that are used by this op. ______________________________________________________________________ - + ### method `can_fuse` @@ -794,7 +794,7 @@ Pad operation cannot be fused since it must be performed over integer tensors. ______________________________________________________________________ - + ### method `q_impl` @@ -808,7 +808,7 @@ q_impl( ______________________________________________________________________ - + ## class `QuantizedWhere` @@ -816,7 +816,7 @@ Where operator on quantized arrays. Supports only constants for the results produced on the True/False branches. - + ### method `__init__` @@ -843,7 +843,7 @@ Get the names of encrypted integer tensors that are used by this op. ______________________________________________________________________ - + ## class `QuantizedCast` @@ -863,7 +863,7 @@ Get the names of encrypted integer tensors that are used by this op. ______________________________________________________________________ - + ## class `QuantizedGreater` @@ -871,7 +871,7 @@ Comparison operator >. Only supports comparison with a constant. 
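Operators in this comparison family appear when the imported ONNX graph compares an encrypted tensor against a constant. As a hedged, torch-level illustration (this toy module is not part of the library), an expression such as `(x > 0.5).float()` exports to ONNX `Greater` and `Cast` nodes, which are then represented by ops like `QuantizedGreater` and `QuantizedCast`:

```python
import torch

class ThresholdStep(torch.nn.Module):
    """Toy module whose ONNX export contains Greater (constant threshold) + Cast."""

    def forward(self, x):
        # Compare against a constant threshold, then cast the boolean result to float
        return (x > 0.5).float()
```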
- + ### method `__init__` @@ -898,7 +898,7 @@ Get the names of encrypted integer tensors that are used by this op. ______________________________________________________________________ - + ## class `QuantizedGreaterOrEqual` @@ -906,7 +906,7 @@ Comparison operator >=. Only supports comparison with a constant. - + ### method `__init__` @@ -933,7 +933,7 @@ Get the names of encrypted integer tensors that are used by this op. ______________________________________________________________________ - + ## class `QuantizedLess` @@ -941,7 +941,7 @@ Comparison operator \<. Only supports comparison with a constant. - + ### method `__init__` @@ -968,7 +968,7 @@ Get the names of encrypted integer tensors that are used by this op. ______________________________________________________________________ - + ## class `QuantizedLessOrEqual` @@ -976,7 +976,7 @@ Comparison operator \<=. Only supports comparison with a constant. - + ### method `__init__` @@ -1003,7 +1003,7 @@ Get the names of encrypted integer tensors that are used by this op. ______________________________________________________________________ - + ## class `QuantizedOr` @@ -1023,7 +1023,7 @@ Get the names of encrypted integer tensors that are used by this op. ______________________________________________________________________ - + ## class `QuantizedDiv` @@ -1043,7 +1043,7 @@ Get the names of encrypted integer tensors that are used by this op. ______________________________________________________________________ - + ## class `QuantizedMul` @@ -1063,7 +1063,7 @@ Get the names of encrypted integer tensors that are used by this op. ______________________________________________________________________ - + ## class `QuantizedSub` @@ -1083,7 +1083,7 @@ Get the names of encrypted integer tensors that are used by this op. ______________________________________________________________________ - + ### method `can_fuse` @@ -1101,7 +1101,7 @@ Add operation can be computed in float and fused if it operates over inputs prod ______________________________________________________________________ - + ### method `q_impl` @@ -1114,7 +1114,7 @@ q_impl( ______________________________________________________________________ - + ## class `QuantizedBatchNormalization` @@ -1132,7 +1132,7 @@ Get the names of encrypted integer tensors that are used by this op. ______________________________________________________________________ - + ## class `QuantizedFlatten` @@ -1150,7 +1150,7 @@ Get the names of encrypted integer tensors that are used by this op. ______________________________________________________________________ - + ### method `can_fuse` @@ -1168,7 +1168,7 @@ Flatten operation cannot be fused since it must be performed over integer tensor ______________________________________________________________________ - + ### method `q_impl` @@ -1192,13 +1192,13 @@ Flatten the input integer encrypted tensor. ______________________________________________________________________ - + ## class `QuantizedReduceSum` ReduceSum with encrypted input. - + ### method `__init__` @@ -1239,7 +1239,7 @@ Get the names of encrypted integer tensors that are used by this op. ______________________________________________________________________ - + ### method `calibrate` @@ -1259,7 +1259,7 @@ Create corresponding QuantizedArray for the output of the activation function. ______________________________________________________________________ - + ### method `q_impl` @@ -1283,7 +1283,7 @@ Sum the encrypted tensor's values along the given axes. 
______________________________________________________________________ - + ## class `QuantizedErf` @@ -1301,7 +1301,7 @@ Get the names of encrypted integer tensors that are used by this op. ______________________________________________________________________ - + ## class `QuantizedNot` @@ -1319,13 +1319,13 @@ Get the names of encrypted integer tensors that are used by this op. ______________________________________________________________________ - + ## class `QuantizedBrevitasQuant` Brevitas uniform quantization with encrypted input. - + ### method `__init__` @@ -1368,7 +1368,7 @@ Get the names of encrypted integer tensors that are used by this op. ______________________________________________________________________ - + ### method `calibrate` @@ -1388,7 +1388,7 @@ Create corresponding QuantizedArray for the output of Quantization function. ______________________________________________________________________ - + ### method `q_impl` @@ -1412,7 +1412,7 @@ Quantize values. ______________________________________________________________________ - + ## class `QuantizedTranspose` @@ -1432,7 +1432,7 @@ Get the names of encrypted integer tensors that are used by this op. ______________________________________________________________________ - + ### method `can_fuse` @@ -1450,7 +1450,7 @@ Transpose can not be fused since it must be performed over integer tensors as it ______________________________________________________________________ - + ### method `q_impl` @@ -1474,7 +1474,7 @@ Transpose the input integer encrypted tensor. ______________________________________________________________________ - + ## class `QuantizedFloor` @@ -1492,7 +1492,7 @@ Get the names of encrypted integer tensors that are used by this op. ______________________________________________________________________ - + ## class `QuantizedMax` @@ -1510,7 +1510,7 @@ Get the names of encrypted integer tensors that are used by this op. ______________________________________________________________________ - + ## class `QuantizedMin` @@ -1528,7 +1528,7 @@ Get the names of encrypted integer tensors that are used by this op. ______________________________________________________________________ - + ## class `QuantizedNeg` @@ -1546,7 +1546,7 @@ Get the names of encrypted integer tensors that are used by this op. ______________________________________________________________________ - + ## class `QuantizedSign` @@ -1564,7 +1564,7 @@ Get the names of encrypted integer tensors that are used by this op. ______________________________________________________________________ - + ## class `QuantizedUnsqueeze` @@ -1582,7 +1582,7 @@ Get the names of encrypted integer tensors that are used by this op. ______________________________________________________________________ - + ### method `can_fuse` @@ -1600,7 +1600,7 @@ Unsqueeze can not be fused since it must be performed over integer tensors as it ______________________________________________________________________ - + ### method `q_impl` @@ -1624,7 +1624,7 @@ Unsqueeze the input tensors on a given axis. ______________________________________________________________________ - + ## class `QuantizedConcat` @@ -1642,7 +1642,7 @@ Get the names of encrypted integer tensors that are used by this op. 
______________________________________________________________________ - + ### method `can_fuse` @@ -1660,7 +1660,7 @@ Concatenation can not be fused since it must be performed over integer tensors a ______________________________________________________________________ - + ### method `q_impl` @@ -1684,7 +1684,7 @@ Concatenate the input tensors on a given axis. ______________________________________________________________________ - + ## class `QuantizedSqueeze` @@ -1702,7 +1702,7 @@ Get the names of encrypted integer tensors that are used by this op. ______________________________________________________________________ - + ### method `can_fuse` @@ -1720,7 +1720,7 @@ Squeeze can not be fused since it must be performed over integer tensors as it r ______________________________________________________________________ - + ### method `q_impl` @@ -1744,7 +1744,7 @@ Squeeze the input tensors on a given axis. ______________________________________________________________________ - + ## class `ONNXShape` @@ -1762,7 +1762,7 @@ Get the names of encrypted integer tensors that are used by this op. ______________________________________________________________________ - + ### method `can_fuse` @@ -1780,7 +1780,7 @@ This operation returns the shape of the tensor and thus can not be fused into a ______________________________________________________________________ - + ### method `q_impl` @@ -1793,7 +1793,7 @@ q_impl( ______________________________________________________________________ - + ## class `ONNXConstantOfShape` @@ -1811,7 +1811,7 @@ Get the names of encrypted integer tensors that are used by this op. ______________________________________________________________________ - + ### method `can_fuse` @@ -1829,7 +1829,7 @@ This operation returns a new encrypted tensor and thus can not be fused. ______________________________________________________________________ - + ## class `ONNXGather` @@ -1849,7 +1849,7 @@ Get the names of encrypted integer tensors that are used by this op. ______________________________________________________________________ - + ### method `can_fuse` @@ -1867,7 +1867,7 @@ This operation returns values from a tensor and thus can not be fused into a uni ______________________________________________________________________ - + ### method `q_impl` @@ -1880,7 +1880,7 @@ q_impl( ______________________________________________________________________ - + ## class `ONNXSlice` @@ -1898,7 +1898,7 @@ Get the names of encrypted integer tensors that are used by this op. ______________________________________________________________________ - + ### method `can_fuse` @@ -1916,7 +1916,7 @@ This operation returns values from a tensor and thus can not be fused into a uni ______________________________________________________________________ - + ### method `q_impl` @@ -1926,3 +1926,98 @@ q_impl( **attrs ) → Union[ndarray, QuantizedArray, NoneType] ``` + +______________________________________________________________________ + + + +## class `QuantizedExpand` + +Expand operator for quantized tensors. + +______________________________________________________________________ + +#### property int_input_names + +Get the names of encrypted integer tensors that are used by this op. + +**Returns:** + +- `Set[str]`: the names of the tensors + +______________________________________________________________________ + + + +### method `can_fuse` + +```python +can_fuse() → bool +``` + +Determine if this op can be fused. 
+ +Expand can not be fused since it must be performed over integer tensors as it reshapes an encrypted tensor. + +**Returns:** + +- `bool`: False, this operation can not be fused as it operates on encrypted tensors + +______________________________________________________________________ + + + +### method `q_impl` + +```python +q_impl( + *q_inputs: Optional[ndarray, QuantizedArray], + **attrs +) → Union[ndarray, QuantizedArray, NoneType] +``` + +Expand the input tensor to a specified shape. + +**Args:** + +- `q_inputs`: an encrypted integer tensor at index 0, shape at index 1 +- `attrs`: additional optional expand options + +**Returns:** + +- `result` (QuantizedArray): expanded encrypted integer tensor + +______________________________________________________________________ + + + +## class `QuantizedEqual` + +Comparison operator ==. + +Only supports comparison with a constant. + + + +### method `__init__` + +```python +__init__( + n_bits_output: int, + op_instance_name: str, + int_input_names: Set[str] = None, + constant_inputs: Optional[Dict[str, Any], Dict[int, Any]] = None, + input_quant_opts: QuantizationOptions = None, + **attrs +) → None +``` + +______________________________________________________________________ + +#### property int_input_names + +Get the names of encrypted integer tensors that are used by this op. + +**Returns:** + +- `Set[str]`: the names of the tensors diff --git a/docs/developer-guide/api/concrete.ml.quantization.quantizers.md index ab728b96c..367af67b7 100644 --- a/docs/developer-guide/api/concrete.ml.quantization.quantizers.md +++ b/docs/developer-guide/api/concrete.ml.quantization.quantizers.md @@ -13,7 +13,7 @@ Quantization utilities for a numpy array/tensor. ______________________________________________________________________ - + ## function `fill_from_kwargs` @@ -40,7 +40,7 @@ Fill a parameter set structure from kwargs parameters. ______________________________________________________________________ - + ## class `QuantizationOptions` @@ -48,7 +48,7 @@ Options for quantization. Determines the number of bits for quantization and the method of quantization of the values. Signed quantization allows negative quantized values. Symmetric quantization assumes the float values are distributed symmetrically around x=0 and assigns signed values around 0 to the float values. QAT (quantization aware training) quantization assumes the values are already quantized, taking a discrete set of values, and assigns these values to integers, computing only the scale. - + ### method `__init__` @@ -73,7 +73,7 @@ Get a copy of the quantization parameters. ______________________________________________________________________ - + ### method `copy_opts` @@ -89,7 +89,7 @@ Copy the options from a different structure. ______________________________________________________________________ - + ### method `dump` @@ -105,7 +105,7 @@ Dump itself to a file. ______________________________________________________________________ - + ### method `dump_dict` @@ -121,7 +121,7 @@ Dump itself to a dict. ______________________________________________________________________ - + ### method `dumps` @@ -137,7 +137,7 @@ Dump itself to a string. ______________________________________________________________________ - + ### method `is_equal` @@ -158,7 +158,7 @@ Compare two quantization options sets.
______________________________________________________________________ - + ### method `load_dict` @@ -178,7 +178,7 @@ Load itself from a string. ______________________________________________________________________ - + ## class `MinMaxQuantizationStats` @@ -186,7 +186,7 @@ Calibration set statistics. This class stores the statistics for the calibration set or for a calibration data batch. Currently we only store min/max to determine the quantization range. The min/max are computed from the calibration set. - + ### method `__init__` @@ -210,7 +210,7 @@ Get a copy of the calibration set statistics. ______________________________________________________________________ - + ### method `check_is_uniform_quantized` @@ -232,7 +232,7 @@ Determines whether the values represented by this QuantizedArray show a quantize ______________________________________________________________________ - + ### method `compute_quantization_stats` @@ -248,7 +248,7 @@ Compute the calibration set quantization statistics. ______________________________________________________________________ - + ### method `copy_stats` @@ -264,7 +264,7 @@ Copy the statistics from a different structure. ______________________________________________________________________ - + ### method `dump` @@ -280,7 +280,7 @@ Dump itself to a file. ______________________________________________________________________ - + ### method `dump_dict` @@ -296,7 +296,7 @@ Dump itself to a dict. ______________________________________________________________________ - + ### method `dumps` @@ -312,7 +312,7 @@ Dump itself to a string. ______________________________________________________________________ - + ### method `load_dict` @@ -332,7 +332,7 @@ Load itself from a string. ______________________________________________________________________ - + ## class `UniformQuantizationParameters` @@ -340,7 +340,7 @@ Quantization parameters for uniform quantization. This class stores the parameters used for quantizing real values to discrete integer values. The parameters are computed from quantization options and quantization statistics. - + ### method `__init__` @@ -364,7 +364,7 @@ Get a copy of the quantization parameters. ______________________________________________________________________ - + ### method `compute_quantization_parameters` @@ -384,7 +384,7 @@ Compute the quantization parameters. ______________________________________________________________________ - + ### method `copy_params` @@ -400,7 +400,7 @@ Copy the parameters from a different structure. ______________________________________________________________________ - + ### method `dump` @@ -416,7 +416,7 @@ Dump itself to a file. ______________________________________________________________________ - + ### method `dump_dict` @@ -432,7 +432,7 @@ Dump itself to a dict. ______________________________________________________________________ - + ### method `dumps` @@ -448,7 +448,7 @@ Dump itself to a string. ______________________________________________________________________ - + ### method `load_dict` @@ -468,7 +468,7 @@ Load itself from a string. 
______________________________________________________________________ - + ## class `UniformQuantizer` @@ -482,7 +482,7 @@ Contains all information necessary for uniform quantization and provides quantiz - `stats` (Optional\[MinMaxQuantizationStats\]): Quantization batch statistics set - `params` (Optional\[UniformQuantizationParameters\]): Quantization parameters set (scale, zero-point) - + ### method `__init__` @@ -527,7 +527,7 @@ Get a copy of the calibration set statistics. ______________________________________________________________________ - + ### method `check_is_uniform_quantized` @@ -549,7 +549,7 @@ Determines whether the values represented by this QuantizedArray show a quantize ______________________________________________________________________ - + ### method `compute_quantization_parameters` @@ -569,7 +569,7 @@ Compute the quantization parameters. ______________________________________________________________________ - + ### method `compute_quantization_stats` @@ -585,7 +585,7 @@ Compute the calibration set quantization statistics. ______________________________________________________________________ - + ### method `copy_opts` @@ -601,7 +601,7 @@ Copy the options from a different structure. ______________________________________________________________________ - + ### method `copy_params` @@ -617,7 +617,7 @@ Copy the parameters from a different structure. ______________________________________________________________________ - + ### method `copy_stats` @@ -633,12 +633,12 @@ Copy the statistics from a different structure. ______________________________________________________________________ - + ### method `dequant` ```python -dequant(qvalues: 'ndarray') → Union[Any, ndarray] +dequant(qvalues: 'ndarray') → Union[ndarray, Tracer] ``` De-quantize values. @@ -649,11 +649,11 @@ De-quantize values. **Returns:** -- `Union[Any, numpy.ndarray]`: De-quantized float values. +- `Union[numpy.ndarray, Tracer]`: De-quantized float values. ______________________________________________________________________ - + ### method `dump` @@ -669,7 +669,7 @@ Dump itself to a file. ______________________________________________________________________ - + ### method `dump_dict` @@ -685,7 +685,7 @@ Dump itself to a dict. ______________________________________________________________________ - + ### method `dumps` @@ -701,7 +701,7 @@ Dump itself to a string. ______________________________________________________________________ - + ### method `is_equal` @@ -722,7 +722,7 @@ Compare two quantization options sets. ______________________________________________________________________ - + ### method `load_dict` @@ -742,7 +742,7 @@ Load itself from a string. ______________________________________________________________________ - + ### method `quant` @@ -762,7 +762,7 @@ Quantize values. ______________________________________________________________________ - + ## class `QuantizedArray` @@ -782,7 +782,7 @@ See https://arxiv.org/abs/1712.05877. - `params` (Optional\[UniformQuantizationParameters\]): Quantization parameters set (scale, zero-point) - `kwargs`: Any member of the options, stats, params sets as a key-value pair. The parameter sets need to be completely parametrized if their members appear in kwargs. - + ### method `__init__` @@ -800,7 +800,7 @@ __init__( ______________________________________________________________________ - + ### method `dequant` @@ -816,7 +816,7 @@ De-quantize self.qvalues. 
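To make the quant/de-quant round trip concrete, here is a minimal sketch (the values are arbitrary and the reconstruction is only exact up to quantization error; `QuantizedArray` derives its quantization parameters from the given values at construction time):

```python
import numpy
from concrete.ml.quantization.quantizers import QuantizedArray

values = numpy.array([0.1, -0.3, 0.7, 1.2])

# 8-bit quantization of the float values, parameters derived from their range
q_arr = QuantizedArray(8, values)

print(q_arr.qvalues)    # integer representation
print(q_arr.dequant())  # approximate reconstruction of the original floats
```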
______________________________________________________________________ - + ### method `dump` @@ -832,7 +832,7 @@ Dump itself to a file. ______________________________________________________________________ - + ### method `dump_dict` @@ -848,7 +848,7 @@ Dump itself to a dict. ______________________________________________________________________ - + ### method `dumps` @@ -864,7 +864,7 @@ Dump itself to a string. ______________________________________________________________________ - + ### method `load_dict` @@ -884,7 +884,7 @@ Load itself from a string. ______________________________________________________________________ - + ### method `quant` @@ -900,7 +900,7 @@ Quantize self.values. ______________________________________________________________________ - + ### method `update_quantized_values` @@ -920,7 +920,7 @@ Update qvalues to get their corresponding values using the related quantized par ______________________________________________________________________ - + ### method `update_values` diff --git a/docs/developer-guide/api/concrete.ml.sklearn.base.md b/docs/developer-guide/api/concrete.ml.sklearn.base.md index 80ddbfd9b..7caaa9516 100644 --- a/docs/developer-guide/api/concrete.ml.sklearn.base.md +++ b/docs/developer-guide/api/concrete.ml.sklearn.base.md @@ -14,7 +14,7 @@ Base classes for all estimators. ______________________________________________________________________ - + ## class `BaseEstimator` @@ -26,7 +26,7 @@ This class does not inherit from sklearn.base.BaseEstimator as it creates some c - `_is_a_public_cml_model` (bool): Private attribute indicating if the class is a public model (as opposed to base or mixin classes). - + ### method `__init__` @@ -84,7 +84,7 @@ Is None if the model is not fitted. ______________________________________________________________________ - + ### method `check_model_is_compiled` @@ -100,7 +100,7 @@ Check if the model is compiled. ______________________________________________________________________ - + ### method `check_model_is_fitted` @@ -116,7 +116,7 @@ Check if the model is fitted. ______________________________________________________________________ - + ### method `compile` @@ -150,7 +150,7 @@ Compile the model. ______________________________________________________________________ - + ### method `dequantize_output` @@ -172,7 +172,7 @@ This step ensures that the fit method has been called. ______________________________________________________________________ - + ### method `dump` @@ -188,7 +188,7 @@ Dump itself to a file. ______________________________________________________________________ - + ### method `dump_dict` @@ -204,7 +204,7 @@ Dump the object as a dict. ______________________________________________________________________ - + ### method `dumps` @@ -220,7 +220,7 @@ Dump itself to a string. ______________________________________________________________________ - + ### method `fit` @@ -243,7 +243,7 @@ The fitted estimator. ______________________________________________________________________ - + ### method `fit_benchmark` @@ -270,7 +270,7 @@ The Concrete ML and float equivalent fitted estimators. ______________________________________________________________________ - + ### method `get_sklearn_params` @@ -292,7 +292,7 @@ This method is used to instantiate a scikit-learn model using the Concrete ML mo ______________________________________________________________________ - + ### classmethod `load_dict` @@ -312,7 +312,7 @@ Load itself from a dict. 
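Taken together, these methods follow the usual fit / compile / predict cycle shared by the Concrete ML estimators. A minimal sketch of that workflow, using `LogisticRegression` from `concrete.ml.sklearn` as one concrete subclass (the data set, `n_bits` value and simulation mode are illustrative choices):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from concrete.ml.sklearn import LogisticRegression

X, y = make_classification(n_samples=200, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train in the clear, then compile an FHE circuit using the training data as input-set
model = LogisticRegression(n_bits=8)
model.fit(X_train, y_train)
model.compile(X_train)

# Run inference with simulation; use fhe="execute" for actual encrypted inference
y_pred = model.predict(X_test, fhe="simulate")
```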
______________________________________________________________________ - + ### method `post_processing` @@ -336,7 +336,7 @@ For some simple models such a linear regression, there is no post-processing ste ______________________________________________________________________ - + ### method `predict` @@ -360,7 +360,7 @@ Predict values for X, in FHE or in the clear. ______________________________________________________________________ - + ### method `quantize_input` @@ -382,7 +382,7 @@ This step ensures that the fit method has been called. ______________________________________________________________________ - + ## class `BaseClassifier` @@ -390,7 +390,7 @@ Base class for linear and tree-based classifiers in Concrete ML. This class inherits from BaseEstimator and modifies some of its methods in order to align them with classifier behaviors. This notably include applying a sigmoid/softmax post-processing to the predicted values as well as handling a mapping of classes in case they are not ordered. - + ### method `__init__` @@ -472,7 +472,7 @@ Using this attribute is deprecated. ______________________________________________________________________ - + ### method `check_model_is_compiled` @@ -488,7 +488,7 @@ Check if the model is compiled. ______________________________________________________________________ - + ### method `check_model_is_fitted` @@ -504,7 +504,7 @@ Check if the model is fitted. ______________________________________________________________________ - + ### method `compile` @@ -538,7 +538,7 @@ Compile the model. ______________________________________________________________________ - + ### method `dequantize_output` @@ -560,7 +560,7 @@ This step ensures that the fit method has been called. ______________________________________________________________________ - + ### method `dump` @@ -576,7 +576,7 @@ Dump itself to a file. ______________________________________________________________________ - + ### method `dump_dict` @@ -592,7 +592,7 @@ Dump the object as a dict. ______________________________________________________________________ - + ### method `dumps` @@ -608,7 +608,7 @@ Dump itself to a string. ______________________________________________________________________ - + ### method `fit` @@ -618,7 +618,7 @@ fit(X: 'Data', y: 'Target', **fit_parameters) ______________________________________________________________________ - + ### method `fit_benchmark` @@ -645,7 +645,7 @@ The Concrete ML and float equivalent fitted estimators. ______________________________________________________________________ - + ### method `get_sklearn_params` @@ -667,7 +667,7 @@ This method is used to instantiate a scikit-learn model using the Concrete ML mo ______________________________________________________________________ - + ### classmethod `load_dict` @@ -687,7 +687,7 @@ Load itself from a dict. ______________________________________________________________________ - + ### method `post_processing` @@ -697,7 +697,7 @@ post_processing(y_preds: 'ndarray') → ndarray ______________________________________________________________________ - + ### method `predict` @@ -710,7 +710,7 @@ predict( ______________________________________________________________________ - + ### method `predict_proba` @@ -734,7 +734,7 @@ Predict class probabilities. ______________________________________________________________________ - + ### method `quantize_input` @@ -756,13 +756,13 @@ This step ensures that the fit method has been called. 
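For classifiers, `predict_proba` exposes the post-processed per-class scores while `predict` returns the corresponding labels. A short hedged sketch using `DecisionTreeClassifier` from `concrete.ml.sklearn` (data and parameter values are illustrative):

```python
from sklearn.datasets import make_classification
from concrete.ml.sklearn import DecisionTreeClassifier

X, y = make_classification(n_samples=200, n_features=10, random_state=0)

clf = DecisionTreeClassifier(n_bits=6)
clf.fit(X, y)
clf.compile(X)

proba = clf.predict_proba(X[:5], fhe="simulate")  # per-class scores
labels = clf.predict(X[:5], fhe="simulate")       # predicted class labels
```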
______________________________________________________________________ - + ## class `QuantizedTorchEstimatorMixin` Mixin that provides quantization for a torch module and follows the Estimator API. - + ### method `__init__` @@ -838,7 +838,7 @@ Get the output quantizers. ______________________________________________________________________ - + ### method `check_model_is_compiled` @@ -854,7 +854,7 @@ Check if the model is compiled. ______________________________________________________________________ - + ### method `check_model_is_fitted` @@ -870,7 +870,7 @@ Check if the model is fitted. ______________________________________________________________________ - + ### method `compile` @@ -888,17 +888,17 @@ compile( ______________________________________________________________________ - + ### method `dequantize_output` ```python -dequantize_output(q_y_preds: 'ndarray') → ndarray +dequantize_output(*q_y_preds: 'ndarray') → ndarray ``` ______________________________________________________________________ - + ### method `dump` @@ -914,7 +914,7 @@ Dump itself to a file. ______________________________________________________________________ - + ### method `dump_dict` @@ -930,7 +930,7 @@ Dump the object as a dict. ______________________________________________________________________ - + ### method `dumps` @@ -946,7 +946,7 @@ Dump itself to a string. ______________________________________________________________________ - + ### method `fit` @@ -971,7 +971,7 @@ The fitted estimator. ______________________________________________________________________ - + ### method `fit_benchmark` @@ -1002,7 +1002,7 @@ The Concrete ML and equivalent skorch fitted estimators. ______________________________________________________________________ - + ### method `get_params` @@ -1024,7 +1024,7 @@ This method is overloaded in order to make sure that auto-computed parameters ar ______________________________________________________________________ - + ### method `get_sklearn_params` @@ -1034,7 +1034,7 @@ get_sklearn_params(deep: 'bool' = True) → Dict ______________________________________________________________________ - + ### classmethod `load_dict` @@ -1054,7 +1054,7 @@ Load itself from a dict. ______________________________________________________________________ - + ### method `post_processing` @@ -1064,7 +1064,7 @@ post_processing(y_preds: 'ndarray') → ndarray ______________________________________________________________________ - + ### method `predict` @@ -1088,7 +1088,7 @@ Predict values for X, in FHE or in the clear. ______________________________________________________________________ - + ### method `prune` @@ -1116,7 +1116,7 @@ A new pruned copy of the Neural Network model. ______________________________________________________________________ - + ### method `quantize_input` @@ -1126,7 +1126,7 @@ quantize_input(X: 'ndarray') → ndarray ______________________________________________________________________ - + ## class `BaseTreeEstimatorMixin` @@ -1134,7 +1134,7 @@ Mixin class for tree-based estimators. This class inherits from sklearn.base.BaseEstimator in order to have access to scikit-learn's `get_params` and `set_params` methods. - + ### method `__init__` @@ -1194,7 +1194,7 @@ Is None if the model is not fitted. ______________________________________________________________________ - + ### method `check_model_is_compiled` @@ -1210,7 +1210,7 @@ Check if the model is compiled. 
______________________________________________________________________ - + ### method `check_model_is_fitted` @@ -1226,7 +1226,7 @@ Check if the model is fitted. ______________________________________________________________________ - + ### method `compile` @@ -1236,7 +1236,7 @@ compile(*args, **kwargs) → Circuit ______________________________________________________________________ - + ### method `dequantize_output` @@ -1246,7 +1246,7 @@ dequantize_output(q_y_preds: 'ndarray') → ndarray ______________________________________________________________________ - + ### method `dump` @@ -1262,7 +1262,7 @@ Dump itself to a file. ______________________________________________________________________ - + ### method `dump_dict` @@ -1278,7 +1278,7 @@ Dump the object as a dict. ______________________________________________________________________ - + ### method `dumps` @@ -1294,7 +1294,7 @@ Dump itself to a string. ______________________________________________________________________ - + ### method `fit` @@ -1304,7 +1304,7 @@ fit(X: 'Data', y: 'Target', **fit_parameters) ______________________________________________________________________ - + ### method `fit_benchmark` @@ -1331,7 +1331,7 @@ The Concrete ML and float equivalent fitted estimators. ______________________________________________________________________ - + ### method `get_sklearn_params` @@ -1353,7 +1353,7 @@ This method is used to instantiate a scikit-learn model using the Concrete ML mo ______________________________________________________________________ - + ### classmethod `load_dict` @@ -1373,7 +1373,7 @@ Load itself from a dict. ______________________________________________________________________ - + ### method `post_processing` @@ -1383,7 +1383,7 @@ post_processing(y_preds: 'ndarray') → ndarray ______________________________________________________________________ - + ### method `predict` @@ -1396,7 +1396,7 @@ predict( ______________________________________________________________________ - + ### method `quantize_input` @@ -1406,7 +1406,7 @@ quantize_input(X: 'ndarray') → ndarray ______________________________________________________________________ - + ## class `BaseTreeRegressorMixin` @@ -1414,7 +1414,7 @@ Mixin class for tree-based regressors. This class is used to create a tree-based regressor class that inherits from sklearn.base.RegressorMixin, which essentially gives access to scikit-learn's `score` method for regressors. - + ### method `__init__` @@ -1474,7 +1474,7 @@ Is None if the model is not fitted. ______________________________________________________________________ - + ### method `check_model_is_compiled` @@ -1490,7 +1490,7 @@ Check if the model is compiled. ______________________________________________________________________ - + ### method `check_model_is_fitted` @@ -1506,7 +1506,7 @@ Check if the model is fitted. ______________________________________________________________________ - + ### method `compile` @@ -1516,7 +1516,7 @@ compile(*args, **kwargs) → Circuit ______________________________________________________________________ - + ### method `dequantize_output` @@ -1526,7 +1526,7 @@ dequantize_output(q_y_preds: 'ndarray') → ndarray ______________________________________________________________________ - + ### method `dump` @@ -1542,7 +1542,7 @@ Dump itself to a file. ______________________________________________________________________ - + ### method `dump_dict` @@ -1558,7 +1558,7 @@ Dump the object as a dict. 
______________________________________________________________________ - + ### method `dumps` @@ -1574,7 +1574,7 @@ Dump itself to a string. ______________________________________________________________________ - + ### method `fit` @@ -1584,7 +1584,7 @@ fit(X: 'Data', y: 'Target', **fit_parameters) ______________________________________________________________________ - + ### method `fit_benchmark` @@ -1611,7 +1611,7 @@ The Concrete ML and float equivalent fitted estimators. ______________________________________________________________________ - + ### method `get_sklearn_params` @@ -1633,7 +1633,7 @@ This method is used to instantiate a scikit-learn model using the Concrete ML mo ______________________________________________________________________ - + ### classmethod `load_dict` @@ -1653,7 +1653,7 @@ Load itself from a dict. ______________________________________________________________________ - + ### method `post_processing` @@ -1663,7 +1663,7 @@ post_processing(y_preds: 'ndarray') → ndarray ______________________________________________________________________ - + ### method `predict` @@ -1676,7 +1676,7 @@ predict( ______________________________________________________________________ - + ### method `quantize_input` @@ -1686,7 +1686,7 @@ quantize_input(X: 'ndarray') → ndarray ______________________________________________________________________ - + ## class `BaseTreeClassifierMixin` @@ -1696,7 +1696,7 @@ This class is used to create a tree-based classifier class that inherits from sk Additionally, this class adjusts some of the tree-based base class's methods in order to make them compliant with classification workflows. - + ### method `__init__` @@ -1780,7 +1780,7 @@ Using this attribute is deprecated. ______________________________________________________________________ - + ### method `check_model_is_compiled` @@ -1796,7 +1796,7 @@ Check if the model is compiled. ______________________________________________________________________ - + ### method `check_model_is_fitted` @@ -1812,7 +1812,7 @@ Check if the model is fitted. ______________________________________________________________________ - + ### method `compile` @@ -1822,7 +1822,7 @@ compile(*args, **kwargs) → Circuit ______________________________________________________________________ - + ### method `dequantize_output` @@ -1832,7 +1832,7 @@ dequantize_output(q_y_preds: 'ndarray') → ndarray ______________________________________________________________________ - + ### method `dump` @@ -1848,7 +1848,7 @@ Dump itself to a file. ______________________________________________________________________ - + ### method `dump_dict` @@ -1864,7 +1864,7 @@ Dump the object as a dict. ______________________________________________________________________ - + ### method `dumps` @@ -1880,7 +1880,7 @@ Dump itself to a string. ______________________________________________________________________ - + ### method `fit` @@ -1890,7 +1890,7 @@ fit(X: 'Data', y: 'Target', **fit_parameters) ______________________________________________________________________ - + ### method `fit_benchmark` @@ -1917,7 +1917,7 @@ The Concrete ML and float equivalent fitted estimators. ______________________________________________________________________ - + ### method `get_sklearn_params` @@ -1939,7 +1939,7 @@ This method is used to instantiate a scikit-learn model using the Concrete ML mo ______________________________________________________________________ - + ### classmethod `load_dict` @@ -1959,7 +1959,7 @@ Load itself from a dict. 
______________________________________________________________________ - + ### method `post_processing` @@ -1969,7 +1969,7 @@ post_processing(y_preds: 'ndarray') → ndarray ______________________________________________________________________ - + ### method `predict` @@ -1982,7 +1982,7 @@ predict( ______________________________________________________________________ - + ### method `predict_proba` @@ -2006,7 +2006,7 @@ Predict class probabilities. ______________________________________________________________________ - + ### method `quantize_input` @@ -2016,7 +2016,7 @@ quantize_input(X: 'ndarray') → ndarray ______________________________________________________________________ - + ## class `SklearnLinearModelMixin` @@ -2024,7 +2024,7 @@ A Mixin class for sklearn linear models with FHE. This class inherits from sklearn.base.BaseEstimator in order to have access to scikit-learn's `get_params` and `set_params` methods. - + ### method `__init__` @@ -2086,7 +2086,7 @@ Is None if the model is not fitted. ______________________________________________________________________ - + ### method `check_model_is_compiled` @@ -2102,7 +2102,7 @@ Check if the model is compiled. ______________________________________________________________________ - + ### method `check_model_is_fitted` @@ -2118,7 +2118,7 @@ Check if the model is fitted. ______________________________________________________________________ - + ### method `compile` @@ -2152,7 +2152,7 @@ Compile the model. ______________________________________________________________________ - + ### method `dequantize_output` @@ -2162,7 +2162,7 @@ dequantize_output(q_y_preds: 'ndarray') → ndarray ______________________________________________________________________ - + ### method `dump` @@ -2178,7 +2178,7 @@ Dump itself to a file. ______________________________________________________________________ - + ### method `dump_dict` @@ -2194,7 +2194,7 @@ Dump the object as a dict. ______________________________________________________________________ - + ### method `dumps` @@ -2210,7 +2210,7 @@ Dump itself to a string. ______________________________________________________________________ - + ### method `fit` @@ -2220,7 +2220,7 @@ fit(X: 'Data', y: 'Target', **fit_parameters) ______________________________________________________________________ - + ### method `fit_benchmark` @@ -2247,7 +2247,7 @@ The Concrete ML and float equivalent fitted estimators. ______________________________________________________________________ - + ### classmethod `from_sklearn_model` @@ -2274,7 +2274,7 @@ The FHE-compliant fitted model. ______________________________________________________________________ - + ### method `get_sklearn_params` @@ -2296,7 +2296,7 @@ This method is used to instantiate a scikit-learn model using the Concrete ML mo ______________________________________________________________________ - + ### classmethod `load_dict` @@ -2316,7 +2316,7 @@ Load itself from a dict. ______________________________________________________________________ - + ### method `post_processing` @@ -2340,7 +2340,7 @@ For some simple models such as linear regression, there is no post-processing ste ______________________________________________________________________ - + ### method `predict` @@ -2364,7 +2364,7 @@ Predict values for X, in FHE or in the clear. 
______________________________________________________________________ - + ### method `quantize_input` @@ -2374,7 +2374,7 @@ quantize_input(X: 'ndarray') → ndarray ______________________________________________________________________ - + ## class `SklearnLinearRegressorMixin` @@ -2382,7 +2382,7 @@ A Mixin class for sklearn linear regressors with FHE. This class is used to create a linear regressor class that inherits from sklearn.base.RegressorMixin, which essentially gives access to scikit-learn's `score` method for regressors. - + ### method `__init__` @@ -2444,7 +2444,7 @@ Is None if the model is not fitted. ______________________________________________________________________ - + ### method `check_model_is_compiled` @@ -2460,7 +2460,7 @@ Check if the model is compiled. ______________________________________________________________________ - + ### method `check_model_is_fitted` @@ -2476,7 +2476,7 @@ Check if the model is fitted. ______________________________________________________________________ - + ### method `compile` @@ -2510,7 +2510,7 @@ Compile the model. ______________________________________________________________________ - + ### method `dequantize_output` @@ -2520,7 +2520,7 @@ dequantize_output(q_y_preds: 'ndarray') → ndarray ______________________________________________________________________ - + ### method `dump` @@ -2536,7 +2536,7 @@ Dump itself to a file. ______________________________________________________________________ - + ### method `dump_dict` @@ -2552,7 +2552,7 @@ Dump the object as a dict. ______________________________________________________________________ - + ### method `dumps` @@ -2568,7 +2568,7 @@ Dump itself to a string. ______________________________________________________________________ - + ### method `fit` @@ -2578,7 +2578,7 @@ fit(X: 'Data', y: 'Target', **fit_parameters) ______________________________________________________________________ - + ### method `fit_benchmark` @@ -2605,7 +2605,7 @@ The Concrete ML and float equivalent fitted estimators. ______________________________________________________________________ - + ### classmethod `from_sklearn_model` @@ -2632,7 +2632,7 @@ The FHE-compliant fitted model. ______________________________________________________________________ - + ### method `get_sklearn_params` @@ -2654,7 +2654,7 @@ This method is used to instantiate a scikit-learn model using the Concrete ML mo ______________________________________________________________________ - + ### classmethod `load_dict` @@ -2674,7 +2674,7 @@ Load itself from a dict. ______________________________________________________________________ - + ### method `post_processing` @@ -2698,7 +2698,7 @@ For some simple models such as linear regression, there is no post-processing ste ______________________________________________________________________ - + ### method `predict` @@ -2722,7 +2722,7 @@ Predict values for X, in FHE or in the clear. ______________________________________________________________________ - + ### method `quantize_input` @@ -2732,7 +2732,7 @@ quantize_input(X: 'ndarray') → ndarray ______________________________________________________________________ - + ## class `SklearnSGDRegressorMixin` @@ -2740,7 +2740,7 @@ A Mixin class for sklearn SGD regressors with FHE. This class is used to create an SGD regressor class that can be exported to ONNX using Hummingbird. - + ### method `__init__` @@ -2802,7 +2802,7 @@ Is None if the model is not fitted. 
______________________________________________________________________ - + ### method `check_model_is_compiled` @@ -2818,7 +2818,7 @@ Check if the model is compiled. ______________________________________________________________________ - + ### method `check_model_is_fitted` @@ -2834,7 +2834,7 @@ Check if the model is fitted. ______________________________________________________________________ - + ### method `compile` @@ -2868,7 +2868,7 @@ Compile the model. ______________________________________________________________________ - + ### method `dequantize_output` @@ -2878,7 +2878,7 @@ dequantize_output(q_y_preds: 'ndarray') → ndarray ______________________________________________________________________ - + ### method `dump` @@ -2894,7 +2894,7 @@ Dump itself to a file. ______________________________________________________________________ - + ### method `dump_dict` @@ -2910,7 +2910,7 @@ Dump the object as a dict. ______________________________________________________________________ - + ### method `dumps` @@ -2926,7 +2926,7 @@ Dump itself to a string. ______________________________________________________________________ - + ### method `fit` @@ -2936,7 +2936,7 @@ fit(X: 'Data', y: 'Target', **fit_parameters) ______________________________________________________________________ - + ### method `fit_benchmark` @@ -2963,7 +2963,7 @@ The Concrete ML and float equivalent fitted estimators. ______________________________________________________________________ - + ### classmethod `from_sklearn_model` @@ -2990,7 +2990,7 @@ The FHE-compliant fitted model. ______________________________________________________________________ - + ### method `get_sklearn_params` @@ -3012,7 +3012,7 @@ This method is used to instantiate a scikit-learn model using the Concrete ML mo ______________________________________________________________________ - + ### classmethod `load_dict` @@ -3032,7 +3032,7 @@ Load itself from a dict. ______________________________________________________________________ - + ### method `post_processing` @@ -3056,7 +3056,7 @@ For some simple models such as linear regression, there is no post-processing ste ______________________________________________________________________ - + ### method `predict` @@ -3080,7 +3080,7 @@ Predict values for X, in FHE or in the clear. ______________________________________________________________________ - + ### method `quantize_input` @@ -3090,7 +3090,7 @@ quantize_input(X: 'ndarray') → ndarray ______________________________________________________________________ - + ## class `SklearnLinearClassifierMixin` @@ -3100,7 +3100,7 @@ A Mixin class for sklearn linear classifiers with FHE. This class is used to create a linear classifier class that inherits from sklear Additionally, this class adjusts some of the base class's methods in order to make them compliant with classification workflows. - + ### method `__init__` @@ -3186,7 +3186,7 @@ Using this attribute is deprecated. ______________________________________________________________________ - + ### method `check_model_is_compiled` @@ -3202,7 +3202,7 @@ Check if the model is compiled. ______________________________________________________________________ - + ### method `check_model_is_fitted` @@ -3218,7 +3218,7 @@ Check if the model is fitted. ______________________________________________________________________ - + ### method `compile` @@ -3252,7 +3252,7 @@ Compile the model. ______________________________________________________________________ - + ### method `decision_function` @@ -3276,7 +3276,7 @@ Predict confidence scores. 
______________________________________________________________________ - + ### method `dequantize_output` @@ -3286,7 +3286,7 @@ dequantize_output(q_y_preds: 'ndarray') → ndarray ______________________________________________________________________ - + ### method `dump` @@ -3302,7 +3302,7 @@ Dump itself to a file. ______________________________________________________________________ - + ### method `dump_dict` @@ -3318,7 +3318,7 @@ Dump the object as a dict. ______________________________________________________________________ - + ### method `dumps` @@ -3334,7 +3334,7 @@ Dump itself to a string. ______________________________________________________________________ - + ### method `fit` @@ -3344,7 +3344,7 @@ fit(X: 'Data', y: 'Target', **fit_parameters) ______________________________________________________________________ - + ### method `fit_benchmark` @@ -3371,7 +3371,7 @@ The Concrete ML and float equivalent fitted estimators. ______________________________________________________________________ - + ### classmethod `from_sklearn_model` @@ -3398,7 +3398,7 @@ The FHE-compliant fitted model. ______________________________________________________________________ - + ### method `get_sklearn_params` @@ -3420,7 +3420,7 @@ This method is used to instantiate a scikit-learn model using the Concrete ML mo ______________________________________________________________________ - + ### classmethod `load_dict` @@ -3440,7 +3440,7 @@ Load itself from a dict. ______________________________________________________________________ - + ### method `post_processing` @@ -3450,7 +3450,7 @@ post_processing(y_preds: 'ndarray') → ndarray ______________________________________________________________________ - + ### method `predict` @@ -3463,7 +3463,7 @@ predict( ______________________________________________________________________ - + ### method `predict_proba` @@ -3476,7 +3476,7 @@ predict_proba( ______________________________________________________________________ - + ### method `quantize_input` @@ -3486,7 +3486,7 @@ quantize_input(X: 'ndarray') → ndarray ______________________________________________________________________ - + ## class `SklearnKNeighborsMixin` @@ -3494,7 +3494,7 @@ A Mixin class for sklearn KNeighbors models with FHE. This class inherits from sklearn.base.BaseEstimator in order to have access to scikit-learn's `get_params` and `set_params` methods. - + ### method `__init__` @@ -3554,7 +3554,7 @@ Is None if the model is not fitted. ______________________________________________________________________ - + ### method `check_model_is_compiled` @@ -3570,7 +3570,7 @@ Check if the model is compiled. ______________________________________________________________________ - + ### method `check_model_is_fitted` @@ -3586,7 +3586,7 @@ Check if the model is fitted. ______________________________________________________________________ - + ### method `compile` @@ -3620,7 +3620,7 @@ Compile the model. ______________________________________________________________________ - + ### method `dequantize_output` @@ -3630,7 +3630,7 @@ dequantize_output(q_y_preds: 'ndarray') → ndarray ______________________________________________________________________ - + ### method `dump` @@ -3646,7 +3646,7 @@ Dump itself to a file. ______________________________________________________________________ - + ### method `dump_dict` @@ -3662,7 +3662,7 @@ Dump the object as a dict. ______________________________________________________________________ - + ### method `dumps` @@ -3678,7 +3678,7 @@ Dump itself to a string. 
______________________________________________________________________ - + ### method `fit` @@ -3688,7 +3688,7 @@ fit(X: 'Data', y: 'Target', **fit_parameters) ______________________________________________________________________ - + ### method `fit_benchmark` @@ -3715,7 +3715,7 @@ The Concrete ML and float equivalent fitted estimators. ______________________________________________________________________ - + ### method `get_sklearn_params` @@ -3737,7 +3737,7 @@ This method is used to instantiate a scikit-learn model using the Concrete ML mo ______________________________________________________________________ - + ### method `get_topk_labels` @@ -3761,7 +3761,7 @@ Return the K-nearest labels of each point. ______________________________________________________________________ - + ### classmethod `load_dict` @@ -3781,7 +3781,7 @@ Load itself from a dict. ______________________________________________________________________ - + ### method `majority_vote` @@ -3801,7 +3801,7 @@ Determine the most common class among nearest neighbors for each query. ______________________________________________________________________ - + ### method `post_processing` @@ -3823,7 +3823,7 @@ For KNN, the de-quantization step is not required because \_inference returns t ______________________________________________________________________ - + ### method `predict` @@ -3836,7 +3836,7 @@ predict( ______________________________________________________________________ - + ### method `quantize_input` @@ -3846,7 +3846,7 @@ quantize_input(X: 'ndarray') → ndarray ______________________________________________________________________ - + ## class `SklearnKNeighborsClassifierMixin` @@ -3854,7 +3854,7 @@ A Mixin class for sklearn KNeighbors classifiers with FHE. This class is used to create a KNeighbors classifier class that inherits from SklearnKNeighborsMixin and sklearn.base.ClassifierMixin. By inheriting from sklearn.base.ClassifierMixin, it allows this class to be recognized as a classifier. - + ### method `__init__` @@ -3914,7 +3914,7 @@ Is None if the model is not fitted. ______________________________________________________________________ - + ### method `check_model_is_compiled` @@ -3930,7 +3930,7 @@ Check if the model is compiled. ______________________________________________________________________ - + ### method `check_model_is_fitted` @@ -3946,7 +3946,7 @@ Check if the model is fitted. ______________________________________________________________________ - + ### method `compile` @@ -3980,7 +3980,7 @@ Compile the model. ______________________________________________________________________ - + ### method `dequantize_output` @@ -3990,7 +3990,7 @@ dequantize_output(q_y_preds: 'ndarray') → ndarray ______________________________________________________________________ - + ### method `dump` @@ -4006,7 +4006,7 @@ Dump itself to a file. ______________________________________________________________________ - + ### method `dump_dict` @@ -4022,7 +4022,7 @@ Dump the object as a dict. ______________________________________________________________________ - + ### method `dumps` @@ -4038,7 +4038,7 @@ Dump itself to a string. ______________________________________________________________________ - + ### method `fit` @@ -4048,7 +4048,7 @@ fit(X: 'Data', y: 'Target', **fit_parameters) ______________________________________________________________________ - + ### method `fit_benchmark` @@ -4075,7 +4075,7 @@ The Concrete ML and float equivalent fitted estimators. 
______________________________________________________________________ - + ### method `get_sklearn_params` @@ -4097,7 +4097,7 @@ This method is used to instantiate a scikit-learn model using the Concrete ML mo ______________________________________________________________________ - + ### method `get_topk_labels` @@ -4121,7 +4121,7 @@ Return the K-nearest labels of each point. ______________________________________________________________________ - + ### classmethod `load_dict` @@ -4141,7 +4141,7 @@ Load itself from a dict. ______________________________________________________________________ - + ### method `majority_vote` @@ -4161,7 +4161,7 @@ Determine the most common class among nearest neighbors for each query. ______________________________________________________________________ - + ### method `post_processing` @@ -4183,7 +4183,7 @@ For KNN, the de-quantization step is not required because \_inference returns t ______________________________________________________________________ - + ### method `predict` @@ -4196,7 +4196,7 @@ predict( ______________________________________________________________________ - + ### method `quantize_input` diff --git a/docs/developer-guide/api/concrete.ml.sklearn.rf.md b/docs/developer-guide/api/concrete.ml.sklearn.rf.md index 47eeaeb9b..67167c5ab 100644 --- a/docs/developer-guide/api/concrete.ml.sklearn.rf.md +++ b/docs/developer-guide/api/concrete.ml.sklearn.rf.md @@ -146,13 +146,13 @@ post_processing(y_preds: ndarray) → ndarray ______________________________________________________________________ - + ## class `RandomForestRegressor` Implements the RandomForest regressor. - + ### method `__init__` @@ -229,7 +229,7 @@ Is None if the model is not fitted. ______________________________________________________________________ - + ### method `dump_dict` @@ -239,7 +239,7 @@ dump_dict() → Dict[str, Any] ______________________________________________________________________ - + ### classmethod `load_dict` diff --git a/docs/developer-guide/api/concrete.ml.sklearn.tree.md b/docs/developer-guide/api/concrete.ml.sklearn.tree.md index 160f6e127..310eba483 100644 --- a/docs/developer-guide/api/concrete.ml.sklearn.tree.md +++ b/docs/developer-guide/api/concrete.ml.sklearn.tree.md @@ -140,13 +140,13 @@ post_processing(y_preds: ndarray) → ndarray ______________________________________________________________________ - + ## class `DecisionTreeRegressor` Implements the sklearn DecisionTreeRegressor. - + ### method `__init__` @@ -217,7 +217,7 @@ Is None if the model is not fitted. ______________________________________________________________________ - + ### method `dump_dict` @@ -227,7 +227,7 @@ dump_dict() → Dict[str, Any] ______________________________________________________________________ - + ### classmethod `load_dict` diff --git a/docs/developer-guide/api/concrete.ml.sklearn.tree_to_numpy.md b/docs/developer-guide/api/concrete.ml.sklearn.tree_to_numpy.md index 43982a259..7d93409b0 100644 --- a/docs/developer-guide/api/concrete.ml.sklearn.tree_to_numpy.md +++ b/docs/developer-guide/api/concrete.ml.sklearn.tree_to_numpy.md @@ -10,10 +10,12 @@ Implements the conversion of a tree model to a numpy function. - **MAX_BITWIDTH_BACKWARD_COMPATIBLE** - **OPSET_VERSION_FOR_ONNX_EXPORT** +- **MSB_TO_KEEP_FOR_TREES** +- **MIN_CIRCUIT_THRESHOLD_FOR_TREES** ______________________________________________________________________ - + ## function `get_onnx_model` @@ -36,7 +38,7 @@ Create ONNX model with Hummingbird convert method. 
______________________________________________________________________ - + ## function `workaround_squeeze_node_xgboost` @@ -54,7 +56,7 @@ FIXME: https://github.com/zama-ai/concrete-ml-internal/issues/2778 The squeeze o ______________________________________________________________________ - + ## function `assert_add_node_and_constant_in_xgboost_regressor_graph` @@ -70,7 +72,7 @@ Assert if an Add node with a specific constant exists in the ONNX graph. ______________________________________________________________________ - + ## function `add_transpose_after_last_node` @@ -86,7 +88,7 @@ Add transpose after last node. ______________________________________________________________________ - + ## function `preprocess_tree_predictions` @@ -110,7 +112,7 @@ Apply post-processing from the graph. ______________________________________________________________________ - + ## function `tree_onnx_graph_preprocessing` @@ -133,7 +135,7 @@ Apply pre-processing onto the ONNX graph. ______________________________________________________________________ - + ## function `tree_values_preprocessing` @@ -160,7 +162,7 @@ Pre-process tree values. ______________________________________________________________________ - + ## function `tree_to_numpy` @@ -169,6 +171,7 @@ tree_to_numpy( model: Callable, x: ndarray, framework: str, + use_rounding: bool = True, output_n_bits: int = 8 ) → Tuple[Callable, List[UniformQuantizer], ModelProto] ``` @@ -179,6 +182,7 @@ Convert the tree inference to a numpy function using Hummingbird. - `model` (Callable): The tree model to convert. - `x` (numpy.ndarray): The input data. +- `use_rounding` (bool): This parameter is exclusively used for tree-based models. It determines whether the rounding feature is enabled or disabled. - `framework` (str): The framework from which the ONNX model is generated. - (options: 'xgboost', 'sklearn') - `output_n_bits` (int): The number of bits of the output. Defaults to 8. diff --git a/docs/developer-guide/api/concrete.ml.sklearn.xgb.md b/docs/developer-guide/api/concrete.ml.sklearn.xgb.md index c7322e60c..f2131b536 100644 --- a/docs/developer-guide/api/concrete.ml.sklearn.xgb.md +++ b/docs/developer-guide/api/concrete.ml.sklearn.xgb.md @@ -24,7 +24,7 @@ See https://xgboost.readthedocs.io/en/stable/python/python_api.html#module-xgboo __init__( n_bits: int = 6, max_depth: Optional[int] = 3, - learning_rate: Optional[float] = 0.1, + learning_rate: Optional[float] = None, n_estimators: Optional[int] = 20, objective: Optional[str] = 'binary:logistic', booster: Optional[str] = None, @@ -146,7 +146,7 @@ load_dict(metadata: Dict) ______________________________________________________________________ - + ## class `XGBRegressor` @@ -154,7 +154,7 @@ Implements the XGBoost regressor. See https://xgboost.readthedocs.io/en/stable/python/python_api.html#module-xgboost.sklearn for more information about the parameters used. - + ### method `__init__` @@ -162,7 +162,7 @@ See https://xgboost.readthedocs.io/en/stable/python/python_api.html#module-xgboo __init__( n_bits: int = 6, max_depth: Optional[int] = 3, - learning_rate: Optional[float] = 0.1, + learning_rate: Optional[float] = None, n_estimators: Optional[int] = 20, objective: Optional[str] = 'reg:squarederror', booster: Optional[str] = None, @@ -240,7 +240,7 @@ Is None if the model is not fitted. 
______________________________________________________________________ - + ### method `dump_dict` @@ -250,7 +250,7 @@ dump_dict() → Dict[str, Any] ______________________________________________________________________ - + ### method `fit` @@ -260,7 +260,7 @@ fit(X, y, *args, **kwargs) → Any ______________________________________________________________________ - + ### classmethod `load_dict` @@ -270,7 +270,7 @@ load_dict(metadata: Dict) ______________________________________________________________________ - + ### method `post_processing`