diff --git a/docs/README.md b/docs/README.md
index b6dd36885..b7753ace3 100644
--- a/docs/README.md
+++ b/docs/README.md
@@ -27,7 +27,7 @@ Learn the basics of Concrete ML, set it up, and make it run with ease.
Start building with Concrete ML by exploring its core features, discovering essential guides, and learning more with user-friendly tutorials.
-
## Explore more
diff --git a/docs/SUMMARY.md b/docs/SUMMARY.md
index 8e65a0a16..e3086003d 100644
--- a/docs/SUMMARY.md
+++ b/docs/SUMMARY.md
@@ -15,7 +15,7 @@
- [Tree-based models](built-in-models/tree.md)
- [Neural networks](built-in-models/neural-networks.md)
- [Nearest neighbors](built-in-models/nearest-neighbors.md)
-- [Pandas](built-in-models/pandas.md)
+- [Encrypted dataframe](built-in-models/encrypted_dataframe.md)
- [Encrypted training](built-in-models/training.md)
## Deep Learning
@@ -41,62 +41,8 @@
## References
-
-
- [API](references/api/README.md)
- - [concrete.ml.common.check_inputs.md](references/api/concrete.ml.common.check_inputs.md)
- - [concrete.ml.common.debugging.custom_assert.md](references/api/concrete.ml.common.debugging.custom_assert.md)
- - [concrete.ml.common.debugging.md](references/api/concrete.ml.common.debugging.md)
- - [concrete.ml.common.md](references/api/concrete.ml.common.md)
- - [concrete.ml.common.serialization.decoder.md](references/api/concrete.ml.common.serialization.decoder.md)
- - [concrete.ml.common.serialization.dumpers.md](references/api/concrete.ml.common.serialization.dumpers.md)
- - [concrete.ml.common.serialization.encoder.md](references/api/concrete.ml.common.serialization.encoder.md)
- - [concrete.ml.common.serialization.loaders.md](references/api/concrete.ml.common.serialization.loaders.md)
- - [concrete.ml.common.serialization.md](references/api/concrete.ml.common.serialization.md)
- - [concrete.ml.common.utils.md](references/api/concrete.ml.common.utils.md)
- - [concrete.ml.deployment.deploy_to_aws.md](references/api/concrete.ml.deployment.deploy_to_aws.md)
- - [concrete.ml.deployment.deploy_to_docker.md](references/api/concrete.ml.deployment.deploy_to_docker.md)
- - [concrete.ml.deployment.fhe_client_server.md](references/api/concrete.ml.deployment.fhe_client_server.md)
- - [concrete.ml.deployment.md](references/api/concrete.ml.deployment.md)
- - [concrete.ml.deployment.server.md](references/api/concrete.ml.deployment.server.md)
- - [concrete.ml.deployment.utils.md](references/api/concrete.ml.deployment.utils.md)
- - [concrete.ml.onnx.convert.md](references/api/concrete.ml.onnx.convert.md)
- - [concrete.ml.onnx.md](references/api/concrete.ml.onnx.md)
- - [concrete.ml.onnx.onnx_impl_utils.md](references/api/concrete.ml.onnx.onnx_impl_utils.md)
- - [concrete.ml.onnx.onnx_model_manipulations.md](references/api/concrete.ml.onnx.onnx_model_manipulations.md)
- - [concrete.ml.onnx.onnx_utils.md](references/api/concrete.ml.onnx.onnx_utils.md)
- - [concrete.ml.onnx.ops_impl.md](references/api/concrete.ml.onnx.ops_impl.md)
- - [concrete.ml.pytest.md](references/api/concrete.ml.pytest.md)
- - [concrete.ml.pytest.torch_models.md](references/api/concrete.ml.pytest.torch_models.md)
- - [concrete.ml.pytest.utils.md](references/api/concrete.ml.pytest.utils.md)
- - [concrete.ml.quantization.base_quantized_op.md](references/api/concrete.ml.quantization.base_quantized_op.md)
- - [concrete.ml.quantization.md](references/api/concrete.ml.quantization.md)
- - [concrete.ml.quantization.post_training.md](references/api/concrete.ml.quantization.post_training.md)
- - [concrete.ml.quantization.quantized_module.md](references/api/concrete.ml.quantization.quantized_module.md)
- - [concrete.ml.quantization.quantized_module_passes.md](references/api/concrete.ml.quantization.quantized_module_passes.md)
- - [concrete.ml.quantization.quantized_ops.md](references/api/concrete.ml.quantization.quantized_ops.md)
- - [concrete.ml.quantization.quantizers.md](references/api/concrete.ml.quantization.quantizers.md)
- - [concrete.ml.search_parameters.md](references/api/concrete.ml.search_parameters.md)
- - [concrete.ml.search_parameters.p_error_search.md](references/api/concrete.ml.search_parameters.p_error_search.md)
- - [concrete.ml.sklearn.base.md](references/api/concrete.ml.sklearn.base.md)
- - [concrete.ml.sklearn.glm.md](references/api/concrete.ml.sklearn.glm.md)
- - [concrete.ml.sklearn.linear_model.md](references/api/concrete.ml.sklearn.linear_model.md)
- - [concrete.ml.sklearn.md](references/api/concrete.ml.sklearn.md)
- - [concrete.ml.sklearn.neighbors.md](references/api/concrete.ml.sklearn.neighbors.md)
- - [concrete.ml.sklearn.qnn.md](references/api/concrete.ml.sklearn.qnn.md)
- - [concrete.ml.sklearn.qnn_module.md](references/api/concrete.ml.sklearn.qnn_module.md)
- - [concrete.ml.sklearn.rf.md](references/api/concrete.ml.sklearn.rf.md)
- - [concrete.ml.sklearn.svm.md](references/api/concrete.ml.sklearn.svm.md)
- - [concrete.ml.sklearn.tree.md](references/api/concrete.ml.sklearn.tree.md)
- - [concrete.ml.sklearn.tree_to_numpy.md](references/api/concrete.ml.sklearn.tree_to_numpy.md)
- - [concrete.ml.sklearn.xgb.md](references/api/concrete.ml.sklearn.xgb.md)
- - [concrete.ml.torch.compile.md](references/api/concrete.ml.torch.compile.md)
- - [concrete.ml.torch.hybrid_model.md](references/api/concrete.ml.torch.hybrid_model.md)
- - [concrete.ml.torch.md](references/api/concrete.ml.torch.md)
- - [concrete.ml.torch.numpy_module.md](references/api/concrete.ml.torch.numpy_module.md)
- - [concrete.ml.version.md](references/api/concrete.ml.version.md)
-
-
+- [Pandas support](references/pandas.md)
## Explanations
diff --git a/docs/built-in-models/encrypted_dataframe.md b/docs/built-in-models/encrypted_dataframe.md
new file mode 100644
index 000000000..e0f90f288
--- /dev/null
+++ b/docs/built-in-models/encrypted_dataframe.md
@@ -0,0 +1,116 @@
+# Working with Encrypted DataFrames
+
+Concrete ML builds upon the pandas data-frame functionality by introducing the capability to construct and perform operations on encrypted data-frames using FHE. This API lets data scientists use well-known pandas-like operations while maintaining privacy throughout the whole process.
+
+Encrypted data-frames are a storage format for encrypted tabular data, and they can be exchanged with third parties without security risks.
+
+Potential applications include:
+
+- Encrypted storage of tabular datasets
+- Joint data analysis efforts between multiple parties
+- Data preparation steps before machine learning tasks, such as inference or training
+- Secure outsourcing of data analysis to untrusted third parties
+
+## Encrypt and Decrypt a DataFrame
+
+To encrypt a pandas `DataFrame`, you must first construct a `ClientEngine` instance, which manages the encryption keys, and then call its `encrypt_from_pandas` method:
+
+```python
+from concrete.ml.pandas import ClientEngine
+from io import StringIO
+import pandas
+
+data_left = """index,total_bill,tip,sex,smoker
+1,12.54,2.5,Male,No
+2,11.17,1.5,Female,No
+3,20.29,2.75,Female,No
+"""
+
+# Load your pandas DataFrame
+df = pandas.read_csv(StringIO(data_left))
+
+# Obtain client object
+client = ClientEngine(keys_path="my_keys")
+
+# Encrypt the DataFrame
+df_encrypted = client.encrypt_from_pandas(df)
+
+# Decrypt the DataFrame to produce a pandas DataFrame
+df_decrypted = client.decrypt_to_pandas(df_encrypted)
+```
+
+## Supported Data Types and Schema Definition
+
+Concrete ML's encrypted `DataFrame` operations support a specific set of data types:
+
+- **Integer**: Integers are supported within a specific range determined by the encryption scheme's quantization parameters. The default range is 1 to 15, with 0 reserved for `NaN`. Values outside this range raise a `ValueError` during the pre-processing stage.
+- **Quantized Float**: Floating-point numbers are quantized to integers within the supported range. This is achieved by computing a scale and zero point for each column, which are used to map the floating-point numbers to the quantized integer space.
+- **String Enum**: String columns are mapped to integers starting from 1. This mapping is stored and later used for de-quantization. If the number of unique strings exceeds 15, a `ValueError` is raised.
+
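+The schema computed during this pre-processing can be inspected on the resulting encrypted data-frame: the `get_schema` method returns a regular pandas data-frame listing each column's dtype and, where applicable, its dtype mappings (such as the quantization parameters of float columns or the string-to-integer mappings). Below is a minimal sketch that continues the example above:
+
+```python
+# Inspect the dtypes and mappings computed when the data-frame was encrypted
+schema = df_encrypted.get_schema()
+
+print(schema)
+```
+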
+## Supported Operations on Encrypted Data-frames
+
+> **Outsourced execution**: The merge operation on Encrypted DataFrames can be **securely** performed on a third-party server. This means that the server can execute the merge without ever having access to the unencrypted data. The server only requires the encrypted DataFrames.
+
+Encrypted DataFrames support a subset of operations that are available for pandas DataFrames. The following operations are currently supported:
+
+- `merge`: left or right join two data-frames
+
+
+
+```python
+df_right = """index,day,time,size
+2,Thur,Lunch,2
+5,Sat,Dinner,3
+9,Sun,Dinner,2"""
+
+# Encrypt the DataFrame
+df_encrypted2 = client.encrypt_from_pandas(pandas.read_csv(StringIO(df_right)))
+
+df_encrypted_merged = df_encrypted.merge(df_encrypted2, how="left", on="index")
+```
+
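+As noted above, the merge can also be delegated to a third-party server that only ever handles encrypted data. Below is a minimal sketch of this outsourced execution: it uses the `load_encrypted_dataframe` and `merge` functions from `concrete.ml.pandas`, and assumes the server has received the two serialized encrypted data-frames under placeholder file names.
+
+```python
+from concrete.ml.pandas import load_encrypted_dataframe, merge
+
+# Server side: load the encrypted data-frames received from the clients
+# (the file names are placeholders for this sketch)
+df_left_encrypted = load_encrypted_dataframe("df_encrypted_left")
+df_right_encrypted = load_encrypted_dataframe("df_encrypted_right")
+
+# Run the join in FHE: the server never decrypts any values
+df_joined_encrypted = merge(df_left_encrypted, df_right_encrypted, how="left", on="index")
+
+# Serialize the still-encrypted result so it can be sent back for decryption
+df_joined_encrypted.save("df_encrypted_joined")
+```
+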
+## Serialization of Encrypted Data-frames
+
+Encrypted `DataFrame` objects can be serialized to a file format for storage or transfer. When serialized, they contain the encrypted data and [evaluation keys](../getting-started/concepts.md#cryptography-concepts) necessary to perform computations.
+
+> **Security**: Serialized data-frames do not contain any secret keys. The data-frames can be exchanged with any third party without any risk.
+
+### Saving and loading Data-frames
+
+To save or load an encrypted `DataFrame` from a file, use the following commands:
+
+
+
+```python
+from concrete.ml.pandas import load_encrypted_dataframe
+
+# Save
+df_encrypted_merged.save("df_encrypted_merged")
+
+# Load
+df_encrypted_merged = load_encrypted_dataframe("df_encrypted_merged")
+
+# Decrypt the DataFrame
+df_decrypted = client.decrypt_to_pandas(df_encrypted_merged)
+```
+
+## Error Handling
+
+The library is designed to raise specific errors when encountering issues during the pre-processing and post-processing stages:
+
+- `ValueError`: Raised when a column contains values outside the allowed range for integers, when there are too many unique strings, or when encountering an unsupported data type. It is also raised when an operation is attempted on a data type that the operation does not support.
+
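+For instance, encrypting a data-frame that contains out-of-range integer values is expected to fail during pre-processing. Below is a minimal sketch that reuses the `client` object from the examples above and assumes the range check is performed when calling `encrypt_from_pandas`:
+
+```python
+import pandas
+
+# Integer values above 15 fall outside the supported default range (1 to 15)
+df_invalid = pandas.DataFrame({"index": [1, 2], "count": [100, 200]})
+
+try:
+    client.encrypt_from_pandas(df_invalid)
+except ValueError as error:
+    print("Pre-processing failed:", error)
+```
+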
+## Example Workflow
+
+An example workflow where two clients encrypt two `DataFrame` objects, perform a merge operation on the server side, and then decrypt the results is available in the notebook [encrypted_pandas.ipynb](../advanced_examples/EncryptedPandas.ipynb).
+
+## Current Limitations
+
+While this API offers a new, secure way to work on remotely stored, encrypted data, it currently has some strong limitations:
+
+- **Precision of Values**: The precision for numerical values is limited to 4 bits.
+- **Supported Operations**: The `merge` operation is the only one available.
+- **Index Handling**: Index values are not preserved; users should move any relevant data from the index to a dedicated new column before encrypting.
+- **Integer Range**: The range of integers that can be encrypted is between 1 and 15.
+- **Uniqueness for `merge`**: The `merge` operation requires that the columns to merge on contain unique values. Currently this means that data-frames are limited to 15 rows.
+- **Metadata Security**: Column names and the mapping of strings to integers are not encrypted and are sent to the server in clear text.
diff --git a/docs/getting-started/README.md b/docs/getting-started/README.md
index 5202c2dc7..e039397d6 100644
--- a/docs/getting-started/README.md
+++ b/docs/getting-started/README.md
@@ -2,7 +2,11 @@
-Concrete ML is an open source, privacy-preserving, machine learning framework based on Fully Homomorphic Encryption (FHE). It enables data scientists without any prior knowledge of cryptography to automatically turn machine learning models into their FHE equivalent, using familiar APIs from scikit-learn and PyTorch (see how it looks for [linear models](../built-in-models/linear.md), [tree-based models](../built-in-models/tree.md), and [neural networks](../built-in-models/neural-networks.md)). Concrete ML supports converting models for inference with FHE but can also [train some models](../built-in-models/training.md) on encrypted data.
+Concrete ML is an open source, privacy-preserving, machine learning framework based on Fully Homomorphic Encryption (FHE). It enables data scientists without any prior knowledge of cryptography to:
+
+- automatically turn machine learning models into their FHE equivalent, using familiar APIs from scikit-learn and PyTorch (see how this works for [linear models](../built-in-models/linear.md), [tree-based models](../built-in-models/tree.md), and [neural networks](../built-in-models/neural-networks.md)).
+- [train models](../built-in-models/training.md) on encrypted data.
+- [pre-process encrypted data](../built-in-models/encrypted_dataframe.md) through a data-frame paradigm.
Fully Homomorphic Encryption is an encryption technique that allows computing directly on encrypted data, without needing to decrypt it. With FHE, you can build private-by-design applications without compromising on features. You can learn more about FHE in [this introduction](https://www.zama.ai/post/tfhe-deep-dive-part-1) or by joining the [FHE.org](https://fhe.org) community.
diff --git a/docs/getting-started/cloud.md b/docs/getting-started/cloud.md
index b825c02b1..5775b17a3 100644
--- a/docs/getting-started/cloud.md
+++ b/docs/getting-started/cloud.md
@@ -1,8 +1,8 @@
-# Inference in the cloud
+# Working in the cloud
-Concrete ML models can be easily deployed in a client/server setting, enabling the creation of privacy-preserving services in the cloud.
+Concrete ML models and data-frames can be easily deployed in a client/server setting, enabling the creation of privacy-preserving services in the cloud.
-As seen in the [concepts section](concepts.md), once compiled to FHE, a Concrete ML model generates machine code that performs the inference on private data. _Secret_ encryption keys are needed so that the user can securely encrypt their data and decrypt the inference result. An _evaluation_ key is also needed for the server to securely process the user's encrypted data.
+As seen in the [concepts section](concepts.md), once compiled to FHE, a Concrete ML model or data-frame generates machine code that executes prediction, training, or pre-processing on encrypted data. _Secret_ encryption keys are needed so that the user can securely encrypt their data and decrypt the execution result. An _evaluation_ key is also needed for the server to securely process the user's encrypted data.
Keys are generated by the user _once_ for each service they use, based on the model the service provides and its cryptographic parameters.
@@ -12,10 +12,10 @@ The overall communications protocol to enable cloud deployment of machine learni
The steps detailed above are:
-1. The model developer deploys the compiled machine learning model to the server. This model includes the cryptographic parameters. The server is now ready to provide private inference.
+1. The model developer deploys the compiled machine learning model to the server. This model includes the cryptographic parameters. The server is now ready to provide private inference. For encrypted data-frames, the cryptographic parameters and compiled programs are included directly in Concrete ML.
1. The client requests the cryptographic parameters (also called "client specs"). Once it receives them from the server, the _secret_ and _evaluation_ keys are generated.
-1. The client sends the _evaluation_ key to the server. The server is now ready to accept requests from this client. The client sends their encrypted data.
-1. The server uses the _evaluation_ key to securely run inference on the user's data and sends back the encrypted result.
+1. The client sends the _evaluation_ key to the server. The server is now ready to accept requests from this client. The client sends their encrypted data. Serialized encrypted data-frames already include the client's evaluation keys.
+1. The server uses the _evaluation_ key to securely run prediction, training, or pre-processing on the user's encrypted data and sends back the encrypted result.
1. The client now decrypts the result and can send back new requests.
-For more information on how to implement this basic secure inference protocol, refer to the [Production Deployment section](../guides/client_server.md) and to the [client/server example](../advanced_examples/ClientServer.ipynb).
+For more information on how to implement this basic secure inference protocol, refer to the [Production Deployment section](../guides/client_server.md) and to the [client/server example](../advanced_examples/ClientServer.ipynb). For information on training on encrypted data, see [the corresponding section](../built-in-models/training.md).
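+
+Below is a minimal sketch of these steps using the `FHEModelClient` and `FHEModelServer` classes from `concrete.ml.deployment.fhe_client_server`. The deployment directory, key directory, and clear input are placeholders, and the network transport between client and server is omitted.
+
+```python
+import numpy
+
+from concrete.ml.deployment.fhe_client_server import FHEModelClient, FHEModelServer
+
+# Client side: generate the secret and evaluation keys from the deployed model's client specs
+client = FHEModelClient(path_dir="deployment_files", key_dir="client_keys")
+client.generate_private_and_evaluation_keys()
+serialized_evaluation_keys = client.get_serialized_evaluation_keys()
+
+# Client side: quantize, encrypt and serialize the input before sending it to the server
+x_clear = numpy.array([[0.5, 1.2, 3.4]])  # placeholder clear input
+encrypted_input = client.quantize_encrypt_serialize(x_clear)
+
+# Server side: run the FHE computation using the client's evaluation keys
+server = FHEModelServer(path_dir="deployment_files")
+server.load()
+encrypted_result = server.run(encrypted_input, serialized_evaluation_keys)
+
+# Client side: decrypt and de-quantize the result
+result = client.deserialize_decrypt_dequantize(encrypted_result)
+```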
diff --git a/docs/getting-started/concepts.md b/docs/getting-started/concepts.md
index 9b1793db9..83311c3af 100644
--- a/docs/getting-started/concepts.md
+++ b/docs/getting-started/concepts.md
@@ -4,9 +4,13 @@ Concrete ML is built on top of Concrete, which enables NumPy programs to be conv
## Lifecycle of a Concrete ML model
+Concrete ML models can be trained on clear or encrypted data, then deployed to predict on encrypted inputs. During deployment,
+data pre-processing can also be performed on encrypted data. As a result, data can remain encrypted during the entire lifecycle
+of the machine learning model, with some limitations.
+
### I. Model development
-1. **training:** A model is trained using plaintext, non-encrypted, training data.
+1. **training:** A model is trained using either plaintext (non-encrypted) training data or encrypted training data.
1. **quantization:** The model is converted into an integer equivalent using quantization. Concrete ML performs this step either during training (Quantization Aware Training) or after training (Post-training Quantization), depending on model type. Quantization converts inputs, model weights, and all intermediate values of the inference computation to integers. More information is available [here](../explanations/quantization.md).
1. **simulation:** Testing FHE models on very large data-sets can take a long time. Furthermore, not all models are compatible with FHE constraints out of the box. Simulation allows you to execute a model that was quantized, to measure the accuracy it would have in FHE, but also to determine the modifications required to make it FHE compatible. Simulation is described in more detail [here](../explanations/compilation.md#fhe-simulation).
1. **compilation:** Once the model is quantized, simulation can confirm it has good accuracy in FHE. The model then needs to be compiled using Concrete's FHE Compiler to produce an equivalent FHE circuit. This circuit is represented as an MLIR program consisting of low level cryptographic operations. You can read more about FHE compilation [here](../explanations/compilation.md), MLIR [here](https://mlir.llvm.org/), and about the low-level Concrete library [here](https://docs.zama.ai/concrete-core).
@@ -16,7 +20,11 @@ You can find examples of the model development workflow [here](../tutorials/ml_e
### II. Model deployment
-1. **client/server deployment:** In a client/server setting, the model can be exported in a way that:
+1. **pre-processing:** Data owners can encrypt data and store it in a [data-frame](../built-in-models/encrypted_dataframe.md) for
+   further processing on a server. The server can pre-process such data to prepare it for encrypted training or inference. Clients
+   generate keys and encrypt and decrypt data-frames, while the Concrete ML-enabled server uses pre-compiled circuits
+   that perform the pre-processing.
+1. **client/server model deployment:** In a client/server setting, Concrete ML models can be exported in a way that:
- allows the client to generate keys, encrypt, and decrypt.
- provides a compiled model that can run on the server to perform inference on encrypted data.
1. **key generation:** The data owner (client) needs to generate a set of keys: a private key (to encrypt/decrypt their data and results) and a public evaluation key (for the model's FHE evaluation on the server).
diff --git a/docs/index.toc.txt b/docs/index.toc.txt
index 5b8a45c79..47e060adf 100644
--- a/docs/index.toc.txt
+++ b/docs/index.toc.txt
@@ -23,7 +23,8 @@
built-in-models/tree.md
built-in-models/neural-networks.md
built-in-models/nearest-neighbors.md
- built-in-models/pandas.md
+ built-in-models/encrypted_dataframe.md
+ references/pandas.md
built-in-models/training.md
.. toctree::
diff --git a/docs/references/api/README.md b/docs/references/api/README.md
index 09fb0eae6..f31c0754d 100644
--- a/docs/references/api/README.md
+++ b/docs/references/api/README.md
@@ -26,6 +26,9 @@
- [`concrete.ml.onnx.onnx_model_manipulations`](./concrete.ml.onnx.onnx_model_manipulations.md#module-concretemlonnxonnx_model_manipulations): Some code to manipulate models.
- [`concrete.ml.onnx.onnx_utils`](./concrete.ml.onnx.onnx_utils.md#module-concretemlonnxonnx_utils): Utils to interpret an ONNX model with numpy.
- [`concrete.ml.onnx.ops_impl`](./concrete.ml.onnx.ops_impl.md#module-concretemlonnxops_impl): ONNX ops implementation in Python + NumPy.
+- [`concrete.ml.pandas`](./concrete.ml.pandas.md#module-concretemlpandas): Public API for encrypted data-frames.
+- [`concrete.ml.pandas.client_engine`](./concrete.ml.pandas.client_engine.md#module-concretemlpandasclient_engine): Define the framework used for managing keys (encrypt, decrypt) for encrypted data-frames.
+- [`concrete.ml.pandas.dataframe`](./concrete.ml.pandas.dataframe.md#module-concretemlpandasdataframe): Define the encrypted data-frame framework.
- [`concrete.ml.pytest`](./concrete.ml.pytest.md#module-concretemlpytest): Module which is used to contain common functions for pytest.
- [`concrete.ml.pytest.torch_models`](./concrete.ml.pytest.torch_models.md#module-concretemlpytesttorch_models): Torch modules for our pytests.
- [`concrete.ml.pytest.utils`](./concrete.ml.pytest.utils.md#module-concretemlpytestutils): Common functions or lists for test files, which can't be put in fixtures.
@@ -67,6 +70,8 @@
- [`fhe_client_server.FHEModelServer`](./concrete.ml.deployment.fhe_client_server.md#class-fhemodelserver): Server API to load and run the FHE circuit.
- [`ops_impl.ONNXMixedFunction`](./concrete.ml.onnx.ops_impl.md#class-onnxmixedfunction): A mixed quantized-raw valued onnx function.
- [`ops_impl.RawOpOutput`](./concrete.ml.onnx.ops_impl.md#class-rawopoutput): Type construct that marks an ndarray as a raw output of a quantized op.
+- [`client_engine.ClientEngine`](./concrete.ml.pandas.client_engine.md#class-clientengine): Define a framework that manages keys.
+- [`dataframe.EncryptedDataFrame`](./concrete.ml.pandas.dataframe.md#class-encrypteddataframe): Define an encrypted data-frame framework that supports Pandas operators and parameters.
- [`torch_models.AddNet`](./concrete.ml.pytest.torch_models.md#class-addnet): Torch model that performs a simple addition between two inputs.
- [`torch_models.BranchingGemmModule`](./concrete.ml.pytest.torch_models.md#class-branchinggemmmodule): Torch model with some branching and skip connections.
- [`torch_models.BranchingModule`](./concrete.ml.pytest.torch_models.md#class-branchingmodule): Torch model with some branching and skip connections.
@@ -364,6 +369,8 @@
- [`ops_impl.rounded_numpy_equal_for_trees`](./concrete.ml.onnx.ops_impl.md#function-rounded_numpy_equal_for_trees): Compute rounded equal in numpy according to ONNX spec for tree-based models only.
- [`ops_impl.rounded_numpy_less_for_trees`](./concrete.ml.onnx.ops_impl.md#function-rounded_numpy_less_for_trees): Compute rounded less in numpy according to ONNX spec for tree-based models only.
- [`ops_impl.rounded_numpy_less_or_equal_for_trees`](./concrete.ml.onnx.ops_impl.md#function-rounded_numpy_less_or_equal_for_trees): Compute rounded less or equal in numpy according to ONNX spec for tree-based models only.
+- [`pandas.load_encrypted_dataframe`](./concrete.ml.pandas.md#function-load_encrypted_dataframe): Load a serialized encrypted data-frame.
+- [`pandas.merge`](./concrete.ml.pandas.md#function-merge): Merge two encrypted data-frames in FHE using Pandas parameters.
- [`utils.check_serialization`](./concrete.ml.pytest.utils.md#function-check_serialization): Check that the given object can properly be serialized.
- [`utils.data_calibration_processing`](./concrete.ml.pytest.utils.md#function-data_calibration_processing): Reduce size of the given data-set.
- [`utils.get_random_samples`](./concrete.ml.pytest.utils.md#function-get_random_samples): Select `n_sample` random elements from a 2D NumPy array.
@@ -374,6 +381,7 @@
- [`utils.get_sklearn_tree_models_and_datasets`](./concrete.ml.pytest.utils.md#function-get_sklearn_tree_models_and_datasets): Get the pytest parameters to use for testing tree-based models.
- [`utils.instantiate_model_generic`](./concrete.ml.pytest.utils.md#function-instantiate_model_generic): Instantiate any Concrete ML model type.
- [`utils.load_torch_model`](./concrete.ml.pytest.utils.md#function-load_torch_model): Load an object saved with torch.save() from a file or dict.
+- [`utils.pandas_dataframe_are_equal`](./concrete.ml.pytest.utils.md#function-pandas_dataframe_are_equal): Determine if both data-frames are identical.
- [`utils.values_are_equal`](./concrete.ml.pytest.utils.md#function-values_are_equal): Indicate if two values are equal.
- [`post_training.get_n_bits_dict`](./concrete.ml.quantization.post_training.md#function-get_n_bits_dict): Convert the n_bits parameter into a proper dictionary.
- [`quantizers.fill_from_kwargs`](./concrete.ml.quantization.quantizers.md#function-fill_from_kwargs): Fill a parameter set structure from kwargs parameters.
diff --git a/docs/references/api/concrete.ml.common.utils.md b/docs/references/api/concrete.ml.common.utils.md
index c48051f30..227e5d0df 100644
--- a/docs/references/api/concrete.ml.common.utils.md
+++ b/docs/references/api/concrete.ml.common.utils.md
@@ -17,7 +17,7 @@ Utils that can be re-used by other pieces of code in the module.
______________________________________________________________________
-
+
## function `replace_invalid_arg_name_chars`
@@ -39,7 +39,7 @@ This does not check that the starting character of arg_name is valid.
______________________________________________________________________
-
+
## function `generate_proxy_function`
@@ -65,7 +65,7 @@ This returns a runtime compiled function with the sanitized argument names passe
______________________________________________________________________
-
+
## function `get_onnx_opset_version`
@@ -85,7 +85,7 @@ Return the ONNX opset_version.
______________________________________________________________________
-
+
## function `manage_parameters_for_pbs_errors`
@@ -122,7 +122,7 @@ Note that global_p_error is currently set to 0 in the FHE simulation mode.
______________________________________________________________________
-
+
## function `check_there_is_no_p_error_options_in_configuration`
@@ -140,7 +140,7 @@ It would be dangerous, since we set them in direct arguments in our calls to Con
______________________________________________________________________
-
+
## function `get_model_class`
@@ -159,7 +159,7 @@ The model's class.
______________________________________________________________________
-
+
## function `is_model_class_in_a_list`
@@ -179,7 +179,7 @@ If the model's class is in the list or not.
______________________________________________________________________
-
+
## function `get_model_name`
@@ -198,7 +198,7 @@ the model's name.
______________________________________________________________________
-
+
## function `is_classifier_or_partial_classifier`
@@ -218,7 +218,7 @@ Indicate if the model class represents a classifier.
______________________________________________________________________
-
+
## function `is_regressor_or_partial_regressor`
@@ -238,7 +238,7 @@ Indicate if the model class represents a regressor.
______________________________________________________________________
-
+
## function `is_pandas_dataframe`
@@ -260,7 +260,7 @@ This function is inspired from Scikit-Learn's test validation tools and avoids t
______________________________________________________________________
-
+
## function `is_pandas_series`
@@ -282,7 +282,7 @@ This function is inspired from Scikit-Learn's test validation tools and avoids t
______________________________________________________________________
-
+
## function `is_pandas_type`
@@ -302,7 +302,7 @@ Indicate if the input container is a Pandas DataFrame or Series.
______________________________________________________________________
-
+
## function `check_dtype_and_cast`
@@ -334,7 +334,7 @@ If values types don't match with any supported type or the expected dtype, raise
______________________________________________________________________
-
+
## function `compute_bits_precision`
@@ -354,7 +354,7 @@ Compute the number of bits required to represent x.
______________________________________________________________________
-
+
## function `is_brevitas_model`
@@ -374,7 +374,7 @@ Check if a model is a Brevitas type.
______________________________________________________________________
-
+
## function `to_tuple`
@@ -394,7 +394,7 @@ Make the input a tuple if it is not already the case.
______________________________________________________________________
-
+
## function `all_values_are_integers`
@@ -414,7 +414,7 @@ Indicate if all unpacked values are of a supported integer dtype.
______________________________________________________________________
-
+
## function `all_values_are_floats`
@@ -434,7 +434,7 @@ Indicate if all unpacked values are of a supported float dtype.
______________________________________________________________________
-
+
## function `all_values_are_of_dtype`
@@ -455,7 +455,7 @@ Indicate if all unpacked values are of the specified dtype(s).
______________________________________________________________________
-
+
## function `array_allclose_and_same_shape`
@@ -485,7 +485,7 @@ Check if two numpy arrays are equal within a tolerances and have the same shape.
______________________________________________________________________
-
+
## class `FheMode`
diff --git a/docs/references/api/concrete.ml.pandas.client_engine.md b/docs/references/api/concrete.ml.pandas.client_engine.md
new file mode 100644
index 000000000..dd318611f
--- /dev/null
+++ b/docs/references/api/concrete.ml.pandas.client_engine.md
@@ -0,0 +1,83 @@
+
+
+
+
+# module `concrete.ml.pandas.client_engine`
+
+Define the framework used for managing keys (encrypt, decrypt) for encrypted data-frames.
+
+## **Global Variables**
+
+- **CURRENT_API_VERSION**
+
+______________________________________________________________________
+
+
+
+## class `ClientEngine`
+
+Define a framework that manages keys.
+
+
+
+### method `__init__`
+
+```python
+__init__(keygen: bool = True, keys_path: Optional[Path, str] = None)
+```
+
+______________________________________________________________________
+
+
+
+### method `decrypt_to_pandas`
+
+```python
+decrypt_to_pandas(encrypted_dataframe: EncryptedDataFrame) → DataFrame
+```
+
+Decrypt an encrypted data-frame using the loaded client and return a Pandas data-frame.
+
+**Args:**
+
+- `encrypted_dataframe` (EncryptedDataFrame): The encrypted data-frame to decrypt.
+
+**Returns:**
+
+- `pandas.DataFrame`: The Pandas data-frame built on the decrypted values.
+
+______________________________________________________________________
+
+
+
+### method `encrypt_from_pandas`
+
+```python
+encrypt_from_pandas(pandas_dataframe: DataFrame) → EncryptedDataFrame
+```
+
+Encrypt a Pandas data-frame using the loaded client.
+
+**Args:**
+
+- `pandas_dataframe` (DataFrame): The Pandas data-frame to encrypt.
+
+**Returns:**
+
+- `EncryptedDataFrame`: The encrypted data-frame.
+
+______________________________________________________________________
+
+
+
+### method `keygen`
+
+```python
+keygen(keys_path: Optional[Path, str] = None)
+```
+
+Generate the keys.
+
+**Args:**
+
+- `keys_path` (Optional\[Union\[Path, str\]\]): The path where to save the keys. Note that if some keys already exist in that path, the client will use them instead of generating new ones. Default to None.
diff --git a/docs/references/api/concrete.ml.pandas.dataframe.md b/docs/references/api/concrete.ml.pandas.dataframe.md
new file mode 100644
index 000000000..300daee44
--- /dev/null
+++ b/docs/references/api/concrete.ml.pandas.dataframe.md
@@ -0,0 +1,204 @@
+
+
+
+
+# module `concrete.ml.pandas.dataframe`
+
+Define the encrypted data-frame framework.
+
+______________________________________________________________________
+
+
+
+## class `EncryptedDataFrame`
+
+Define an encrypted data-frame framework that supports Pandas operators and parameters.
+
+
+
+### method `__init__`
+
+```python
+__init__(
+ encrypted_values: ndarray,
+ encrypted_nan: Value,
+ evaluation_keys: EvaluationKeys,
+ column_names: List[str],
+ dtype_mappings: Dict,
+ api_version: int
+)
+```
+
+______________________________________________________________________
+
+#### property api_version
+
+Get the API version used when instantiating this instance.
+
+**Returns:**
+
+- `int`: The data-frame's API version.
+
+______________________________________________________________________
+
+#### property column_names
+
+Get the data-frame's column names in order.
+
+**Returns:**
+
+- `List[str]`: The data-frame's column names in order.
+
+______________________________________________________________________
+
+#### property column_names_to_position
+
+Get the mapping between each column's name and its index position.
+
+**Returns:**
+
+- `Dict[str, int]`: Mapping between column names and their position.
+
+______________________________________________________________________
+
+#### property dtype_mappings
+
+Get the mappings for non-integer dtypes used in pre and post-processing.
+
+**Returns:**
+
+- `Dict`: The mappings for non-integer dtypes.
+
+______________________________________________________________________
+
+#### property encrypted_nan
+
+Get the encrypted value representing a NaN.
+
+**Returns:**
+
+- `fhe.Value`: The encrypted representation of a NaN.
+
+______________________________________________________________________
+
+#### property encrypted_values
+
+Get the encrypted values.
+
+**Returns:**
+
+- `numpy.ndarray`: The array containing all encrypted values.
+
+______________________________________________________________________
+
+#### property evaluation_keys
+
+Get the evaluation keys.
+
+**Returns:**
+
+- `fhe.EvaluationKeys`: The evaluation keys.
+
+______________________________________________________________________
+
+
+
+### method `get_schema`
+
+```python
+get_schema() → DataFrame
+```
+
+Get the encrypted data-frame's schema.
+
+The schema can include column names, dtypes, or dtype mappings. It is displayed as a Pandas data-frame for better readability.
+
+**Returns:**
+
+- `pandas.DataFrame`: The encrypted data-frame's schema.
+
+______________________________________________________________________
+
+
+
+### classmethod `load`
+
+```python
+load(path: Union[Path, str])
+```
+
+Load an encrypted data-frame from disk.
+
+**Args:**
+
+- `path` (Union\[Path, str\]): The path where to load the encrypted data-frame.
+
+**Returns:**
+
+- `EncryptedDataFrame`: The loaded encrypted data-frame.
+
+______________________________________________________________________
+
+
+
+### method `merge`
+
+```python
+merge(
+ other,
+ how: str = 'left',
+ on: Optional[str] = None,
+ left_on: Optional[Hashable, Sequence[Hashable]] = None,
+ right_on: Optional[Hashable, Sequence[Hashable]] = None,
+ left_index: bool = False,
+ right_index: bool = False,
+ sort: bool = False,
+ suffixes: Tuple[Optional[str], Optional[str]] = ('_x', '_y'),
+ copy: Optional[bool] = None,
+ indicator: Union[bool, str] = False,
+ validate: Optional[str] = None
+)
+```
+
+Merge two encrypted data-frames in FHE using Pandas parameters.
+
+Note that, for now, only left and right joins are implemented. Additionally, only some Pandas parameters are supported, and joining on multiple columns is not available.
+
+Pandas documentation for version 2.0 can be found here: https://pandas.pydata.org/pandas-docs/version/2.0/reference/api/pandas.DataFrame.merge.html
+
+**Args:**
+
+- `other` (EncryptedDataFrame): The other encrypted data-frame.
+- `how` (str): Type of merge to be performed, one of {'left', 'right'}.
+- `* left`: use only keys from left frame, similar to a SQL left outer join; preserve key order.
+- `* right`: use only keys from right frame, similar to a SQL right outer join; preserve key order.
+- `on` (Optional\[str\]): Column name to join on. These must be found in both DataFrames. If it is None then this defaults to the intersection of the columns in both DataFrames. Default to None.
+- `left_on` (Optional\[Union\[Hashable, Sequence\[Hashable\]\]\]): Currently not supported, please keep the default value. Default to None.
+- `right_on` (Optional\[Union\[Hashable, Sequence\[Hashable\]\]\]): Currently not supported, please keep the default value. Default to None.
+- `left_index` (bool): Currently not supported, please keep the default value. Default to False.
+- `right_index` (bool): Currently not supported, please keep the default value. Default to False.
+- `sort` (bool): Currently not supported, please keep the default value. Default to False.
+- `suffixes` (Tuple\[Optional\[str\], Optional\[str\]\]): A length-2 sequence where each element is optionally a string indicating the suffix to add to overlapping column names in `left` and `right` respectively. Pass a value of `None` instead of a string to indicate that the column name from `left` or `right` should be left as-is, with no suffix. At least one of the values must not be None. Default to ("\_x", "\_y").
+- `copy` (Optional\[bool\]): Currently not supported, please keep the default value. Default to None.
+- `indicator` (Union\[bool, str\]): Currently not supported, please keep the default value. Default to False.
+- `validate` (Optional\[str\]): Currently not supported, please keep the default value. Default to None.
+
+**Returns:**
+
+- `EncryptedDataFrame`: The joined encrypted data-frame.
+
+______________________________________________________________________
+
+
+
+### method `save`
+
+```python
+save(path: Union[Path, str])
+```
+
+Save the encrypted data-frame on disk.
+
+**Args:**
+
+- `path` (Union\[Path, str\]): The path where to save the encrypted data-frame.
diff --git a/docs/references/api/concrete.ml.pandas.md b/docs/references/api/concrete.ml.pandas.md
new file mode 100644
index 000000000..ed30f4551
--- /dev/null
+++ b/docs/references/api/concrete.ml.pandas.md
@@ -0,0 +1,84 @@
+
+
+
+
+# module `concrete.ml.pandas`
+
+Public API for encrypted data-frames.
+
+## **Global Variables**
+
+- **dataframe**
+- **client_engine**
+
+______________________________________________________________________
+
+
+
+## function `load_encrypted_dataframe`
+
+```python
+load_encrypted_dataframe(path: Union[Path, str]) → EncryptedDataFrame
+```
+
+Load a serialized encrypted data-frame.
+
+**Args:**
+
+- `path` (Union\[Path, str\]): The path to consider for loading the serialized encrypted data-frame.
+
+**Returns:**
+
+- `EncryptedDataFrame`: The loaded encrypted data-frame.
+
+______________________________________________________________________
+
+
+
+## function `merge`
+
+```python
+merge(
+ left_encrypted: EncryptedDataFrame,
+ right_encrypted: EncryptedDataFrame,
+ how: str = 'left',
+ on: Optional[str] = None,
+ left_on: Optional[Hashable, Sequence[Hashable]] = None,
+ right_on: Optional[Hashable, Sequence[Hashable]] = None,
+ left_index: bool = False,
+ right_index: bool = False,
+ sort: bool = False,
+ suffixes: Tuple[Optional[str], Optional[str]] = ('_x', '_y'),
+ copy: Optional[bool] = None,
+ indicator: Union[bool, str] = False,
+ validate: Optional[str] = None
+) → EncryptedDataFrame
+```
+
+Merge two encrypted data-frames in FHE using Pandas parameters.
+
+Note that, for now, only left and right joins are implemented. Additionally, only some Pandas parameters are supported, and joining on multiple columns is not available.
+
+Pandas documentation for version 2.0 can be found here: https://pandas.pydata.org/pandas-docs/version/2.0/reference/api/pandas.DataFrame.merge.html
+
+**Args:**
+
+- `left_encrypted` (EncryptedDataFrame): The left encrypted data-frame.
+- `right_encrypted` (EncryptedDataFrame): The right encrypted data-frame.
+- `how` (str): Type of merge to be performed, one of {'left', 'right'}.
+- `* left`: use only keys from left frame, similar to a SQL left outer join; preserve key order.
+- `* right`: use only keys from right frame, similar to a SQL right outer join; preserve key order.
+- `on` (Optional\[str\]): Column name to join on. These must be found in both DataFrames. If it is None then this defaults to the intersection of the columns in both DataFrames. Default to None.
+- `left_on` (Optional\[Union\[Hashable, Sequence\[Hashable\]\]\]): Currently not supported, please keep the default value. Default to None.
+- `right_on` (Optional\[Union\[Hashable, Sequence\[Hashable\]\]\]): Currently not supported, please keep the default value. Default to None.
+- `left_index` (bool): Currently not supported, please keep the default value. Default to False.
+- `right_index` (bool): Currently not supported, please keep the default value. Default to False.
+- `sort` (bool): Currently not supported, please keep the default value. Default to False.
+- `suffixes` (Tuple\[Optional\[str\], Optional\[str\]\]): A length-2 sequence where each element is optionally a string indicating the suffix to add to overlapping column names in `left` and `right` respectively. Pass a value of `None` instead of a string to indicate that the column name from `left` or `right` should be left as-is, with no suffix. At least one of the values must not be None. Default to ("\_x", "\_y").
+- `copy` (Optional\[bool\]): Currently not supported, please keep the default value. Default to None.
+- `indicator` (Union\[bool, str\]): Currently not supported, please keep the default value. Default to False.
+- `validate` (Optional\[str\]): Currently not supported, please keep the default value. Default to None.
+
+**Returns:**
+
+- `EncryptedDataFrame`: The joined encrypted data-frame.
diff --git a/docs/references/api/concrete.ml.pytest.utils.md b/docs/references/api/concrete.ml.pytest.utils.md
index ed78f3104..06c48da35 100644
--- a/docs/references/api/concrete.ml.pytest.utils.md
+++ b/docs/references/api/concrete.ml.pytest.utils.md
@@ -13,7 +13,7 @@ Common functions or lists for test files, which can't be put in fixtures.
______________________________________________________________________
-
+
## function `get_sklearn_linear_models_and_datasets`
@@ -43,7 +43,7 @@ Get the pytest parameters to use for testing linear models.
______________________________________________________________________
-
+
## function `get_sklearn_tree_models_and_datasets`
@@ -73,7 +73,7 @@ Get the pytest parameters to use for testing tree-based models.
______________________________________________________________________
-
+
## function `get_sklearn_neural_net_models_and_datasets`
@@ -103,7 +103,7 @@ Get the pytest parameters to use for testing neural network models.
______________________________________________________________________
-
+
## function `get_sklearn_neighbors_models_and_datasets`
@@ -133,7 +133,7 @@ Get the pytest parameters to use for testing neighbor models.
______________________________________________________________________
-
+
## function `get_sklearn_all_models_and_datasets`
@@ -163,7 +163,7 @@ Get the pytest parameters to use for testing all models available in Concrete ML
______________________________________________________________________
-
+
## function `instantiate_model_generic`
@@ -186,7 +186,7 @@ Instantiate any Concrete ML model type.
______________________________________________________________________
-
+
## function `data_calibration_processing`
@@ -212,7 +212,7 @@ Reduce size of the given data-set.
______________________________________________________________________
-
+
## function `load_torch_model`
@@ -240,7 +240,7 @@ Load an object saved with torch.save() from a file or dict.
______________________________________________________________________
-
+
## function `values_are_equal`
@@ -263,7 +263,7 @@ This method takes into account objects of type None, numpy.ndarray, numpy.floati
______________________________________________________________________
-
+
## function `check_serialization`
@@ -289,7 +289,7 @@ This function serializes all objects using the `dump`, `dumps`, `load` and `load
______________________________________________________________________
-
+
## function `get_random_samples`
@@ -311,3 +311,33 @@ Select `n_sample` random elements from a 2D NumPy array.
**Raises:**
- `AssertionError`: If `n_sample` is not within the range (0, x.shape\[0\]) or if `x` is not a 2D array.
+
+______________________________________________________________________
+
+
+
+## function `pandas_dataframe_are_equal`
+
+```python
+pandas_dataframe_are_equal(
+ df_1: DataFrame,
+ df_2: DataFrame,
+ float_rtol: float = 1e-05,
+ float_atol: float = 1e-08,
+ equal_nan: bool = False
+)
+```
+
+Determine if both data-frames are identical.
+
+**Args:**
+
+- `df_1` (pandas.DataFrame): The first data-frame to consider.
+- `df_2` (pandas.DataFrame): The second data-frame to consider.
+- `float_rtol` (float): Numpy's relative tolerance parameter to use when comparing columns with floating point values. Default to 1.e-5.
+- `float_atol` (float): Numpy's absolute tolerance parameter to use when comparing columns with floating point values. Default to 1.e-8.
+- `equal_nan` (bool): Whether to compare NaN values as equal. Default to False.
+
+**Returns:**
+
+- `bool`: Whether both data-frames are equal.
diff --git a/docs/references/api/concrete.ml.quantization.base_quantized_op.md b/docs/references/api/concrete.ml.quantization.base_quantized_op.md
index faecefe79..40365fb39 100644
--- a/docs/references/api/concrete.ml.quantization.base_quantized_op.md
+++ b/docs/references/api/concrete.ml.quantization.base_quantized_op.md
@@ -15,7 +15,7 @@ Base Quantized Op class that implements quantization for a float numpy op.
______________________________________________________________________
-
+
## class `QuantizedOp`
@@ -29,7 +29,7 @@ Base class for quantized ONNX ops implemented in numpy.
- `constant_inputs` (Optional\[Union\[Dict\[str, Any\], Dict\[int, Any\]\]\]): The constant tensors that are inputs to this op
- `input_quant_opts` (QuantizationOptions): Input quantizer options, determine the quantization that is applied to input tensors (that are not constants)
-
+
### method `__init__`
@@ -56,7 +56,7 @@ Get the names of encrypted integer tensors that are used by this op.
______________________________________________________________________
-
+
### method `calibrate`
@@ -76,7 +76,7 @@ Create corresponding QuantizedArray for the output of the activation function.
______________________________________________________________________
-
+
### method `call_impl`
@@ -97,7 +97,7 @@ Call self.impl to centralize mypy bug workaround.
______________________________________________________________________
-
+
### method `can_fuse`
@@ -115,7 +115,7 @@ This function shall be overloaded by inheriting classes to test self.\_int_input
______________________________________________________________________
-
+
### method `dump`
@@ -131,7 +131,7 @@ Dump itself to a file.
______________________________________________________________________
-
+
### method `dump_dict`
@@ -147,7 +147,7 @@ Dump itself to a dict.
______________________________________________________________________
-
+
### method `dumps`
@@ -163,7 +163,7 @@ Dump itself to a string.
______________________________________________________________________
-
+
### method `load_dict`
@@ -183,7 +183,7 @@ Load itself from a string.
______________________________________________________________________
-
+
### classmethod `must_quantize_input`
@@ -205,7 +205,7 @@ Quantized ops and numpy onnx ops take inputs and attributes. Inputs can be eithe
______________________________________________________________________
-
+
### classmethod `op_type`
@@ -221,7 +221,7 @@ Get the type of this operation.
______________________________________________________________________
-
+
### method `prepare_output`
@@ -243,7 +243,7 @@ The calibrate method needs to be called with sample data before using this funct
______________________________________________________________________
-
+
### method `q_impl`
@@ -267,7 +267,7 @@ Execute the quantized forward.
______________________________________________________________________
-
+
## class `QuantizedOpUnivariateOfEncrypted`
@@ -275,7 +275,7 @@ An univariate operator of an encrypted value.
This operation is not really operating as a quantized operation. It is useful when the computations get fused into a TLU, as in e.g., Act(x) = x || (x + 42)).
-
+
### method `__init__`
@@ -302,7 +302,7 @@ Get the names of encrypted integer tensors that are used by this op.
______________________________________________________________________
-
+
### method `calibrate`
@@ -322,7 +322,7 @@ Create corresponding QuantizedArray for the output of the activation function.
______________________________________________________________________
-
+
### method `call_impl`
@@ -343,7 +343,7 @@ Call self.impl to centralize mypy bug workaround.
______________________________________________________________________
-
+
### method `can_fuse`
@@ -361,7 +361,7 @@ This operation can be fused and computed in float when a single integer tensor g
______________________________________________________________________
-
+
### method `dump`
@@ -377,7 +377,7 @@ Dump itself to a file.
______________________________________________________________________
-
+
### method `dump_dict`
@@ -393,7 +393,7 @@ Dump itself to a dict.
______________________________________________________________________
-
+
### method `dumps`
@@ -409,7 +409,7 @@ Dump itself to a string.
______________________________________________________________________
-
+
### method `load_dict`
@@ -429,7 +429,7 @@ Load itself from a string.
______________________________________________________________________
-
+
### classmethod `must_quantize_input`
@@ -451,7 +451,7 @@ Quantized ops and numpy onnx ops take inputs and attributes. Inputs can be eithe
______________________________________________________________________
-
+
### classmethod `op_type`
@@ -467,7 +467,7 @@ Get the type of this operation.
______________________________________________________________________
-
+
### method `prepare_output`
@@ -489,7 +489,7 @@ The calibrate method needs to be called with sample data before using this funct
______________________________________________________________________
-
+
### method `q_impl`
@@ -513,7 +513,7 @@ Execute the quantized forward.
______________________________________________________________________
-
+
## class `QuantizedMixingOp`
@@ -521,19 +521,23 @@ An operator that mixes (adds or multiplies) together encrypted inputs.
Mixing operators cannot be fused to TLUs.
-
+
### method `__init__`
```python
-__init__(*args, rounding_threshold_bits: Optional[int] = None, **kwargs) → None
+__init__(
+ *args,
+ rounding_threshold_bits: Union[NoneType, int, Dict[str, Union[str, int]]] = None,
+ **kwargs
+) → None
```
Initialize quantized ops parameters plus specific parameters.
**Args:**
-- `rounding_threshold_bits` (Optional\[int\]): Number of bits to round to.
+- `rounding_threshold_bits` (Union\[None, int, Dict\[str, Union\[str, int\]\]\]): if not None, every accumulator in the model is rounded down to the given number of bits of precision. Can be an int or a dictionary with keys 'method' and 'n_bits', where 'method' is either fhe.Exactness.EXACT or fhe.Exactness.APPROXIMATE, and 'n_bits' is either 'auto' or an int.
- `*args`: positional argument to pass to the parent class.
- `**kwargs`: named argument to pass to the parent class.
@@ -549,7 +553,7 @@ Get the names of encrypted integer tensors that are used by this op.
______________________________________________________________________
-
+
### method `calibrate`
@@ -569,7 +573,7 @@ Create corresponding QuantizedArray for the output of the activation function.
______________________________________________________________________
-
+
### method `call_impl`
@@ -590,7 +594,7 @@ Call self.impl to centralize mypy bug workaround.
______________________________________________________________________
-
+
### method `can_fuse`
@@ -608,7 +612,7 @@ Mixing operations cannot be fused since it must be performed over integer tensor
______________________________________________________________________
-
+
### method `cnp_round`
@@ -634,7 +638,7 @@ Round the input array to the specified number of bits.
______________________________________________________________________
-
+
### method `dump`
@@ -650,7 +654,7 @@ Dump itself to a file.
______________________________________________________________________
-
+
### method `dump_dict`
@@ -666,7 +670,7 @@ Dump itself to a dict.
______________________________________________________________________
-
+
### method `dumps`
@@ -682,7 +686,7 @@ Dump itself to a string.
______________________________________________________________________
-
+
### method `load_dict`
@@ -702,7 +706,7 @@ Load itself from a string.
______________________________________________________________________
-
+
### method `make_output_quant_parameters`
@@ -728,7 +732,7 @@ Build a quantized array from quantized integer results of the op and quantizatio
______________________________________________________________________
-
+
### classmethod `must_quantize_input`
@@ -750,7 +754,7 @@ Quantized ops and numpy onnx ops take inputs and attributes. Inputs can be eithe
______________________________________________________________________
-
+
### classmethod `op_type`
@@ -766,7 +770,7 @@ Get the type of this operation.
______________________________________________________________________
-
+
### method `prepare_output`
@@ -788,7 +792,7 @@ The calibrate method needs to be called with sample data before using this funct
______________________________________________________________________
-
+
### method `q_impl`
diff --git a/docs/references/api/concrete.ml.quantization.post_training.md b/docs/references/api/concrete.ml.quantization.post_training.md
index 84564e6c1..1834ab887 100644
--- a/docs/references/api/concrete.ml.quantization.post_training.md
+++ b/docs/references/api/concrete.ml.quantization.post_training.md
@@ -52,9 +52,9 @@ This class should be sub-classed to provide specific calibration and quantizatio
\- "op_inputs" and "op_weights" (mandatory)
\- "model_inputs" and "model_outputs" (optional, default to 5 bits). When using a single integer for n_bits, its value is assigned to "op_inputs" and "op_weights" bits. The maximum between this value and a default value (5) is then assigned to the number of "model_inputs" "model_outputs". This default value is a compromise between model accuracy and runtime performance in FHE. "model_outputs" gives the precision of the final network's outputs, while "model_inputs" gives the precision of the network's inputs. "op_inputs" and "op_weights" both control the quantization for inputs and weights of all layers.
- `numpy_model` (NumpyModule): Model in numpy.
-- `rounding_threshold_bits` (int): if not None, every accumulators in the model are rounded down to the given bits of precision
+- `rounding_threshold_bits` (Union\[None, int, Dict\[str, Union\[str, int\]\]\]): Defines precision rounding for model accumulators. Accepts None, an int, or a dict. The dict can specify 'method' (fhe.Exactness.EXACT or fhe.Exactness.APPROXIMATE) and 'n_bits' ('auto' or an int).
-
+
### method `__init__`
@@ -62,7 +62,7 @@ This class should be sub-classed to provide specific calibration and quantizatio
__init__(
n_bits: Union[int, Dict],
numpy_model: NumpyModule,
- rounding_threshold_bits: Optional[int] = None
+ rounding_threshold_bits: Union[NoneType, int, Dict[str, Union[str, int]]] = None
)
```
@@ -108,7 +108,7 @@ Get the number of bits to use for the quantization of any constants (usually wei
______________________________________________________________________
-
+
### method `quantize_module`
@@ -130,7 +130,7 @@ Following https://arxiv.org/abs/1712.05877 guidelines.
______________________________________________________________________
-
+
## class `PostTrainingAffineQuantization`
@@ -146,14 +146,14 @@ Create the quantized version of the passed numpy module.
\- op_weights: learned parameters or constants in the network
\- model_outputs: final model output quantization bits
- `numpy_model` (NumpyModule): Model in numpy.
-- `rounding_threshold_bits` (int): if not None, every accumulators in the model are rounded down to the given bits of precision
+- `rounding_threshold_bits` (Union\[None, int, Dict\[str, Union\[str, int\]\]\]): if not None, every accumulator in the model is rounded down to the given bits of precision. Can be an int or a dictionary with keys 'method' and 'n_bits', where 'method' is either fhe.Exactness.EXACT or fhe.Exactness.APPROXIMATE, and 'n_bits' is either 'auto' or an int.
- `is_signed`: Whether the weights of the layers can be signed. Currently, only the weights can be signed.
**Returns:**
- `QuantizedModule`: A quantized version of the numpy model.
-
+
### method `__init__`
@@ -161,7 +161,7 @@ Create the quantized version of the passed numpy module.
__init__(
n_bits: Union[int, Dict],
numpy_model: NumpyModule,
- rounding_threshold_bits: Optional[int] = None
+ rounding_threshold_bits: Union[NoneType, int, Dict[str, Union[str, int]]] = None
)
```
@@ -207,7 +207,7 @@ Get the number of bits to use for the quantization of any constants (usually wei
______________________________________________________________________
-
+
### method `quantize_module`
@@ -229,7 +229,7 @@ Following https://arxiv.org/abs/1712.05877 guidelines.
______________________________________________________________________
-
+
## class `PostTrainingQATImporter`
@@ -237,7 +237,7 @@ Converter of Quantization Aware Training networks.
This class provides specific configuration for QAT networks during ONNX network conversion to Concrete ML computation graphs.
-
+
### method `__init__`
@@ -245,7 +245,7 @@ This class provides specific configuration for QAT networks during ONNX network
__init__(
n_bits: Union[int, Dict],
numpy_model: NumpyModule,
- rounding_threshold_bits: Optional[int] = None
+ rounding_threshold_bits: Union[NoneType, int, Dict[str, Union[str, int]]] = None
)
```
@@ -291,7 +291,7 @@ Get the number of bits to use for the quantization of any constants (usually wei
______________________________________________________________________
-
+
### method `quantize_module`
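
The dictionary form of `rounding_threshold_bits` documented above is easiest to read with concrete values. The following is a minimal illustrative sketch, not part of the generated reference; it assumes `fhe.Exactness` is imported from Concrete Python as the docstrings indicate, and the variable names are ours.

```python
# Illustrative sketch of the accepted forms of `rounding_threshold_bits`
# (variable names are ours, not from the API).
from concrete import fhe

no_rounding = None  # default: accumulators are not rounded

fixed_rounding = 6  # every accumulator is rounded down to 6 bits

configured_rounding = {
    "method": fhe.Exactness.APPROXIMATE,  # or fhe.Exactness.EXACT
    "n_bits": "auto",                     # or an explicit integer bit-width
}
```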
diff --git a/docs/references/api/concrete.ml.sklearn.base.md b/docs/references/api/concrete.ml.sklearn.base.md
index e03d0c2c7..ab242bd34 100644
--- a/docs/references/api/concrete.ml.sklearn.base.md
+++ b/docs/references/api/concrete.ml.sklearn.base.md
@@ -1408,7 +1408,7 @@ quantize_input(X: 'ndarray') → ndarray
______________________________________________________________________
-
+
## class `BaseTreeRegressorMixin`
@@ -1690,7 +1690,7 @@ quantize_input(X: 'ndarray') → ndarray
______________________________________________________________________
-
+
## class `BaseTreeClassifierMixin`
@@ -2022,7 +2022,7 @@ quantize_input(X: 'ndarray') → ndarray
______________________________________________________________________
-
+
## class `SklearnLinearModelMixin`
@@ -2030,7 +2030,7 @@ A Mixin class for sklearn linear models with FHE.
This class inherits from sklearn.base.BaseEstimator in order to have access to scikit-learn's `get_params` and `set_params` methods.
-
+
### method `__init__`
@@ -2158,7 +2158,7 @@ Compile the model.
______________________________________________________________________
-
+
### method `dequantize_output`
@@ -2216,7 +2216,7 @@ Dump itself to a string.
______________________________________________________________________
-
+
### method `fit`
@@ -2253,7 +2253,7 @@ The Concrete ML and float equivalent fitted estimators.
______________________________________________________________________
-
+
### classmethod `from_sklearn_model`
@@ -2370,7 +2370,7 @@ Predict values for X, in FHE or in the clear.
______________________________________________________________________
-
+
### method `quantize_input`
@@ -2380,7 +2380,7 @@ quantize_input(X: 'ndarray') → ndarray
______________________________________________________________________
-
+
## class `SklearnLinearRegressorMixin`
@@ -2388,7 +2388,7 @@ A Mixin class for sklearn linear regressors with FHE.
This class is used to create a linear regressor class that inherits from sklearn.base.RegressorMixin, which essentially gives access to scikit-learn's `score` method for regressors.
-
+
### method `__init__`
@@ -2516,7 +2516,7 @@ Compile the model.
______________________________________________________________________
-
+
### method `dequantize_output`
@@ -2574,7 +2574,7 @@ Dump itself to a string.
______________________________________________________________________
-
+
### method `fit`
@@ -2611,7 +2611,7 @@ The Concrete ML and float equivalent fitted estimators.
______________________________________________________________________
-
+
### classmethod `from_sklearn_model`
@@ -2728,7 +2728,7 @@ Predict values for X, in FHE or in the clear.
______________________________________________________________________
-
+
### method `quantize_input`
@@ -2738,7 +2738,7 @@ quantize_input(X: 'ndarray') → ndarray
______________________________________________________________________
-
+
## class `SklearnSGDRegressorMixin`
@@ -2746,7 +2746,7 @@ A Mixin class for sklearn SGD regressors with FHE.
This class is used to create an SGD regressor class that can be exported to ONNX using Hummingbird.
-
+
### method `__init__`
@@ -2874,7 +2874,7 @@ Compile the model.
______________________________________________________________________
-
+
### method `dequantize_output`
@@ -2932,7 +2932,7 @@ Dump itself to a string.
______________________________________________________________________
-
+
### method `fit`
@@ -2969,7 +2969,7 @@ The Concrete ML and float equivalent fitted estimators.
______________________________________________________________________
-
+
### classmethod `from_sklearn_model`
@@ -3086,7 +3086,7 @@ Predict values for X, in FHE or in the clear.
______________________________________________________________________
-
+
### method `quantize_input`
@@ -3096,7 +3096,7 @@ quantize_input(X: 'ndarray') → ndarray
______________________________________________________________________
-
+
## class `SklearnLinearClassifierMixin`
@@ -3106,7 +3106,7 @@ This class is used to create a linear classifier class that inherits from sklear
Additionally, this class adjusts some of the tree-based base class's methods in order to make them compliant with classification workflows.
-
+
### method `__init__`
@@ -3258,7 +3258,7 @@ Compile the model.
______________________________________________________________________
-
+
### method `decision_function`
@@ -3282,7 +3282,7 @@ Predict confidence scores.
______________________________________________________________________
-
+
### method `dequantize_output`
@@ -3377,7 +3377,7 @@ The Concrete ML and float equivalent fitted estimators.
______________________________________________________________________
-
+
### classmethod `from_sklearn_model`
@@ -3469,7 +3469,7 @@ predict(
______________________________________________________________________
-
+
### method `predict_proba`
@@ -3482,7 +3482,7 @@ predict_proba(
______________________________________________________________________
-
+
### method `quantize_input`
@@ -3492,7 +3492,7 @@ quantize_input(X: 'ndarray') → ndarray
______________________________________________________________________
-
+
## class `SklearnSGDClassifierMixin`
@@ -3500,7 +3500,7 @@ A Mixin class for sklearn SGD classifiers with FHE.
This class is used to create an SGD classifier class that can be exported to ONNX using Hummingbird.
-
+
### method `__init__`
@@ -3652,7 +3652,7 @@ Compile the model.
______________________________________________________________________
-
+
### method `decision_function`
@@ -3676,7 +3676,7 @@ Predict confidence scores.
______________________________________________________________________
-
+
### method `dequantize_output`
@@ -3771,7 +3771,7 @@ The Concrete ML and float equivalent fitted estimators.
______________________________________________________________________
-
+
### classmethod `from_sklearn_model`
@@ -3863,7 +3863,7 @@ predict(
______________________________________________________________________
-
+
### method `predict_proba`
@@ -3876,7 +3876,7 @@ predict_proba(
______________________________________________________________________
-
+
### method `quantize_input`
@@ -3886,7 +3886,7 @@ quantize_input(X: 'ndarray') → ndarray
______________________________________________________________________
-
+
## class `SklearnKNeighborsMixin`
@@ -3894,7 +3894,7 @@ A Mixin class for sklearn KNeighbors models with FHE.
This class inherits from sklearn.base.BaseEstimator in order to have access to scikit-learn's `get_params` and `set_params` methods.
-
+
### method `__init__`
@@ -4020,7 +4020,7 @@ Compile the model.
______________________________________________________________________
-
+
### method `dequantize_output`
@@ -4078,7 +4078,7 @@ Dump itself to a string.
______________________________________________________________________
-
+
### method `fit`
@@ -4137,7 +4137,7 @@ This method is used to instantiate a scikit-learn model using the Concrete ML mo
______________________________________________________________________
-
+
### method `get_topk_labels`
@@ -4181,7 +4181,7 @@ Load itself from a dict.
______________________________________________________________________
-
+
### method `majority_vote`
@@ -4201,7 +4201,7 @@ Determine the most common class among nearest neighbors for each query.
______________________________________________________________________
-
+
### method `post_processing`
@@ -4223,7 +4223,7 @@ For KNN, the de-quantization step is not required. Because \_inference returns t
______________________________________________________________________
-
+
### method `predict`
@@ -4236,7 +4236,7 @@ predict(
______________________________________________________________________
-
+
### method `quantize_input`
@@ -4246,7 +4246,7 @@ quantize_input(X: 'ndarray') → ndarray
______________________________________________________________________
-
+
## class `SklearnKNeighborsClassifierMixin`
@@ -4254,7 +4254,7 @@ A Mixin class for sklearn KNeighbors classifiers with FHE.
This class is used to create a KNeighbors classifier class that inherits from SklearnKNeighborsMixin and sklearn.base.ClassifierMixin. By inheriting from sklearn.base.ClassifierMixin, it allows this class to be recognized as a classifier.
-
+
### method `__init__`
@@ -4380,7 +4380,7 @@ Compile the model.
______________________________________________________________________
-
+
### method `dequantize_output`
@@ -4438,7 +4438,7 @@ Dump itself to a string.
______________________________________________________________________
-
+
### method `fit`
@@ -4497,7 +4497,7 @@ This method is used to instantiate a scikit-learn model using the Concrete ML mo
______________________________________________________________________
-
+
### method `get_topk_labels`
@@ -4541,7 +4541,7 @@ Load itself from a dict.
______________________________________________________________________
-
+
### method `majority_vote`
@@ -4561,7 +4561,7 @@ Determine the most common class among nearest neighbors for each query.
______________________________________________________________________
-
+
### method `post_processing`
@@ -4583,7 +4583,7 @@ For KNN, the de-quantization step is not required. Because \_inference returns t
______________________________________________________________________
-
+
### method `predict`
@@ -4596,7 +4596,7 @@ predict(
______________________________________________________________________
-
+
### method `quantize_input`
diff --git a/docs/references/api/concrete.ml.torch.compile.md b/docs/references/api/concrete.ml.torch.compile.md
index 64883b1a5..d31e1a404 100644
--- a/docs/references/api/concrete.ml.torch.compile.md
+++ b/docs/references/api/concrete.ml.torch.compile.md
@@ -67,7 +67,7 @@ build_quantized_module(
torch_inputset: Union[Tensor, ndarray, Tuple[Union[Tensor, ndarray], ]],
import_qat: bool = False,
n_bits: Union[int, Dict[str, int]] = 8,
- rounding_threshold_bits: Optional[int] = None,
+ rounding_threshold_bits: Union[NoneType, int, Dict[str, Union[str, int]]] = None,
reduce_sum_copy=False
) → QuantizedModule
```
@@ -82,7 +82,7 @@ Take a model in torch or ONNX, turn it to numpy, quantize its inputs / weights /
- `torch_inputset` (Dataset): the calibration input-set, can contain either torch tensors or numpy.ndarray
- `import_qat` (bool): Flag to signal that the network being imported contains quantizers in its computation graph and that Concrete ML should not re-quantize it
- `n_bits`: the number of bits for the quantization
-- `rounding_threshold_bits` (int): if not None, every accumulators in the model are rounded down to the given bits of precision
+- `rounding_threshold_bits` (Union\[None, int, Dict\[str, Union\[str, int\]\]\]): Defines precision rounding for model accumulators. Accepts None, an int, or a dict. The dict can specify 'method' (fhe.Exactness.EXACT or fhe.Exactness.APPROXIMATE) and 'n_bits' ('auto' or int)
- `reduce_sum_copy` (bool): if the inputs of QuantizedReduceSum should be copied to avoid bit-width propagation
**Returns:**
@@ -91,7 +91,7 @@ Take a model in torch or ONNX, turn it to numpy, quantize its inputs / weights /
______________________________________________________________________
-
+
## function `compile_torch_model`
@@ -104,7 +104,7 @@ compile_torch_model(
artifacts: Optional[DebugArtifacts] = None,
show_mlir: bool = False,
n_bits: Union[int, Dict[str, int]] = 8,
- rounding_threshold_bits: Optional[int] = None,
+ rounding_threshold_bits: Union[NoneType, int, Dict[str, Union[str, int]]] = None,
p_error: Optional[float] = None,
global_p_error: Optional[float] = None,
verbose: bool = False,
@@ -128,7 +128,7 @@ Take a model in torch, turn it to numpy, quantize its inputs / weights / outputs
- `n_bits` (Union\[int, Dict\[str, int\]\]): number of bits for quantization, can be a single value or a dictionary with the following keys :
\- "op_inputs" and "op_weights" (mandatory)
\- "model_inputs" and "model_outputs" (optional, default to 5 bits). When using a single integer for n_bits, its value is assigned to "op_inputs" and "op_weights" bits. Default is 8 bits.
-- `rounding_threshold_bits` (int): if not None, every accumulators in the model are rounded down to the given bits of precision
+- `rounding_threshold_bits` (Union\[None, int, Dict\[str, Union\[str, int\]\]\]): Defines precision rounding for model accumulators. Accepts None, an int, or a dict. The dict can specify 'method' (fhe.Exactness.EXACT or fhe.Exactness.APPROXIMATE) and 'n_bits' ('auto' or int)
- `p_error` (Optional\[float\]): probability of error of a single PBS
- `global_p_error` (Optional\[float\]): probability of error of the full circuit. In FHE simulation `global_p_error` is set to 0
- `verbose` (bool): whether to show compilation information
@@ -141,7 +141,7 @@ Take a model in torch, turn it to numpy, quantize its inputs / weights / outputs
______________________________________________________________________
-
+
## function `compile_onnx_model`
@@ -154,7 +154,7 @@ compile_onnx_model(
artifacts: Optional[DebugArtifacts] = None,
show_mlir: bool = False,
n_bits: Union[int, Dict[str, int]] = 8,
- rounding_threshold_bits: Optional[int] = None,
+ rounding_threshold_bits: Union[NoneType, int, Dict[str, Union[str, int]]] = None,
p_error: Optional[float] = None,
global_p_error: Optional[float] = None,
verbose: bool = False,
@@ -178,7 +178,7 @@ Take a model in torch, turn it to numpy, quantize its inputs / weights / outputs
- `n_bits` (Union\[int, Dict\[str, int\]\]): number of bits for quantization, can be a single value or a dictionary with the following keys :
\- "op_inputs" and "op_weights" (mandatory)
\- "model_inputs" and "model_outputs" (optional, default to 5 bits). When using a single integer for n_bits, its value is assigned to "op_inputs" and "op_weights" bits. Default is 8 bits.
-- `rounding_threshold_bits` (int): if not None, every accumulators in the model are rounded down to the given bits of precision
+- `rounding_threshold_bits` (Union\[None, int, Dict\[str, Union\[str, int\]\]\]): Defines precision rounding for model accumulators. Accepts None, an int, or a dict. The dict can specify 'method' (fhe.Exactness.EXACT or fhe.Exactness.APPROXIMATE) and 'n_bits' ('auto' or int)
- `p_error` (Optional\[float\]): probability of error of a single PBS
- `global_p_error` (Optional\[float\]): probability of error of the full circuit. In FHE simulation `global_p_error` is set to 0
- `verbose` (bool): whether to show compilation information
@@ -191,7 +191,7 @@ Take a model in torch, turn it to numpy, quantize its inputs / weights / outputs
______________________________________________________________________
-
+
## function `compile_brevitas_qat_model`
@@ -203,7 +203,7 @@ compile_brevitas_qat_model(
configuration: Optional[Configuration] = None,
artifacts: Optional[DebugArtifacts] = None,
show_mlir: bool = False,
- rounding_threshold_bits: Optional[int] = None,
+ rounding_threshold_bits: Union[NoneType, int, Dict[str, Union[str, int]]] = None,
p_error: Optional[float] = None,
global_p_error: Optional[float] = None,
output_onnx_file: Union[NoneType, Path, str] = None,
@@ -225,7 +225,7 @@ The torch_model parameter is a subclass of torch.nn.Module that uses quantized o
- `configuration` (Configuration): Configuration object to use during compilation
- `artifacts` (DebugArtifacts): Artifacts object to fill during compilation
- `show_mlir` (bool): if set, the MLIR produced by the converter and which is going to be sent to the compiler backend is shown on the screen, e.g., for debugging or demo
-- `rounding_threshold_bits` (int): if not None, every accumulators in the model are rounded down to the given bits of precision
+- `rounding_threshold_bits` (Union\[None, int, Dict\[str, Union\[str, int\]\]\]): Defines precision rounding for model accumulators. Accepts None, an int, or a dict. The dict can specify 'method' (fhe.Exactness.EXACT or fhe.Exactness.APPROXIMATE) and 'n_bits' ('auto' or int)
- `p_error` (Optional\[float\]): probability of error of a single PBS
- `global_p_error` (Optional\[float\]): probability of error of the full circuit. In FHE simulation `global_p_error` is set to 0
- `output_onnx_file` (str): temporary file to store ONNX model. If None a temporary file is generated
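
To show how the updated signature is meant to be called, here is a hedged usage sketch of `compile_torch_model` with the dictionary form of `rounding_threshold_bits`. The toy network, the random calibration set, and the chosen bit-widths are placeholders introduced for illustration; only the dictionary keys and the `fhe.Exactness` values follow the description above.

```python
# Sketch: compiling a small torch model with the dict form of
# rounding_threshold_bits. The model and input set are placeholders.
import numpy
import torch

from concrete import fhe
from concrete.ml.torch.compile import compile_torch_model


class TinyNet(torch.nn.Module):
    """A toy network used only to illustrate the compile call."""

    def __init__(self):
        super().__init__()
        self.fc1 = torch.nn.Linear(10, 8)
        self.fc2 = torch.nn.Linear(8, 2)

    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x)))


# Calibration input-set: 100 samples with 10 features each
inputset = numpy.random.uniform(-1, 1, size=(100, 10)).astype(numpy.float32)

quantized_module = compile_torch_model(
    TinyNet(),
    inputset,
    n_bits=6,
    # Round accumulators approximately, down to an explicit bit-width
    rounding_threshold_bits={"method": fhe.Exactness.APPROXIMATE, "n_bits": 7},
)
```

Passing a plain int instead of the dictionary keeps the previous behavior, and `None` disables accumulator rounding altogether.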
diff --git a/docs/references/api/concrete.ml.torch.hybrid_model.md b/docs/references/api/concrete.ml.torch.hybrid_model.md
index a3630ed87..5d6afe1de 100644
--- a/docs/references/api/concrete.ml.torch.hybrid_model.md
+++ b/docs/references/api/concrete.ml.torch.hybrid_model.md
@@ -12,7 +12,7 @@ Implement the conversion of a torch model to a hybrid fhe/torch inference.
______________________________________________________________________
-
+
## function `tuple_to_underscore_str`
@@ -32,7 +32,7 @@ Convert a tuple to a string representation.
______________________________________________________________________
-
+
## function `underscore_str_to_tuple`
@@ -52,7 +52,7 @@ Convert a string representation of a tuple to a tuple.
______________________________________________________________________
-
+
## function `convert_conv1d_to_linear`
@@ -80,13 +80,13 @@ Simple enum for different modes of execution of HybridModel.
______________________________________________________________________
-
+
## class `RemoteModule`
A wrapper class for the modules to be evaluated remotely with FHE.
-
+
### method `__init__`
@@ -102,7 +102,7 @@ __init__(
______________________________________________________________________
-
+
### method `forward`
@@ -133,7 +133,7 @@ To change the behavior of this forward function one must change the fhe_local_mo
______________________________________________________________________
-
+
### method `init_fhe_client`
@@ -157,7 +157,7 @@ Set the clients keys.
______________________________________________________________________
-
+
### method `remote_call`
@@ -177,7 +177,7 @@ Call the remote server to get the private module inference.
______________________________________________________________________
-
+
## class `HybridFHEModel`
@@ -193,7 +193,7 @@ This is done by converting targeted modules by RemoteModules. This will modify t
- `model_name` (str): Model name identifier
- `verbose` (int): If logs should be printed when interacting with FHE server
-
+
### method `__init__`
@@ -209,7 +209,7 @@ __init__(
______________________________________________________________________
-
+
### method `compile_model`
@@ -235,7 +235,7 @@ Compiles the specific layers to FHE.
______________________________________________________________________
-
+
### method `init_client`
@@ -255,7 +255,7 @@ Initialize client for all remote modules.
______________________________________________________________________
-
+
### method `publish_to_hub`
@@ -267,7 +267,7 @@ Allow the user to push the model and FHE required files to HF Hub.
______________________________________________________________________
-
+
### method `save_and_clear_private_info`
@@ -284,7 +284,7 @@ Save the PyTorch model to the provided path and also saves the corresponding FHE
______________________________________________________________________
-
+
### method `set_fhe_mode`
@@ -300,7 +300,7 @@ Set Hybrid FHE mode for all remote modules.
______________________________________________________________________
-
+
## class `LoggerStub`
@@ -308,7 +308,7 @@ Placeholder type for a typical logger like the one from loguru.
______________________________________________________________________
-
+
### method `info`
@@ -324,7 +324,7 @@ Placeholder function for logger.info.
______________________________________________________________________
-
+
## class `HybridFHEModelServer`
@@ -332,7 +332,7 @@ Hybrid FHE Model Server.
This is a class object to serve FHE models serialized using HybridFHEModel.
-
+
### method `__init__`
@@ -342,7 +342,7 @@ __init__(key_path: Path, model_dir: Path, logger: Optional[LoggerStub])
______________________________________________________________________
-
+
### method `add_key`
@@ -365,7 +365,7 @@ Dict\[str, str\]
______________________________________________________________________
-
+
### method `check_inputs`
@@ -391,7 +391,7 @@ Check that the given configuration exist in the compiled models folder.
______________________________________________________________________
-
+
### method `compute`
@@ -421,7 +421,7 @@ Compute the circuit over encrypted input.
______________________________________________________________________
-
+
### method `dump_key`
@@ -438,7 +438,7 @@ Dump a public key to a stream.
______________________________________________________________________
-
+
### method `get_circuit`
@@ -460,7 +460,7 @@ Get circuit based on model name, module name and input shape.
______________________________________________________________________
-
+
### method `get_client`
@@ -486,7 +486,7 @@ Get client.
______________________________________________________________________
-
+
### method `list_modules`
@@ -505,7 +505,7 @@ Dict\[str, Dict\[str, Dict\]\]
______________________________________________________________________
-
+
### method `list_shapes`
@@ -525,7 +525,7 @@ Dict\[str, Dict\]
______________________________________________________________________
-
+
### method `load_key`
diff --git a/docs/references/api/concrete.ml.torch.numpy_module.md b/docs/references/api/concrete.ml.torch.numpy_module.md
index 5c5e546e3..4b622c953 100644
--- a/docs/references/api/concrete.ml.torch.numpy_module.md
+++ b/docs/references/api/concrete.ml.torch.numpy_module.md
@@ -32,7 +32,7 @@ General interface to transform a torch.nn.Module to numpy module.
__init__(
model: Union[Module, ModelProto],
dummy_input: Optional[Tensor, Tuple[Tensor, ]] = None,
- debug_onnx_output_file_path: Optional[str, Path] = None
+ debug_onnx_output_file_path: Optional[Path, str] = None
)
```
diff --git a/docs/built-in-models/pandas.md b/docs/references/pandas.md
similarity index 100%
rename from docs/built-in-models/pandas.md
rename to docs/references/pandas.md