Implement scikit-learn linear models.
class LinearRegression
A linear regression model with FHE.
Parameters:
n_bits (int, Dict[str, int]): Number of bits to quantize the model. If an int is passed, it is used to quantize both inputs and weights. If a dict is passed, it must contain "op_inputs" and "op_weights" as keys, mapped to their number of quantization bits:
- op_inputs: number of bits to quantize the input values
- op_weights: number of bits to quantize the learned parameters
Defaults to 8.
For more details on LinearRegression please refer to the scikit-learn documentation: https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html
__init__(
n_bits=8,
fit_intercept=True,
normalize='deprecated',
copy_X=True,
n_jobs=None,
positive=False
)
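A minimal usage sketch (the dataset and variable names are illustrative; the import path, compile step and fhe argument follow Concrete ML's documented API):

```python
from sklearn.datasets import make_regression
from concrete.ml.sklearn import LinearRegression

X, y = make_regression(n_samples=100, n_features=4, noise=0.1, random_state=0)

# n_bits may also be a dict, e.g. {"op_inputs": 8, "op_weights": 8}
model = LinearRegression(n_bits=8)
model.fit(X, y)

# Compile the quantized model to an FHE circuit, using X as a representative input set
model.compile(X)

# Predict on clear quantized values, or run the actual FHE circuit
y_clear = model.predict(X[:3])
y_fhe = model.predict(X[:3], fhe="execute")
```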
property fhe_circuit
Get the FHE circuit.
The FHE circuit combines the computational graph, MLIR, client and server into a single object. More information is available in the Concrete documentation (https://docs.zama.ai/concrete/get-started/terminology). Is None if the model is not fitted.
Returns:
Circuit: The FHE circuit.
property is_compiled
Indicate if the model is compiled.
Returns:
bool: Whether the model is compiled.
property is_fitted
Indicate if the model is fitted.
Returns:
bool: Whether the model is fitted.
property onnx_model
Get the ONNX model.
Is None if the model is not fitted.
Returns:
onnx.ModelProto: The ONNX model.
dump_dict() → Dict[str, Any]
load_dict(metadata: Dict)
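dump_dict and load_dict back Concrete ML's JSON serialization. A hedged sketch using the higher-level dumps/loads helpers; these import paths are an assumption based on the library's serialization module and should be checked against the installed version:

```python
from concrete.ml.common.serialization.dumpers import dumps
from concrete.ml.common.serialization.loaders import loads

# Continuing the sketch above: round-trip the fitted model through JSON
serialized = dumps(model)     # internally relies on dump_dict()
restored = loads(serialized)  # internally relies on load_dict()
```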
class SGDClassifier
An FHE linear classifier model fitted with stochastic gradient descent.
Parameters:
n_bits (int, Dict[str, int]): Number of bits to quantize the model. If an int is passed, it is used to quantize both inputs and weights. If a dict is passed, it must contain "op_inputs" and "op_weights" as keys, mapped to their number of quantization bits:
- op_inputs: number of bits to quantize the input values
- op_weights: number of bits to quantize the learned parameters
Defaults to 8.
fit_encrypted (bool): Indicate whether the model should be fitted in FHE. Defaults to False.
parameters_range (Optional[Tuple[float, float]]): Range of values to consider for the model's parameters when compiling it after training it in FHE (only used if fit_encrypted is True). Defaults to None.
batch_size (int): Batch size to use for the gradient descent during FHE training (only used if fit_encrypted is True). Defaults to 8.
For more details on SGDClassifier please refer to the scikit-learn documentation: https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.SGDClassifier.html
__init__(
n_bits=8,
fit_encrypted=False,
parameters_range=None,
loss='log_loss',
penalty='l2',
alpha=0.0001,
l1_ratio=0.15,
fit_intercept=True,
max_iter: int = 1000,
tol=0.001,
shuffle=True,
verbose=0,
epsilon=0.1,
n_jobs=None,
random_state=None,
learning_rate='optimal',
eta0=0.0,
power_t=0.5,
early_stopping=False,
validation_fraction=0.1,
n_iter_no_change=5,
class_weight=None,
warm_start=False,
average=False
)
property fhe_circuit
Get the FHE circuit.
The FHE circuit combines the computational graph, MLIR, client and server into a single object. More information is available in the Concrete documentation (https://docs.zama.ai/concrete/get-started/terminology). Is None if the model is not fitted.
Returns:
Circuit: The FHE circuit.
property is_compiled
Indicate if the model is compiled.
Returns:
bool: Whether the model is compiled.
property is_fitted
Indicate if the model is fitted.
Returns:
bool: Whether the model is fitted.
property n_classes_
Get the model's number of classes.
Using this attribute is deprecated.
Returns:
int: The model's number of classes.
property onnx_model
Get the ONNX model.
Is None if the model is not fitted.
Returns:
onnx.ModelProto: The ONNX model.
property target_classes_
Get the model's classes.
Using this attribute is deprecated.
Returns:
Optional[numpy.ndarray]: The model's classes.
dump_dict() → Dict[str, Any]
fit(
X: Union[ndarray, Tensor, ForwardRef('DataFrame'), List],
y: Union[ndarray, Tensor, ForwardRef('DataFrame'), ForwardRef('Series'), List],
fhe: Optional[Union[str, FheMode]] = None,
coef_init: Optional[ndarray] = None,
intercept_init: Optional[ndarray] = None,
sample_weight: Optional[ndarray] = None,
device: str = 'cpu'
)
Fit SGDClassifier.
For more details on some of these arguments, please refer to: https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.SGDClassifier.html
Training on encrypted data differs from scikit-learn's implementation in several ways:
- The learning rate is constant (self.learning_rate_value)
- Training uses mini-batches of size self.batch_size instead of the full dataset
Args:
X (Data): The training data, as a Numpy array, Torch tensor, Pandas DataFrame or List.
y (Target): The target data, as a Numpy array, Torch tensor, Pandas DataFrame, Pandas Series or List.
fhe (Optional[Union[str, FheMode]]): The mode to use for FHE training. Can be FheMode.DISABLE for Concrete ML Python (quantized) training, FheMode.SIMULATE for FHE simulation and FheMode.EXECUTE for actual FHE execution. Can also be the string representation of any of these values. If None, training is done in floating point in the clear through scikit-learn. Defaults to None.
coef_init (Optional[numpy.ndarray]): The initial coefficients to warm-start the optimization. Defaults to None.
intercept_init (Optional[numpy.ndarray]): The initial intercept to warm-start the optimization. Defaults to None.
sample_weight (Optional[numpy.ndarray]): Weights applied to individual samples (1. for unweighted). Currently not supported for FHE training. Defaults to None.
device (str): FHE compilation device, either 'cpu' or 'cuda'.
Returns: The fitted estimator.
Raises:
ValueError: If fhe is provided but fit_encrypted is False.
NotImplementedError: If a sample_weight is given while FHE training is enabled.
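A sketch of encrypted training with these arguments; the feature scaling and parameters_range below are illustrative choices, not prescribed values:

```python
from sklearn.datasets import make_classification
from concrete.ml.sklearn import SGDClassifier

X, y = make_classification(n_samples=100, n_features=4, random_state=0)
# FHE training works on bounded inputs, so scale features to a known range
X = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))

model = SGDClassifier(
    n_bits=8,
    fit_encrypted=True,
    parameters_range=(-1.0, 1.0),  # illustrative bound on the learned parameters
    batch_size=8,
    max_iter=50,
)

# fhe="simulate" exercises the training circuit without real encryption;
# use fhe="execute" for actual FHE training, or fhe=None for clear scikit-learn training
model.fit(X, y, fhe="simulate")
```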
get_sklearn_params(deep: bool = True) → dict
load_dict(metadata: Dict)
partial_fit(
X: ndarray,
y: ndarray,
fhe: Optional[Union[str, FheMode]] = None,
classes=None
)
Fit SGDClassifier for a single iteration.
This function does one iteration of SGD training. Looping n_times over this function is equivalent to calling 'fit' with max_iter=n_times.
Args:
X (Data): The training data, as a Numpy array, Torch tensor, Pandas DataFrame or List.
y (Target): The target data, as a Numpy array, Torch tensor, Pandas DataFrame, Pandas Series or List.
fhe (Optional[Union[str, FheMode]]): The mode to use for FHE training. Can be FheMode.DISABLE for Concrete ML Python (quantized) training, FheMode.SIMULATE for FHE simulation and FheMode.EXECUTE for actual FHE execution. Can also be the string representation of any of these values. If None, training is done in floating point in the clear through scikit-learn. Defaults to None.
classes (Optional[numpy.ndarray]): The classes in the dataset. Must be provided in the first call to partial_fit. If provided in subsequent calls, it must match the classes given in the first call.
Raises:
NotImplementedError: If FHE training is disabled.
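Continuing the previous sketch, an iterative loop over partial_fit; note that classes must be supplied on the first call:

```python
import numpy

classes = numpy.unique(y)

model = SGDClassifier(n_bits=8, fit_encrypted=True, parameters_range=(-1.0, 1.0))

# Ten calls to partial_fit are equivalent to fit with max_iter=10
for i in range(10):
    model.partial_fit(X, y, fhe="simulate", classes=classes if i == 0 else None)
```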
post_processing(y_preds: ndarray) → ndarray
Apply post-processing to the de-quantized predictions.
This is called at the end of the predict_proba method and is only available for the log loss and modified Huber losses. Multiclass probability estimates are derived from binary (one-vs.-rest) estimates by simple normalization, as recommended by Zadrozny and Elkan.
Binary probability estimates for loss="modified_huber" are given by (clip(decision_function(X), -1, 1) + 1) / 2. For other loss functions, proper probability calibration requires wrapping the classifier with sklearn.calibration.CalibratedClassifierCV instead.
Args:
y_preds (Data): The de-quantized predictions to post-process. It must have a shape of (n_samples, n_features).
Returns:
numpy.ndarray: The post-processed predictions, with shape (n_samples, n_classes).
Raises:
NotImplementedError: If the given loss is not supported.
References: Zadrozny and Elkan, "Transforming classifier scores into multiclass probability estimates", SIGKDD'02, https://dl.acm.org/doi/pdf/10.1145/775047.775151
The justification for the formula in the loss="modified_huber" case is in Appendix B of: http://jmlr.csail.mit.edu/papers/volume2/zhang02c/zhang02c.pdf
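For intuition, a small numpy sketch of the modified Huber mapping quoted above (the decision values are made up):

```python
import numpy

decision = numpy.array([-2.5, -0.3, 0.0, 0.7, 3.1])  # hypothetical decision_function outputs
proba = (numpy.clip(decision, -1, 1) + 1) / 2
print(proba)  # [0.   0.35 0.5  0.85 1.  ]
```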
class SGDRegressor
An FHE linear regression model fitted with stochastic gradient descent.
Parameters:
n_bits (int, Dict[str, int]): Number of bits to quantize the model. If an int is passed, it is used to quantize both inputs and weights. If a dict is passed, it must contain "op_inputs" and "op_weights" as keys, mapped to their number of quantization bits:
- op_inputs: number of bits to quantize the input values
- op_weights: number of bits to quantize the learned parameters
Defaults to 8.
For more details on SGDRegressor please refer to the scikit-learn documentation: https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.SGDRegressor.html
__init__(
n_bits=8,
loss='squared_error',
penalty='l2',
alpha=0.0001,
l1_ratio=0.15,
fit_intercept=True,
max_iter=1000,
tol=0.001,
shuffle=True,
verbose=0,
epsilon=0.1,
random_state=None,
learning_rate='invscaling',
eta0=0.01,
power_t=0.25,
early_stopping=False,
validation_fraction=0.1,
n_iter_no_change=5,
warm_start=False,
average=False
)
property fhe_circuit
Get the FHE circuit.
The FHE circuit combines the computational graph, MLIR, client and server into a single object. More information is available in the Concrete documentation (https://docs.zama.ai/concrete/get-started/terminology). Is None if the model is not fitted.
Returns:
Circuit: The FHE circuit.
property is_compiled
Indicate if the model is compiled.
Returns:
bool: Whether the model is compiled.
property is_fitted
Indicate if the model is fitted.
Returns:
bool: Whether the model is fitted.
property onnx_model
Get the ONNX model.
Is None if the model is not fitted.
Returns:
onnx.ModelProto: The ONNX model.
dump_dict() → Dict[str, Any]
load_dict(metadata: Dict)
class ElasticNet
An ElasticNet regression model with FHE.
Parameters:
n_bits (int, Dict[str, int]): Number of bits to quantize the model. If an int is passed, it is used to quantize both inputs and weights. If a dict is passed, it must contain "op_inputs" and "op_weights" as keys, mapped to their number of quantization bits:
- op_inputs: number of bits to quantize the input values
- op_weights: number of bits to quantize the learned parameters
Defaults to 8.
For more details on ElasticNet please refer to the scikit-learn documentation: https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.ElasticNet.html
__init__(
n_bits=8,
alpha=1.0,
l1_ratio=0.5,
fit_intercept=True,
normalize='deprecated',
precompute=False,
max_iter=1000,
copy_X=True,
tol=0.0001,
warm_start=False,
positive=False,
random_state=None,
selection='cyclic'
)
property fhe_circuit
Get the FHE circuit.
The FHE circuit combines the computational graph, MLIR, client and server into a single object. More information is available in the Concrete documentation (https://docs.zama.ai/concrete/get-started/terminology). Is None if the model is not fitted.
Returns:
Circuit: The FHE circuit.
property is_compiled
Indicate if the model is compiled.
Returns:
bool: Whether the model is compiled.
property is_fitted
Indicate if the model is fitted.
Returns:
bool: Whether the model is fitted.
property onnx_model
Get the ONNX model.
Is None if the model is not fitted.
Returns:
onnx.ModelProto: The ONNX model.
dump_dict() → Dict[str, Any]
load_dict(metadata: Dict)
class Lasso
A Lasso regression model with FHE.
Parameters:
n_bits (int, Dict[str, int]): Number of bits to quantize the model. If an int is passed, it is used to quantize both inputs and weights. If a dict is passed, it must contain "op_inputs" and "op_weights" as keys, mapped to their number of quantization bits:
- op_inputs: number of bits to quantize the input values
- op_weights: number of bits to quantize the learned parameters
Defaults to 8.
For more details on Lasso please refer to the scikit-learn documentation: https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Lasso.html
__init__(
n_bits=8,
alpha: float = 1.0,
fit_intercept=True,
normalize='deprecated',
precompute=False,
copy_X=True,
max_iter=1000,
tol=0.0001,
warm_start=False,
positive=False,
random_state=None,
selection='cyclic'
)
property fhe_circuit
Get the FHE circuit.
The FHE circuit combines the computational graph, MLIR, client and server into a single object. More information is available in the Concrete documentation (https://docs.zama.ai/concrete/get-started/terminology). Is None if the model is not fitted.
Returns:
Circuit: The FHE circuit.
property is_compiled
Indicate if the model is compiled.
Returns:
bool: Whether the model is compiled.
property is_fitted
Indicate if the model is fitted.
Returns:
bool: Whether the model is fitted.
property onnx_model
Get the ONNX model.
Is None if the model is not fitted.
Returns:
onnx.ModelProto: The ONNX model.
dump_dict() → Dict[str, Any]
load_dict(metadata: Dict)
class Ridge
A Ridge regression model with FHE.
Parameters:
n_bits (int, Dict[str, int]): Number of bits to quantize the model. If an int is passed, it is used to quantize both inputs and weights. If a dict is passed, it must contain "op_inputs" and "op_weights" as keys, mapped to their number of quantization bits:
- op_inputs: number of bits to quantize the input values
- op_weights: number of bits to quantize the learned parameters
Defaults to 8.
For more details on Ridge please refer to the scikit-learn documentation: https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Ridge.html
__init__(
n_bits=8,
alpha: float = 1.0,
fit_intercept=True,
normalize='deprecated',
copy_X=True,
max_iter=None,
tol=0.001,
solver='auto',
positive=False,
random_state=None
)
property fhe_circuit
Get the FHE circuit.
The FHE circuit combines the computational graph, MLIR, client and server into a single object. More information is available in the Concrete documentation (https://docs.zama.ai/concrete/get-started/terminology). Is None if the model is not fitted.
Returns:
Circuit: The FHE circuit.
property is_compiled
Indicate if the model is compiled.
Returns:
bool: Whether the model is compiled.
property is_fitted
Indicate if the model is fitted.
Returns:
bool: Whether the model is fitted.
property onnx_model
Get the ONNX model.
Is None if the model is not fitted.
Returns:
onnx.ModelProto: The ONNX model.
dump_dict() → Dict[str, Any]
load_dict(metadata: Dict)
class LogisticRegression
A logistic regression model with FHE.
Parameters:
n_bits (int, Dict[str, int]): Number of bits to quantize the model. If an int is passed, it is used to quantize both inputs and weights. If a dict is passed, it must contain "op_inputs" and "op_weights" as keys, mapped to their number of quantization bits:
- op_inputs: number of bits to quantize the input values
- op_weights: number of bits to quantize the learned parameters
Defaults to 8.
For more details on LogisticRegression please refer to the scikit-learn documentation: https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html
__init__(
n_bits=8,
penalty='l2',
dual=False,
tol=0.0001,
C=1.0,
fit_intercept=True,
intercept_scaling=1,
class_weight=None,
random_state=None,
solver='lbfgs',
max_iter=100,
multi_class='auto',
verbose=0,
warm_start=False,
n_jobs=None,
l1_ratio=None
)
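A minimal classification sketch, mirroring the LinearRegression example above; fhe="simulate" estimates FHE behavior without the cost of encrypted execution:

```python
from sklearn.datasets import make_classification
from concrete.ml.sklearn import LogisticRegression

X, y = make_classification(n_samples=100, n_features=4, random_state=0)

model = LogisticRegression(n_bits=8)
model.fit(X, y)
model.compile(X)

probas = model.predict_proba(X[:3], fhe="simulate")
labels = model.predict(X[:3], fhe="simulate")
```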
property fhe_circuit
Get the FHE circuit.
The FHE circuit combines the computational graph, MLIR, client and server into a single object. More information is available in the Concrete documentation (https://docs.zama.ai/concrete/get-started/terminology). Is None if the model is not fitted.
Returns:
Circuit: The FHE circuit.
property is_compiled
Indicate if the model is compiled.
Returns:
bool: Whether the model is compiled.
property is_fitted
Indicate if the model is fitted.
Returns:
bool: Whether the model is fitted.
property n_classes_
Get the model's number of classes.
Using this attribute is deprecated.
Returns:
int: The model's number of classes.
property onnx_model
Get the ONNX model.
Is None if the model is not fitted.
Returns:
onnx.ModelProto: The ONNX model.
property target_classes_
Get the model's classes.
Using this attribute is deprecated.
Returns:
Optional[numpy.ndarray]: The model's classes.
dump_dict() → Dict[str, Any]
load_dict(metadata: Dict)