From f0cfc5416537d41b697faf84b754c5e6a30d2407 Mon Sep 17 00:00:00 2001 From: Raphael Doukhan Date: Sat, 19 Aug 2023 08:36:25 +0000 Subject: [PATCH 01/30] GITBOOK-23: update lr tuto --- ...fiable-linear-regression-model-in-orion.md | 62 +++++++++---------- 1 file changed, 29 insertions(+), 33 deletions(-) diff --git a/docs/academy/tutorials/verifiable-linear-regression-model-in-orion.md b/docs/academy/tutorials/verifiable-linear-regression-model-in-orion.md index e2e993e7c..63c5d14d6 100644 --- a/docs/academy/tutorials/verifiable-linear-regression-model-in-orion.md +++ b/docs/academy/tutorials/verifiable-linear-regression-model-in-orion.md @@ -14,7 +14,7 @@ Content overview: 2. [Transitioning to Cairo:](verifiable-linear-regression-model-in-orion.md#transitioning-to-cairo) In the subsequent stage, we will create a new scarb project and replicate our model to Cairo which is a language for creating STARK-provable programs. 3. [Implementing OLS functions using Orion:](verifiable-linear-regression-model-in-orion.md#implementing-ols-functions-using-orion) To catalyse our development process we will utilise the Orion Framework to construct the OLS functions to build our Verifiable Linear Regression model. -## Simple Linear Regression with Python +### Simple Linear Regression with Python A Regression model is a foundational technique used to determine the relationship between independent variables (predictors) and dependent variables (outcome). This relationship is typically represented by a straight line and is often termed the “line of best fit”. By mapping how variations in one variable **X** may influence changes in another variable **y**, we can make highly accurate predictions on new unseen data points. The mathematical representation of this linear relationship is given by the equation: @@ -22,9 +22,9 @@ $$ y = a + bX \quad \quad \begin{align*} b & \text{= gradient (slope of the line)} \\ a & \text{= intercept (value of } y \text{ when } X \text{ is zero)} \\ y & \text{= y values} \\ X & \text{= x values} \\ \end{align*} $$ -### Generating the dataset +#### Generating the dataset -In the following [notebook](https://github.com/BemTG/Verifiable-Linear-Regression-), we will create a synthetic dataset that will serve as the backbone throughout our tutorial. +In the following [notebook](https://github.com/gizatechxyz/orion\_tutorials/tree/main/verifiable\_linear\_regression\_model), we will create a synthetic dataset that will serve as the backbone throughout our tutorial. ```python import numpy as np @@ -35,8 +35,8 @@ import matplotlib.pyplot as plt SEED = 42 np.random.seed(SEED) # Generate 150 values for X and y -X = np.linspace(-10, 25, 150).astype('int8') -noise = np.random.normal(0, 10, len(X)).astype('int8') +X = np.linspace(-0.5, 0.5, 150).astype('float64') +noise = np.random.normal(0, 0.1, len(X)).astype('float64') ## main equation for generating the dataset y = 2 * X + 5 + noise # y=2x+5 + noise @@ -47,11 +47,9 @@ plt.ylabel('y values') ``` -
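As a quick sanity check (this snippet is not part of the original notebook), we can recover the generating coefficients with NumPy's built-in least-squares fit before deriving them by hand in the next sections:

```python
# Illustrative sanity check (assumes X and y from the cell above):
# np.polyfit with deg=1 returns [slope, intercept] from a least-squares fit,
# which should land close to the generating equation y = 2x + 5.
coeffs = np.polyfit(X, y, deg=1)
print(f"polyfit slope: {coeffs[0]:.4f}, intercept: {coeffs[1]:.4f}")
```
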
- Upon inspecting the plot, it is readily apparent that there exists a positive correlation between the X and y values, consistent with our underlying equation. Our goal in this tutorial is to quantify this relationship using a regression model, using only the data points provided. By utilizing the Ordinary Least Square (OLS) method, we aim to derive a linear equation that closely approximates the original equation from which the dataset was generated from: `y = 2 * X + 5 + noise` -### Computing the gradient of the line +#### Computing the gradient of the line OLS method can help us decipher the linear relationship between the X and y variables by calculating the **gradient (beta)** and corresponding **y intercept (a)** to find the optimal "line of best fit". @@ -75,7 +73,7 @@ print('The slope of regression line:', beta) ``` -### Computing the y-intercept +#### Computing the y-intercept Having determined the **beta** value, our next step is to calculate the **y-intercept**. This can be achieved by substituting the known **beta**, **y mean**, and **X mean** values into our line equation. The rationale behind using the **y mean** and **X mean** is grounded on the principle that the "line of best fit" must intersect these central points. @@ -90,7 +88,7 @@ print('The y intercept of our regression line:', intercept) plt.scatter(X, y, label='Data Points') plt.plot(X, beta * X + intercept, color='red', label='Regression Line') -plt.scatter(17,predicted_y_value, color='green', label='pred for x = 17 ') +plt.scatter(0.17,predicted_y_value, color='green', label='pred for x = 0.17 ') plt.xlabel('X') plt.ylabel('y') plt.title('Linear Regression') @@ -104,11 +102,9 @@ print(f"Calculated intercept: {intercept}") ``` -
- Looking at the above plot we can see we have successfully implemented our Linear regression model and captured the “line of best fit” using the OLS method. -### Model accuracy +#### Model accuracy To assess the efficacy of our regression model, we compute the **mse** and **r\_squared\_score** values, which yield an R-squared score of 0.83, indicating a robust predictive performance for the model. @@ -127,13 +123,13 @@ print("R-squared (R^2):", r_squared) ``` -## Transitioning to Cairo +### Transitioning to Cairo Now that we have a good understanding of the OLS functions used, we will replicate the full linear regression model in Cairo to turn it to a fully verifiable model. Since we will be rebuilding the model from scratch, this will serve as a good opportunity to get familiar with Orion’s built-in functions and operators making the transition to Cairo seamless. -### Creating a new scarb project +#### Creating a new scarb project -Scarb is the Cairo package manager specifically created to streamline our Cairo and Starknet development process. Scarb will typically manage project dependencies, the compilation process (both pure Cairo and Starknet contracts), downloading and building external libraries to accelerate our development with Orion.You can find all information about Scarb and Cairo installation [here](../../framework/get-started.md#installations). +Scarb is the Cairo package manager specifically created to streamline our Cairo and Starknet development process. Scarb will typically manage project dependencies, the compilation process (both pure Cairo and Starknet contracts), downloading and building external libraries to accelerate our development with Orion.You can find all information about Scarb and Cairo installation here. To create a new Scarb project, open your terminal and run: @@ -150,14 +146,14 @@ name = "verifiable_linear_regression" version = "0.1.0" [dependencies] -orion = { git = "https://github.com/gizatechxyz/orion.git", branch = "develop" } +orion = { git = "https://github.com/gizatechxyz/orion.git", branch = "einsum-impl" } [scripts] test = "scarb cairo-test -f linear_regression_test" ``` -### Gerating the dataset in Cairo +#### Gerating the dataset in Cairo Now let’s generate the files required to begin our transition to Cairo. In our Jupyter Notebook, we will execute the code required to turn our synthetic dataset to fixed point values and represent our X and y values as Fixedpoint Tensors in Orion. 
@@ -171,10 +167,8 @@ def generate_cairo_files(data, name): f.write( "use array::ArrayTrait;\n" + "use orion::operators::tensor::core::{TensorTrait, Tensor, ExtraParams};\n" + - "use orion::operators::tensor::implementations::impl_tensor_i32::Tensor_i32;\n" + - "use orion::numbers::signed_integer::i32::i32;\n\n" + + "use orion::operators::tensor::implementations::impl_tensor_fp::Tensor_fp;\n" + "use orion::numbers::fixed_point::core::{FixedTrait, FixedType, FixedImpl};\n" - "use orion::operators::tensor::implementations::impl_tensor_fp::Tensor_fp;\n" "use orion::numbers::fixed_point::implementations::fp16x16::core::{FP16x16Impl, FP16x16PartialEq };\n"+ "fn {0}() -> Tensor ".format(name) + "{\n" + " let mut shape = ArrayTrait::new();\n" @@ -185,7 +179,7 @@ def generate_cairo_files(data, name): " let mut data = ArrayTrait::new();\n" ) for val in np.nditer(data.flatten()): - f.write(" data.append(FixedTrait::new_unscaled({0}, {1} ));\n".format(abs(int(val)), str(val < 0).lower())) + f.write(" data.append(FixedTrait::new({0}, {1} ));\n".format(abs(int(val * 2**16)), str(val < 0).lower())) f.write( "let extra = ExtraParams { fixed_point: Option::Some(FixedImpl::FP16x16(())) }; \n" + "let tensor = TensorTrait::::new(shape.span(), data.span(), Option::Some(extra)); \n \n" + @@ -216,13 +210,13 @@ This will tell our compiler to include the separate modules listed above during ```rust use array::ArrayTrait; - use orion::operators::tensor::core::{TensorTrait, Tensor, ExtraParams}; use orion::operators::tensor::implementations::impl_tensor_i32::Tensor_i32; use orion::numbers::signed_integer::i32::i32; + use orion::numbers::fixed_point::core::{FixedTrait, FixedType, FixedImpl}; use orion::operators::tensor::implementations::impl_tensor_fp::Tensor_fp; -use orion::numbers::fixed_point::implementations::fp16x16::core::{FP16x16Impl, FP16x16PartialEq }; +use orion::numbers::fixed_point::implementations::fp16x16::core::{FP16x16Impl, FP16x16Into, FP16x16PartialEq }; fn X_values() -> Tensor { let mut shape = ArrayTrait::new(); @@ -241,6 +235,7 @@ let tensor = TensorTrait::::new(shape.span(), data.span(), Option::So return tensor; } + ``` Since Cairo does not come with built-in signed integers we have to explicitly define it for our X and y values. Luckily, this is already implemented in Orion for us as a struct as shown below: @@ -248,7 +243,7 @@ Since Cairo does not come with built-in signed integers we have to explicitly de ```rust // Example of a FixedType. struct FixedType { - mag: u32, + mag: u128, sign: bool } @@ -276,11 +271,11 @@ let extra = ExtraParams { fixed_point: Option::Some(FixedImpl::FP16x16(())) }; ``` -## Implementing OLS functions using Orion +### Implementing OLS functions using Orion At this stage, we will be reproducing the OLS functions now that we have generated our X and Y Fixedpoint Tensors. We will begin by creating a separate file for our linear regression functions file named `lin_reg_func.cairo` to host all of our linear regression functions. -### Computing the mean +#### Computing the mean ```rust fn calculate_mean(tensor_data: Tensor) -> FixedType { @@ -293,11 +288,12 @@ fn calculate_mean(tensor_data: Tensor) -> FixedType { return mean; } + ``` The above function takes in a FixedType Tensor and computes its corresponding mean value. We break the steps down by first calculating the cumulative sum of the tensor values using the `cumsum` built-in orion operator. We then divide the result by the length of the tensor size and return the output as a Fixedtype number. 
-### Computing the deviation from the mean +#### Computing the deviation from the mean ```rust fn deviation_from_mean(tensor_data: Tensor ) -> Tensor { @@ -328,7 +324,7 @@ fn deviation_from_mean(tensor_data: Tensor ) -> Tensor { The following deviation\_from\_mean function calculates the deviation from the mean for each element of a given tensor. We initially calculate the tensor's mean value and store it under the variable mean\_value. We then create a for loop to iterate over each element in the tensor values and calculate the deviation from the mean which we will append the result to `deviation_values` array. Finally, we create a new tensor named distance\_from\_mean\_tensor by passing the deviation\_values array and the tensor shape. -### Computing the gradient value +#### Computing the gradient value The OLS gradient (beta) formula: @@ -355,7 +351,7 @@ fn compute_beta(x_values: Tensor, y_values: Tensor ) -> Fi We can now compute the beta value for our linear regression utilising the previous deviation\_from\_mean function. We first calculate both the deviation of x values and y values from the mean and store them in separate variables as tensors. To calculate the covariance, we use the built-in Orion `matmul` operator to multiply x\_deviation by y\_deviation tensors. Similarly, we compute the X variance by multiplying x\_deviation tensor by itself. Finally, we divide the `x_y_covariance` by the `x_variance` to get an approximate gradient value for our regression model. -### Computing the y-intercept +#### Computing the y-intercept ```rust /// Calculates the intercept for linear regression. @@ -374,7 +370,7 @@ fn compute_intercept(beta_value:FixedType, x_values: Tensor, y_values Calculating the y-intercept is fairly simple, we just need to substitute the calculated beta, y\_mean and x\_mean values and rearrange for the intercept value as previously shown in the Python implementation section. -### Testing the model +#### Testing the model Now that we have implemented all the necessary functions for the OLS method, we can finally test our linear regression model. We begin by creating a new separate test file named `test.cairo` and import all the necessary Orion libraries including our `X_values` and `y_values` found in the generated folder. We also import all the OLS functions from `lin_reg_func.cairo` file as we will be relying upon them to construct the regression model. @@ -421,11 +417,11 @@ fn linear_regression_test() { let mse = compute_mse(y_values, y_pred); // mse.print(); // mean squared error ouput let r_score = calculate_r_score(y_values, y_pred); - // r_score.print(); // accuracy of model 0.8303375244140625 + // r_score.print(); // accuracy of model 0.97494506835 assert(beta_value.mag > 0, 'x & y not positively correlated'); assert(r_score.mag > 0, 'R-Squared needs to be above 0'); - assert(r_score.mag < 62259, 'R-Squared has to be below 65536'); // 65536 represents ONE in fp16x16. + assert(r_score.mag < 65536, 'R-Squared has to be below 65536'); // 65536 represents ONE in fp16x16. 
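    // Note (added for clarity): in FP16x16 the magnitude is scaled by 2^16,
    // so 65536 represents 1.0 and 32768 represents 0.5. The final assert
    // below therefore requires an R-squared score above 0.5.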
assert(r_score.mag > 32768, 'Accuracy below 50% '); } From 7e116a950f343e331e89dbe774c2f46591cbe35a Mon Sep 17 00:00:00 2001 From: Raphael Doukhan Date: Sat, 19 Aug 2023 08:41:31 +0000 Subject: [PATCH 02/30] GITBOOK-24: change request with no subject merged in GitBook --- .../verifiable-linear-regression-model-in-orion.md | 11 +++++------ 1 file changed, 5 insertions(+), 6 deletions(-) diff --git a/docs/academy/tutorials/verifiable-linear-regression-model-in-orion.md b/docs/academy/tutorials/verifiable-linear-regression-model-in-orion.md index 63c5d14d6..e2c475a76 100644 --- a/docs/academy/tutorials/verifiable-linear-regression-model-in-orion.md +++ b/docs/academy/tutorials/verifiable-linear-regression-model-in-orion.md @@ -69,8 +69,7 @@ denominator = sum((X - X.mean())**2) beta = numerator / denominator print('The slope of regression line:', beta) ->> The slope of regression line: 2.0315325245038856 - +>> The slope of regression line: 2.0133337976122685 ``` #### Computing the y-intercept @@ -97,8 +96,8 @@ plt.grid(True) plt.show() print(f"Calculated beta: {beta}") print(f"Calculated intercept: {intercept}") ->> Calculated beta: 2.0315325245038856 ->> Calculated intercept: 3.916899671448352 +>> Calculated beta: 2.0133337976122685 +>> Calculated intercept: 4.991767313284746 ``` @@ -118,8 +117,8 @@ r_squared = 1 - np.sum((y - y_pred)**2) / np.sum((y - y_mean)**2) print("Mean Squared Error (MSE):", mse) print("R-squared (R^2):", r_squared) ->>Mean Squared Error (MSE): 81.78873049706822 ->>R-squared (R^2): 0.8303237877258618 +>> Mean Squared Error (MSE): 0.008805873341370826 +>> R-squared (R^2): 0.974921526753728 ``` From 9396deef28ea6bfab60bd85e7ea6063922e1e267 Mon Sep 17 00:00:00 2001 From: Raphael Doukhan Date: Mon, 21 Aug 2023 10:39:56 +0000 Subject: [PATCH 03/30] GITBOOK-25: change request with no subject merged in GitBook --- docs/hub/algorithms.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/hub/algorithms.md b/docs/hub/algorithms.md index c4e9d8ba2..d20293cb6 100644 --- a/docs/hub/algorithms.md +++ b/docs/hub/algorithms.md @@ -2,6 +2,6 @@ Discover amazing ML models made by the community with Orion! -
Verifiable-Linear-Regression
August 15 - Bem Baraki
MNIST Classification with Feedforward Neural Network June 8, 2023 - Raphael Doukhan
+
Verifiable-Logistic-Regression

August 20 - Bowen

Verifiable-Linear-Regression
August 15 - Bem Baraki
MNIST Classification with Feedforward Neural Network June 8, 2023 - Raphael Doukhan
We encourage you to contribute your own implementations and help grow this collection, benefiting the community and driving innovation in verifiable ML model inference. From 29cbeada18a9a49fbf3289d36b6f86f5a3e3e0ba Mon Sep 17 00:00:00 2001 From: raphaelDkhn Date: Wed, 23 Aug 2023 10:23:16 +0300 Subject: [PATCH 04/30] refactor abs acos acosh --- .../implementations/impl_tensor_fp.cairo | 6 +++--- .../implementations/impl_tensor_i32.cairo | 2 +- .../implementations/impl_tensor_i8.cairo | 2 +- src/operators/tensor/math/abs.cairo | 2 +- .../tensor/math/abs/abs_fp/core.cairo | 4 ++-- .../tensor/math/abs/abs_fp/fp16x16.cairo | 18 +++++++++-------- .../tensor/math/abs/abs_fp/fp8x23.cairo | 17 ++++++++-------- src/operators/tensor/math/abs/abs_i32.cairo | 17 ++++++++-------- src/operators/tensor/math/abs/abs_i8.cairo | 17 ++++++++-------- .../tensor/math/acos/acos_fp/core.cairo | 4 ++-- .../tensor/math/acos/acos_fp/fp16x16.cairo | 19 +++++++++--------- .../tensor/math/acos/acos_fp/fp8x23.cairo | 20 +++++++++---------- .../tensor/math/acosh/acosh_fp/core.cairo | 8 ++++---- .../tensor/math/acosh/acosh_fp/fp16x16.cairo | 17 ++++++++-------- .../tensor/math/acosh/acosh_fp/fp8x23.cairo | 17 ++++++++-------- .../tensor/math/acosh/acosh_i32/core.cairo | 8 ++++---- .../tensor/math/acosh/acosh_i32/fp16x16.cairo | 18 ++++++++--------- .../tensor/math/acosh/acosh_i32/fp8x23.cairo | 20 +++++++++---------- .../tensor/math/acosh/acosh_i8/core.cairo | 8 ++++---- .../tensor/math/acosh/acosh_i8/fp16x16.cairo | 18 ++++++++--------- .../tensor/math/acosh/acosh_i8/fp8x23.cairo | 18 ++++++++--------- .../tensor/math/acosh/acosh_u32/core.cairo | 8 ++++---- .../tensor/math/acosh/acosh_u32/fp16x16.cairo | 18 ++++++++--------- .../tensor/math/acosh/acosh_u32/fp8x23.cairo | 19 +++++++++--------- 24 files changed, 156 insertions(+), 149 deletions(-) diff --git a/src/operators/tensor/implementations/impl_tensor_fp.cairo b/src/operators/tensor/implementations/impl_tensor_fp.cairo index 1e9b70f06..bf8399ef6 100644 --- a/src/operators/tensor/implementations/impl_tensor_fp.cairo +++ b/src/operators/tensor/implementations/impl_tensor_fp.cairo @@ -138,7 +138,7 @@ impl Tensor_fp of TensorTrait { } fn abs(self: @Tensor) -> Tensor { - abs(self).unwrap() + abs(*self).unwrap() } fn ceil(self: @Tensor) -> Tensor { @@ -200,9 +200,9 @@ impl Tensor_fp of TensorTrait { } fn acos(self: @Tensor) -> Tensor { - acos(self).unwrap() + acos(*self).unwrap() } - + fn onehot( self: @Tensor, depth: usize, axis: Option, values: Span ) -> Tensor { diff --git a/src/operators/tensor/implementations/impl_tensor_i32.cairo b/src/operators/tensor/implementations/impl_tensor_i32.cairo index 19bc70c89..8a2f2a545 100644 --- a/src/operators/tensor/implementations/impl_tensor_i32.cairo +++ b/src/operators/tensor/implementations/impl_tensor_i32.cairo @@ -128,7 +128,7 @@ impl Tensor_i32 of TensorTrait { } fn abs(self: @Tensor) -> Tensor { - abs(self) + abs(*self) } fn ceil(self: @Tensor) -> Tensor { diff --git a/src/operators/tensor/implementations/impl_tensor_i8.cairo b/src/operators/tensor/implementations/impl_tensor_i8.cairo index b1af718c5..fceb99b33 100644 --- a/src/operators/tensor/implementations/impl_tensor_i8.cairo +++ b/src/operators/tensor/implementations/impl_tensor_i8.cairo @@ -131,7 +131,7 @@ impl Tensor_i8 of TensorTrait { } fn abs(self: @Tensor) -> Tensor { - abs(self) + abs(*self) } fn ceil(self: @Tensor) -> Tensor { diff --git a/src/operators/tensor/math/abs.cairo b/src/operators/tensor/math/abs.cairo index 472b62081..d56ebb732 100644 --- 
a/src/operators/tensor/math/abs.cairo +++ b/src/operators/tensor/math/abs.cairo @@ -1,4 +1,4 @@ mod abs_i8; mod abs_i32; mod abs_u32; -mod abs_fp; +mod abs_fp; \ No newline at end of file diff --git a/src/operators/tensor/math/abs/abs_fp/core.cairo b/src/operators/tensor/math/abs/abs_fp/core.cairo index f7c8b13e7..b30f1c412 100644 --- a/src/operators/tensor/math/abs/abs_fp/core.cairo +++ b/src/operators/tensor/math/abs/abs_fp/core.cairo @@ -6,8 +6,8 @@ use orion::operators::tensor::math::abs::abs_fp::fp16x16; /// Cf: TensorTrait::abs docstring -fn abs(z: @Tensor) -> Option> { - match *z.extra { +fn abs(z: Tensor) -> Option> { + match z.extra { Option::Some(extra_params) => match extra_params.fixed_point { Option::Some(fixed_point) => match fixed_point { FixedImpl::FP8x23(()) => Option::Some(fp8x23::abs(z)), diff --git a/src/operators/tensor/math/abs/abs_fp/fp16x16.cairo b/src/operators/tensor/math/abs/abs_fp/fp16x16.cairo index 46d03d59c..522ad6914 100644 --- a/src/operators/tensor/math/abs/abs_fp/fp16x16.cairo +++ b/src/operators/tensor/math/abs/abs_fp/fp16x16.cairo @@ -9,17 +9,19 @@ use orion::operators::tensor::core::{Tensor, TensorTrait}; /// Cf: TensorTrait::abs docstring -fn abs(z: @Tensor) -> Tensor { +fn abs(mut z: Tensor) -> Tensor { let mut data_result = ArrayTrait::::new(); - let mut data = *z.data; loop { - if data.len() == 0 { - break (); + match z.data.pop_front() { + Option::Some(item) => { + data_result.append((*item).abs()); + }, + Option::None(_) => { + break; + } }; - - let current_index = *data.pop_front().unwrap(); - data_result.append(current_index.abs()); }; - return TensorTrait::new(*z.shape, data_result.span(), *z.extra); + return TensorTrait::new(z.shape, data_result.span(), z.extra); } + diff --git a/src/operators/tensor/math/abs/abs_fp/fp8x23.cairo b/src/operators/tensor/math/abs/abs_fp/fp8x23.cairo index 73395c22b..ca4cd8c60 100644 --- a/src/operators/tensor/math/abs/abs_fp/fp8x23.cairo +++ b/src/operators/tensor/math/abs/abs_fp/fp8x23.cairo @@ -9,17 +9,18 @@ use orion::operators::tensor::core::{Tensor, TensorTrait}; /// Cf: TensorTrait::abs docstring -fn abs(z: @Tensor) -> Tensor { +fn abs(mut z: Tensor) -> Tensor { let mut data_result = ArrayTrait::::new(); - let mut data = *z.data; loop { - if data.len() == 0 { - break (); + match z.data.pop_front() { + Option::Some(item) => { + data_result.append((*item).abs()); + }, + Option::None(_) => { + break; + } }; - - let current_index = *data.pop_front().unwrap(); - data_result.append(current_index.abs()); }; - return TensorTrait::new(*z.shape, data_result.span(), *z.extra); + return TensorTrait::new(z.shape, data_result.span(), z.extra); } diff --git a/src/operators/tensor/math/abs/abs_i32.cairo b/src/operators/tensor/math/abs/abs_i32.cairo index 383788eed..e7cf77f13 100644 --- a/src/operators/tensor/math/abs/abs_i32.cairo +++ b/src/operators/tensor/math/abs/abs_i32.cairo @@ -7,17 +7,18 @@ use orion::operators::tensor::core::{Tensor, TensorTrait}; /// Cf: TensorTrait::abs docstring -fn abs(z: @Tensor) -> Tensor { +fn abs(mut z: Tensor) -> Tensor { let mut data_result = ArrayTrait::::new(); - let mut data = *z.data; loop { - if data.len() == 0 { - break (); + match z.data.pop_front() { + Option::Some(item) => { + data_result.append((*item).abs()); + }, + Option::None(_) => { + break; + } }; - - let current_index = *data.pop_front().unwrap(); - data_result.append(current_index.abs()); }; - return TensorTrait::::new(*z.shape, data_result.span(), *z.extra); + return TensorTrait::::new(z.shape, data_result.span(), 
z.extra); } diff --git a/src/operators/tensor/math/abs/abs_i8.cairo b/src/operators/tensor/math/abs/abs_i8.cairo index 3a356eb40..1ab947f45 100644 --- a/src/operators/tensor/math/abs/abs_i8.cairo +++ b/src/operators/tensor/math/abs/abs_i8.cairo @@ -7,17 +7,18 @@ use orion::operators::tensor::core::{Tensor, TensorTrait}; /// Cf: TensorTrait::abs docstring -fn abs(z: @Tensor) -> Tensor { +fn abs(mut z: Tensor) -> Tensor { let mut data_result = ArrayTrait::::new(); - let mut data = *z.data; loop { - if data.len() == 0 { - break (); + match z.data.pop_front() { + Option::Some(item) => { + data_result.append((*item).abs()); + }, + Option::None(_) => { + break; + } }; - - let current_index = *data.pop_front().unwrap(); - data_result.append(current_index.abs()); }; - return TensorTrait::::new(*z.shape, data_result.span(), *z.extra); + return TensorTrait::::new(z.shape, data_result.span(), z.extra); } diff --git a/src/operators/tensor/math/acos/acos_fp/core.cairo b/src/operators/tensor/math/acos/acos_fp/core.cairo index 1b0934bc2..d991bffe2 100644 --- a/src/operators/tensor/math/acos/acos_fp/core.cairo +++ b/src/operators/tensor/math/acos/acos_fp/core.cairo @@ -4,8 +4,8 @@ use orion::operators::tensor::math::acos::acos_fp::fp8x23; use orion::operators::tensor::math::acos::acos_fp::fp16x16; /// Cf: TensorTrait::acos docstring -fn acos(self: @Tensor) -> Option> { - match *self.extra { +fn acos(self: Tensor) -> Option> { + match self.extra { Option::Some(extra_params) => match extra_params.fixed_point { Option::Some(fixed_point) => match fixed_point { FixedImpl::FP8x23(()) => Option::Some(fp8x23::acos(self)), diff --git a/src/operators/tensor/math/acos/acos_fp/fp16x16.cairo b/src/operators/tensor/math/acos/acos_fp/fp16x16.cairo index a78cacb6b..c2a4afa68 100644 --- a/src/operators/tensor/math/acos/acos_fp/fp16x16.cairo +++ b/src/operators/tensor/math/acos/acos_fp/fp16x16.cairo @@ -8,19 +8,18 @@ use orion::operators::tensor::implementations::impl_tensor_fp::Tensor_fp; use orion::numbers::fixed_point::implementations::fp16x16::core::FP16x16Impl; /// Cf: TensorTrait::acos docstring -fn acos(self: @Tensor) -> Tensor { +fn acos(mut self: Tensor) -> Tensor { let mut result = ArrayTrait::new(); - let mut data = *self.data; - loop { - - let ele = *data.pop_front().unwrap(); - result.append(FixedTrait::acos(ele)); - - if (data.len() == 0) { - break (); + match self.data.pop_front() { + Option::Some(item) => { + result.append((*item).acos()); + }, + Option::None(_) => { + break; + } }; }; - return TensorTrait::::new(*self.shape, result.span(), *self.extra); + return TensorTrait::::new(self.shape, result.span(), self.extra); } diff --git a/src/operators/tensor/math/acos/acos_fp/fp8x23.cairo b/src/operators/tensor/math/acos/acos_fp/fp8x23.cairo index a65503999..9a2c7ac84 100644 --- a/src/operators/tensor/math/acos/acos_fp/fp8x23.cairo +++ b/src/operators/tensor/math/acos/acos_fp/fp8x23.cairo @@ -8,19 +8,19 @@ use orion::operators::tensor::implementations::impl_tensor_fp::Tensor_fp; use orion::numbers::fixed_point::implementations::fp8x23::core::FP8x23Impl; /// Cf: TensorTrait::acos docstring -fn acos(self: @Tensor) -> Tensor { +fn acos(mut self: Tensor) -> Tensor { let mut result = ArrayTrait::new(); - let mut data = *self.data; - loop { - - let ele = *data.pop_front().unwrap(); - result.append(FixedTrait::acos(ele)); - - if (data.len() == 0) { - break (); + match self.data.pop_front() { + Option::Some(item) => { + result.append((*item).acos()); + }, + Option::None(_) => { + break; + } }; }; - return 
TensorTrait::::new(*self.shape, result.span(), *self.extra); + return TensorTrait::::new(self.shape, result.span(), self.extra); } + diff --git a/src/operators/tensor/math/acosh/acosh_fp/core.cairo b/src/operators/tensor/math/acosh/acosh_fp/core.cairo index f2a129b7a..8aebbb985 100644 --- a/src/operators/tensor/math/acosh/acosh_fp/core.cairo +++ b/src/operators/tensor/math/acosh/acosh_fp/core.cairo @@ -8,11 +8,11 @@ fn acosh(self: @Tensor) -> Option> { match *self.extra { Option::Some(extra_params) => match extra_params.fixed_point { Option::Some(fixed_point) => match fixed_point { - FixedImpl::FP8x23(()) => Option::Some(fp8x23::acosh(self)), - FixedImpl::FP16x16(()) => Option::Some(fp16x16::acosh(self)), + FixedImpl::FP8x23(()) => Option::Some(fp8x23::acosh(*self)), + FixedImpl::FP16x16(()) => Option::Some(fp16x16::acosh(*self)), }, - Option::None(_) => Option::Some(fp16x16::acosh(self)), + Option::None(_) => Option::Some(fp16x16::acosh(*self)), }, - Option::None(_) => Option::Some(fp16x16::acosh(self)), + Option::None(_) => Option::Some(fp16x16::acosh(*self)), } } diff --git a/src/operators/tensor/math/acosh/acosh_fp/fp16x16.cairo b/src/operators/tensor/math/acosh/acosh_fp/fp16x16.cairo index f2c519684..5d619ccba 100644 --- a/src/operators/tensor/math/acosh/acosh_fp/fp16x16.cairo +++ b/src/operators/tensor/math/acosh/acosh_fp/fp16x16.cairo @@ -9,18 +9,19 @@ use orion::numbers::fixed_point::implementations::fp16x16::core::FP16x16Impl; /// Cf: TensorTrait::acosh docstring -fn acosh(self: @Tensor) -> Tensor { +fn acosh(mut self: Tensor) -> Tensor { let mut result = ArrayTrait::new(); - let mut data = *self.data; loop { - let ele = *data.pop_front().unwrap(); - result.append(FixedTrait::acosh(ele)); - - if (data.len() == 0) { - break (); + match self.data.pop_front() { + Option::Some(item) => { + result.append((*item).acosh()); + }, + Option::None(_) => { + break; + } }; }; - return TensorTrait::::new(*self.shape, result.span(), *self.extra); + return TensorTrait::::new(self.shape, result.span(), self.extra); } diff --git a/src/operators/tensor/math/acosh/acosh_fp/fp8x23.cairo b/src/operators/tensor/math/acosh/acosh_fp/fp8x23.cairo index 64c861fb0..2aaba0682 100644 --- a/src/operators/tensor/math/acosh/acosh_fp/fp8x23.cairo +++ b/src/operators/tensor/math/acosh/acosh_fp/fp8x23.cairo @@ -9,18 +9,19 @@ use orion::numbers::fixed_point::implementations::fp8x23::core::FP8x23Impl; /// Cf: TensorTrait::acosh docstring -fn acosh(self: @Tensor) -> Tensor { +fn acosh(mut self: Tensor) -> Tensor { let mut result = ArrayTrait::new(); - let mut data = *self.data; loop { - let ele = *data.pop_front().unwrap(); - result.append(FixedTrait::acosh(ele)); - - if (data.len() == 0) { - break (); + match self.data.pop_front() { + Option::Some(item) => { + result.append((*item).acosh()); + }, + Option::None(_) => { + break; + } }; }; - return TensorTrait::::new(*self.shape, result.span(), *self.extra); + return TensorTrait::::new(self.shape, result.span(), self.extra); } diff --git a/src/operators/tensor/math/acosh/acosh_i32/core.cairo b/src/operators/tensor/math/acosh/acosh_i32/core.cairo index 4d38f9fdd..1922952bd 100644 --- a/src/operators/tensor/math/acosh/acosh_i32/core.cairo +++ b/src/operators/tensor/math/acosh/acosh_i32/core.cairo @@ -9,11 +9,11 @@ fn acosh_i32(self: @Tensor) -> Option> { match *self.extra { Option::Some(extra_params) => match extra_params.fixed_point { Option::Some(fixed_point) => match fixed_point { - FixedImpl::FP8x23(()) => Option::Some(fp8x23::acosh(self)), - FixedImpl::FP16x16(()) => 
Option::Some(fp16x16::acosh(self)), + FixedImpl::FP8x23(()) => Option::Some(fp8x23::acosh(*self)), + FixedImpl::FP16x16(()) => Option::Some(fp16x16::acosh(*self)), }, - Option::None(_) => Option::Some(fp16x16::acosh(self)), + Option::None(_) => Option::Some(fp16x16::acosh(*self)), }, - Option::None(_) => Option::Some(fp16x16::acosh(self)), + Option::None(_) => Option::Some(fp16x16::acosh(*self)), } } diff --git a/src/operators/tensor/math/acosh/acosh_i32/fp16x16.cairo b/src/operators/tensor/math/acosh/acosh_i32/fp16x16.cairo index 37894edeb..e13b27b16 100644 --- a/src/operators/tensor/math/acosh/acosh_i32/fp16x16.cairo +++ b/src/operators/tensor/math/acosh/acosh_i32/fp16x16.cairo @@ -11,19 +11,19 @@ use orion::numbers::fixed_point::implementations::fp16x16::core::FP16x16Impl; /// Cf: TensorTrait::acosh docstring -fn acosh(self: @Tensor) -> Tensor { +fn acosh(mut self: Tensor) -> Tensor { let mut result = ArrayTrait::new(); - let mut data = *self.data; loop { - let ele = *data.pop_front().unwrap(); - let val = FixedTrait::new_unscaled(ele.mag.into(), ele.sign); - result.append(FixedTrait::acosh(val)); - - if (data.len() == 0) { - break (); + match self.data.pop_front() { + Option::Some(item) => { + result.append(FixedTrait::new_unscaled(*item.mag, *item.sign).acosh()); + }, + Option::None(_) => { + break; + } }; }; - return TensorTrait::::new(*self.shape, result.span(), *self.extra); + return TensorTrait::::new(self.shape, result.span(), self.extra); } diff --git a/src/operators/tensor/math/acosh/acosh_i32/fp8x23.cairo b/src/operators/tensor/math/acosh/acosh_i32/fp8x23.cairo index a5c148020..0fb381607 100644 --- a/src/operators/tensor/math/acosh/acosh_i32/fp8x23.cairo +++ b/src/operators/tensor/math/acosh/acosh_i32/fp8x23.cairo @@ -11,19 +11,19 @@ use orion::numbers::fixed_point::implementations::fp8x23::core::FP8x23Impl; /// Cf: TensorTrait::acosh docstring -fn acosh(self: @Tensor) -> Tensor { +fn acosh(mut self: Tensor) -> Tensor { let mut result = ArrayTrait::new(); - let mut data = *self.data; loop { - let ele = *data.pop_front().unwrap(); - let val = FixedTrait::new_unscaled(ele.mag.into(), ele.sign); - result.append(FixedTrait::acosh(val)); - - if (data.len() == 0) { - break (); + match self.data.pop_front() { + Option::Some(item) => { + result.append(FixedTrait::new_unscaled(*item.mag, *item.sign).acosh()); + }, + Option::None(_) => { + break; + } }; }; - return TensorTrait::::new(*self.shape, result.span(), *self.extra); -} + return TensorTrait::::new(self.shape, result.span(), self.extra); +} \ No newline at end of file diff --git a/src/operators/tensor/math/acosh/acosh_i8/core.cairo b/src/operators/tensor/math/acosh/acosh_i8/core.cairo index 21bafc741..b8b075a0b 100644 --- a/src/operators/tensor/math/acosh/acosh_i8/core.cairo +++ b/src/operators/tensor/math/acosh/acosh_i8/core.cairo @@ -9,11 +9,11 @@ fn acosh_i8(self: @Tensor) -> Option> { match *self.extra { Option::Some(extra_params) => match extra_params.fixed_point { Option::Some(fixed_point) => match fixed_point { - FixedImpl::FP8x23(()) => Option::Some(fp8x23::acosh(self)), - FixedImpl::FP16x16(()) => Option::Some(fp16x16::acosh(self)), + FixedImpl::FP8x23(()) => Option::Some(fp8x23::acosh(*self)), + FixedImpl::FP16x16(()) => Option::Some(fp16x16::acosh(*self)), }, - Option::None(_) => Option::Some(fp16x16::acosh(self)), + Option::None(_) => Option::Some(fp16x16::acosh(*self)), }, - Option::None(_) => Option::Some(fp16x16::acosh(self)), + Option::None(_) => Option::Some(fp16x16::acosh(*self)), } } diff --git 
a/src/operators/tensor/math/acosh/acosh_i8/fp16x16.cairo b/src/operators/tensor/math/acosh/acosh_i8/fp16x16.cairo index 48d57352b..282a5a1e5 100644 --- a/src/operators/tensor/math/acosh/acosh_i8/fp16x16.cairo +++ b/src/operators/tensor/math/acosh/acosh_i8/fp16x16.cairo @@ -11,19 +11,19 @@ use orion::numbers::fixed_point::implementations::fp16x16::core::FP16x16Impl; /// Cf: TensorTrait::acosh docstring -fn acosh(self: @Tensor) -> Tensor { +fn acosh(mut self: Tensor) -> Tensor { let mut result = ArrayTrait::new(); - let mut data = *self.data; loop { - let ele = *data.pop_front().unwrap(); - let val = FixedTrait::new_unscaled(ele.mag.into(), ele.sign); - result.append(FixedTrait::acosh(val)); - - if (data.len() == 0) { - break (); + match self.data.pop_front() { + Option::Some(item) => { + result.append(FixedTrait::new_unscaled((*item.mag).into(), *item.sign).acosh()); + }, + Option::None(_) => { + break; + } }; }; - return TensorTrait::::new(*self.shape, result.span(), *self.extra); + return TensorTrait::::new(self.shape, result.span(), self.extra); } diff --git a/src/operators/tensor/math/acosh/acosh_i8/fp8x23.cairo b/src/operators/tensor/math/acosh/acosh_i8/fp8x23.cairo index 666d5063d..2df839315 100644 --- a/src/operators/tensor/math/acosh/acosh_i8/fp8x23.cairo +++ b/src/operators/tensor/math/acosh/acosh_i8/fp8x23.cairo @@ -11,19 +11,19 @@ use orion::numbers::fixed_point::implementations::fp8x23::core::FP8x23Impl; /// Cf: TensorTrait::acosh docstring -fn acosh(self: @Tensor) -> Tensor { +fn acosh(mut self: Tensor) -> Tensor { let mut result = ArrayTrait::new(); - let mut data = *self.data; loop { - let ele = *data.pop_front().unwrap(); - let val = FixedTrait::new_unscaled(ele.mag.into(), ele.sign); - result.append(FixedTrait::acosh(val)); - - if (data.len() == 0) { - break (); + match self.data.pop_front() { + Option::Some(item) => { + result.append(FixedTrait::new_unscaled((*item.mag).into(), *item.sign).acosh()); + }, + Option::None(_) => { + break; + } }; }; - return TensorTrait::::new(*self.shape, result.span(), *self.extra); + return TensorTrait::::new(self.shape, result.span(), self.extra); } diff --git a/src/operators/tensor/math/acosh/acosh_u32/core.cairo b/src/operators/tensor/math/acosh/acosh_u32/core.cairo index 7f4b6da27..dc8f4be36 100644 --- a/src/operators/tensor/math/acosh/acosh_u32/core.cairo +++ b/src/operators/tensor/math/acosh/acosh_u32/core.cairo @@ -9,11 +9,11 @@ fn acosh_u32(self: @Tensor) -> Option> { match *self.extra { Option::Some(extra_params) => match extra_params.fixed_point { Option::Some(fixed_point) => match fixed_point { - FixedImpl::FP8x23(()) => Option::Some(fp8x23::acosh(self)), - FixedImpl::FP16x16(()) => Option::Some(fp16x16::acosh(self)), + FixedImpl::FP8x23(()) => Option::Some(fp8x23::acosh(*self)), + FixedImpl::FP16x16(()) => Option::Some(fp16x16::acosh(*self)), }, - Option::None(_) => Option::Some(fp16x16::acosh(self)), + Option::None(_) => Option::Some(fp16x16::acosh(*self)), }, - Option::None(_) => Option::Some(fp16x16::acosh(self)), + Option::None(_) => Option::Some(fp16x16::acosh(*self)), } } diff --git a/src/operators/tensor/math/acosh/acosh_u32/fp16x16.cairo b/src/operators/tensor/math/acosh/acosh_u32/fp16x16.cairo index 968b76c1e..d588aa7a2 100644 --- a/src/operators/tensor/math/acosh/acosh_u32/fp16x16.cairo +++ b/src/operators/tensor/math/acosh/acosh_u32/fp16x16.cairo @@ -11,19 +11,19 @@ use orion::numbers::fixed_point::implementations::fp16x16::core::FP16x16Impl; /// Cf: TensorTrait::acosh docstring -fn acosh(self: @Tensor) -> Tensor { 
+fn acosh(mut self: Tensor) -> Tensor { let mut result = ArrayTrait::new(); - let mut data = *self.data; loop { - let ele = FixedTrait::new_unscaled((*data.pop_front().unwrap()), false); - - result.append(FixedTrait::acosh(ele)); - - if (data.len() == 0) { - break (); + match self.data.pop_front() { + Option::Some(item) => { + result.append(FixedTrait::new_unscaled(*item, false).acosh()); + }, + Option::None(_) => { + break; + } }; }; - return TensorTrait::::new(*self.shape, result.span(), *self.extra); + return TensorTrait::::new(self.shape, result.span(), self.extra); } diff --git a/src/operators/tensor/math/acosh/acosh_u32/fp8x23.cairo b/src/operators/tensor/math/acosh/acosh_u32/fp8x23.cairo index c8b82e4fc..9c7308d84 100644 --- a/src/operators/tensor/math/acosh/acosh_u32/fp8x23.cairo +++ b/src/operators/tensor/math/acosh/acosh_u32/fp8x23.cairo @@ -11,19 +11,20 @@ use orion::numbers::fixed_point::implementations::fp8x23::core::FP8x23Impl; /// Cf: TensorTrait::acosh docstring -fn acosh(self: @Tensor) -> Tensor { +fn acosh(mut self: Tensor) -> Tensor { let mut result = ArrayTrait::new(); - let mut data = *self.data; loop { - let ele = FixedTrait::new_unscaled((*data.pop_front().unwrap()), false); - - result.append(FixedTrait::acosh(ele)); - - if (data.len() == 0) { - break (); + match self.data.pop_front() { + Option::Some(item) => { + result.append(FixedTrait::new_unscaled(*item, false).acosh()); + }, + Option::None(_) => { + break; + } }; }; - return TensorTrait::::new(*self.shape, result.span(), *self.extra); + return TensorTrait::::new(self.shape, result.span(), self.extra); } + From 4b99683ec27b2fdbac06349e9d03246656fff3b5 Mon Sep 17 00:00:00 2001 From: raphaelDkhn Date: Wed, 23 Aug 2023 10:57:26 +0300 Subject: [PATCH 05/30] refactor argmax / argmin --- .../math/argmax/argmax_fp/fp16x16.cairo | 2 +- .../tensor/math/argmax/argmax_fp/fp8x23.cairo | 2 +- .../tensor/math/argmax/argmax_i32.cairo | 2 +- .../tensor/math/argmax/argmax_i8.cairo | 2 +- .../tensor/math/argmax/argmax_u32.cairo | 2 +- .../tensor/math/argmax/helpers.cairo | 45 +++++++++++------- .../math/argmin/argmin_fp/fp16x16.cairo | 2 +- .../tensor/math/argmin/argmin_fp/fp8x23.cairo | 2 +- .../tensor/math/argmin/argmin_i32.cairo | 2 +- .../tensor/math/argmin/argmin_i8.cairo | 2 +- .../tensor/math/argmin/argmin_u32.cairo | 2 +- .../tensor/math/argmin/helpers.cairo | 47 ++++++++++++------- 12 files changed, 66 insertions(+), 46 deletions(-) diff --git a/src/operators/tensor/math/argmax/argmax_fp/fp16x16.cairo b/src/operators/tensor/math/argmax/argmax_fp/fp16x16.cairo index 9110394df..c2648032a 100644 --- a/src/operators/tensor/math/argmax/argmax_fp/fp16x16.cairo +++ b/src/operators/tensor/math/argmax/argmax_fp/fp16x16.cairo @@ -28,7 +28,7 @@ fn argmax( assert(axis <= (*self.shape).len(), 'axis out of dimensions'); if (*self.shape).len() == 1 { - return find_argmax_1D(self, axis, true, select_last_index); + return find_argmax_1D(*self, axis, true, select_last_index); } let mut output_data = ArrayTrait::new(); diff --git a/src/operators/tensor/math/argmax/argmax_fp/fp8x23.cairo b/src/operators/tensor/math/argmax/argmax_fp/fp8x23.cairo index 350228c8e..d6fab9329 100644 --- a/src/operators/tensor/math/argmax/argmax_fp/fp8x23.cairo +++ b/src/operators/tensor/math/argmax/argmax_fp/fp8x23.cairo @@ -28,7 +28,7 @@ fn argmax( assert(axis <= (*self.shape).len(), 'axis out of dimensions'); if (*self.shape).len() == 1 { - return find_argmax_1D(self, axis, true, select_last_index); + return find_argmax_1D(*self, axis, true, 
select_last_index); } let mut output_data = ArrayTrait::new(); diff --git a/src/operators/tensor/math/argmax/argmax_i32.cairo b/src/operators/tensor/math/argmax/argmax_i32.cairo index 70123adf5..17b91dc73 100644 --- a/src/operators/tensor/math/argmax/argmax_i32.cairo +++ b/src/operators/tensor/math/argmax/argmax_i32.cairo @@ -25,7 +25,7 @@ fn argmax( assert(axis <= (*self.shape).len(), 'axis out of dimensions'); if (*self.shape).len() == 1 { - return find_argmax_1D(self, axis, true, select_last_index); + return find_argmax_1D(*self, axis, true, select_last_index); } let mut output_data = ArrayTrait::new(); diff --git a/src/operators/tensor/math/argmax/argmax_i8.cairo b/src/operators/tensor/math/argmax/argmax_i8.cairo index 147c9d447..6d1f63b37 100644 --- a/src/operators/tensor/math/argmax/argmax_i8.cairo +++ b/src/operators/tensor/math/argmax/argmax_i8.cairo @@ -25,7 +25,7 @@ fn argmax( assert(axis <= (*self.shape).len(), 'axis out of dimensions'); if (*self.shape).len() == 1 { - return find_argmax_1D(self, axis, true, select_last_index); + return find_argmax_1D(*self, axis, true, select_last_index); } let mut output_data = ArrayTrait::new(); diff --git a/src/operators/tensor/math/argmax/argmax_u32.cairo b/src/operators/tensor/math/argmax/argmax_u32.cairo index 80004b660..48d22725b 100644 --- a/src/operators/tensor/math/argmax/argmax_u32.cairo +++ b/src/operators/tensor/math/argmax/argmax_u32.cairo @@ -24,7 +24,7 @@ fn argmax( assert(axis <= (*self.shape).len(), 'axis out of dimensions'); if (*self.shape).len() == 1 { - return find_argmax_1D(self, axis, true, select_last_index); + return find_argmax_1D(*self, axis, true, select_last_index); } let mut output_data = ArrayTrait::new(); diff --git a/src/operators/tensor/math/argmax/helpers.cairo b/src/operators/tensor/math/argmax/helpers.cairo index 3b8a9ee4f..f13d52698 100644 --- a/src/operators/tensor/math/argmax/helpers.cairo +++ b/src/operators/tensor/math/argmax/helpers.cairo @@ -28,28 +28,37 @@ fn find_argmax_1D< impl TCopy: Copy, impl TDrop: Drop, >( - input: @Tensor, axis: usize, keepdims: bool, select_last_index: bool + mut input: Tensor, axis: usize, keepdims: bool, select_last_index: bool ) -> Tensor { let mut output_data = ArrayTrait::::new(); - let mut data = *input.data; - let mut max = *data.pop_front().unwrap(); - let mut max_index = 0_usize; - let mut count = 0_usize; - loop { - if data.len() == 0 { - break (); - }; + let mut max = match input.data.pop_front() { + Option::Some(item) => *item, + Option::None(_) => { + return TensorTrait::::new( + reduce_output_shape(input.shape, axis, keepdims), output_data.span(), input.extra + ); + } + }; + let mut max_index = 0; + let mut count = 0; - count += 1; + loop { + match input.data.pop_front() { + Option::Some(item) => { + count += 1; - let current_value = *data.pop_front().unwrap(); - if current_value > max { - max = current_value; - max_index = count; - } else { - if select_last_index && current_value == max { - max_index = count; + if *item > max { + max = *item; + max_index = count; + } else { + if select_last_index && item == @max { + max_index = count; + } + }; + }, + Option::None(_) => { + break; } }; }; @@ -57,7 +66,7 @@ fn find_argmax_1D< output_data.append(max_index); return TensorTrait::::new( - reduce_output_shape(*input.shape, axis, keepdims), output_data.span(), *input.extra + reduce_output_shape(input.shape, axis, keepdims), output_data.span(), input.extra ); } diff --git a/src/operators/tensor/math/argmin/argmin_fp/fp16x16.cairo 
b/src/operators/tensor/math/argmin/argmin_fp/fp16x16.cairo index 29212ab6b..0f620aa6a 100644 --- a/src/operators/tensor/math/argmin/argmin_fp/fp16x16.cairo +++ b/src/operators/tensor/math/argmin/argmin_fp/fp16x16.cairo @@ -28,7 +28,7 @@ fn argmin( assert(axis <= (*self.shape).len(), 'axis out of dimensions'); if (*self.shape).len() == 1 { - return find_argmin_1D(self, axis, true, select_last_index); + return find_argmin_1D(*self, axis, true, select_last_index); } let mut output_data = ArrayTrait::new(); diff --git a/src/operators/tensor/math/argmin/argmin_fp/fp8x23.cairo b/src/operators/tensor/math/argmin/argmin_fp/fp8x23.cairo index f4c96602c..ff567cec3 100644 --- a/src/operators/tensor/math/argmin/argmin_fp/fp8x23.cairo +++ b/src/operators/tensor/math/argmin/argmin_fp/fp8x23.cairo @@ -27,7 +27,7 @@ fn argmin( assert(axis <= (*self.shape).len(), 'axis out of dimensions'); if (*self.shape).len() == 1 { - return find_argmin_1D(self, axis, true, select_last_index); + return find_argmin_1D(*self, axis, true, select_last_index); } let mut output_data = ArrayTrait::new(); diff --git a/src/operators/tensor/math/argmin/argmin_i32.cairo b/src/operators/tensor/math/argmin/argmin_i32.cairo index 50ebd52e3..60448455b 100644 --- a/src/operators/tensor/math/argmin/argmin_i32.cairo +++ b/src/operators/tensor/math/argmin/argmin_i32.cairo @@ -25,7 +25,7 @@ fn argmin( assert(axis <= (*self.shape).len(), 'axis out of dimensions'); if (*self.shape).len() == 1 { - return find_argmin_1D(self, axis, true, select_last_index); + return find_argmin_1D(*self, axis, true, select_last_index); } let mut output_data = ArrayTrait::new(); diff --git a/src/operators/tensor/math/argmin/argmin_i8.cairo b/src/operators/tensor/math/argmin/argmin_i8.cairo index f52c7187d..fc4585b6b 100644 --- a/src/operators/tensor/math/argmin/argmin_i8.cairo +++ b/src/operators/tensor/math/argmin/argmin_i8.cairo @@ -25,7 +25,7 @@ fn argmin( assert(axis <= (*self.shape).len(), 'axis out of dimensions'); if (*self.shape).len() == 1 { - return find_argmin_1D(self, axis, true, select_last_index); + return find_argmin_1D(*self, axis, true, select_last_index); } let mut output_data = ArrayTrait::new(); diff --git a/src/operators/tensor/math/argmin/argmin_u32.cairo b/src/operators/tensor/math/argmin/argmin_u32.cairo index 0db767b83..0ef4e9a13 100644 --- a/src/operators/tensor/math/argmin/argmin_u32.cairo +++ b/src/operators/tensor/math/argmin/argmin_u32.cairo @@ -24,7 +24,7 @@ fn argmin( assert(axis <= (*self.shape).len(), 'axis out of dimensions'); if (*self.shape).len() == 1 { - return find_argmin_1D(self, axis, true, select_last_index); + return find_argmin_1D(*self, axis, true, select_last_index); } let mut output_data = ArrayTrait::new(); diff --git a/src/operators/tensor/math/argmin/helpers.cairo b/src/operators/tensor/math/argmin/helpers.cairo index ea4317fa2..895213d5c 100644 --- a/src/operators/tensor/math/argmin/helpers.cairo +++ b/src/operators/tensor/math/argmin/helpers.cairo @@ -28,36 +28,47 @@ fn find_argmin_1D< impl TCopy: Copy, impl TDrop: Drop, >( - input: @Tensor, axis: usize, keepdims: bool, select_last_index: bool + mut input: Tensor, axis: usize, keepdims: bool, select_last_index: bool ) -> Tensor { let mut output_data = ArrayTrait::::new(); - let mut data = *input.data; - let mut min = *data.pop_front().unwrap(); - let mut min_index = 0_usize; - let mut count = 0_usize; - loop { - if data.len() == 0 { - break (); - }; + let mut min = match input.data.pop_front() { + Option::Some(item) => *item, + Option::None(_) => { + return 
TensorTrait::::new( + reduce_output_shape(input.shape, axis, keepdims), output_data.span(), input.extra + ); + } + }; + let mut min_index = 0; + let mut count = 0; - count += 1; + loop { + match input.data.pop_front() { + Option::Some(item) => { + count += 1; - let current_value = *data.pop_front().unwrap(); - if current_value < min { - min = current_value; - min_index = count; - } else { - if select_last_index && current_value == min { - min_index = count; + if *item < min { + min = *item; + min_index = count; + } else { + if select_last_index && item == @min { + min_index = count; + } + }; + }, + Option::None(_) => { + break; } }; }; + + output_data.append(min_index); return TensorTrait::::new( - reduce_output_shape(*input.shape, axis, keepdims), output_data.span(), *input.extra + reduce_output_shape(input.shape, axis, keepdims), output_data.span(), input.extra ); } From 70be9e4099612633c5d0badda84eb6df39ef9336 Mon Sep 17 00:00:00 2001 From: raphaelDkhn Date: Wed, 23 Aug 2023 12:05:35 +0300 Subject: [PATCH 06/30] refactor helpers --- Scarb.toml | 3 + src/operators/tensor/helpers.cairo | 180 +++++++++++++++-------------- 2 files changed, 99 insertions(+), 84 deletions(-) diff --git a/Scarb.toml b/Scarb.toml index 6592e901f..3aa1fcb3c 100644 --- a/Scarb.toml +++ b/Scarb.toml @@ -5,6 +5,9 @@ version = "0.1.0" description = "ONNX Runtime in Cairo for verifiable ML inference using STARK" homepage = "https://github.com/gizatechxyz/orion" +[dependencies] +alexandria_data_structures = { git = "https://github.com/keep-starknet-strange/alexandria.git" } + [scripts] sierra = "cairo-compile . -r" docgen = "cd docgen && cargo run" diff --git a/src/operators/tensor/helpers.cairo b/src/operators/tensor/helpers.cairo index 74c3d49f5..6bd23d575 100644 --- a/src/operators/tensor/helpers.cairo +++ b/src/operators/tensor/helpers.cairo @@ -2,6 +2,8 @@ use array::ArrayTrait; use array::SpanTrait; use option::OptionTrait; +use alexandria_data_structures::array_ext::ArrayTraitExt; + use orion::utils::u32_max; use orion::operators::tensor::core::stride; @@ -19,11 +21,14 @@ fn len_from_shape(mut shape: Span) -> usize { let mut result: usize = 1; loop { - if shape.len() == 0 { - break (); - } - - result *= *shape.pop_front().unwrap(); + match shape.pop_front() { + Option::Some(item) => { + result *= *item; + }, + Option::None(_) => { + break; + } + }; }; return result; @@ -53,17 +58,19 @@ fn check_compatibility(mut shape_1: Span, mut shape_2: Span) { assert(shape_1.len() == shape_2.len(), 'tensors shape must match'); loop { - if shape_1.len() == 0 { - break (); - } - - let shape_1_val = *shape_1.pop_front().unwrap(); - let shape_2_val = *shape_2.pop_front().unwrap(); - - assert( - shape_1_val == shape_2_val || shape_1_val == 1 || shape_2_val == 1, - 'tensors shape must match' - ); + match shape_1.pop_front() { + Option::Some(shape_1_val) => { + let shape_2_val = *shape_2.pop_front().unwrap(); + + assert( + *shape_1_val == shape_2_val || *shape_1_val == 1 || shape_2_val == 1, + 'tensors shape must match' + ); + }, + Option::None(_) => { + break; + } + }; }; } @@ -85,15 +92,17 @@ fn broadcast_index_mapping(mut shape: Span, mut indices: Span) -> let mut stride = stride(shape); loop { - let indices_val = *indices.pop_front().unwrap(); - let shape_val = *shape.pop_front().unwrap(); - let stride_val = *stride.pop_front().unwrap(); - - let index = (indices_val % shape_val) * stride_val; - result += index; - - if shape.len() == 0 { - break (); + match shape.pop_front() { + Option::Some(shape_val) => { + let indices_val = 
*indices.pop_front().unwrap(); + let stride_val = *stride.pop_front().unwrap(); + + let index = (indices_val % *shape_val) * stride_val; + result += index; + }, + Option::None(_) => { + break; + } }; }; @@ -120,21 +129,22 @@ fn reduce_output_shape(mut input_shape: Span, axis: usize, keepdims: bool let mut n: usize = 0; loop { - if input_shape.len() == 0 { - break (); - } - - let current_dim = *input_shape.pop_front().unwrap(); - - if n == axis { - if keepdims { - output_shape.append(1); + match input_shape.pop_front() { + Option::Some(current_dim) => { + if n == axis { + if keepdims { + output_shape.append(1); + } + } else { + output_shape.append(*current_dim); + } + + n += 1; + }, + Option::None(_) => { + break; } - } else { - output_shape.append(current_dim); - } - - n += 1; + }; }; return output_shape.span(); @@ -159,14 +169,16 @@ fn permutation_output_shape(input_shape: Span, mut axes: Span) -> assert(input_shape.len() == axes_len, 'input_shape/indices len unequal'); let mut output_shape = ArrayTrait::new(); - let mut axis: usize = 0; - loop { - if axis == axes_len { - break (); - } - output_shape.append(*input_shape[*axes.pop_front().unwrap()]); - axis += 1; + loop { + match axes.pop_front() { + Option::Some(item) => { + output_shape.append(*input_shape[*item]); + }, + Option::None(_) => { + break; + } + }; }; return output_shape.span(); @@ -185,15 +197,14 @@ fn permutation_output_shape(input_shape: Span, mut axes: Span) -> /// /// # Returns /// * A Span of usize representing the combined indices. -fn combine_indices(output_indices: Span, axis_index: usize, axis: usize) -> Span { +fn combine_indices(mut output_indices: Span, axis_index: usize, axis: usize) -> Span { assert(axis <= output_indices.len(), 'axis value is out of range'); let mut result = ArrayTrait::new(); - let output_indices_len = output_indices.len(); let mut n: usize = 0; loop { - if n > output_indices_len { + if n > output_indices.len() { break (); } @@ -226,19 +237,22 @@ fn combine_indices(output_indices: Span, axis_index: usize, axis: usize) /// * A usize representing the index of the target axis in the given axes array. 
fn find_axis(mut axes: Span, target_axis: usize) -> usize { assert(target_axis < axes.len(), 'target_axis is out of range'); - let mut axis: usize = 0; - loop { - if axes.len() == 0 { - break (); - } - let current_axis = *axes.pop_front().unwrap(); - if current_axis == target_axis { - break (); - } - axis += 1; + loop { + match axes.pop_front() { + Option::Some(item) => { + if *item == target_axis { + break (); + } + axis += 1; + }, + Option::None(_) => { + break; + } + }; }; + return axis; } @@ -257,40 +271,38 @@ fn find_axis(mut axes: Span, target_axis: usize) -> usize { fn broadcast_shape(mut shape1: Span, mut shape2: Span) -> Span { check_compatibility(shape1, shape2); let mut result: Array = ArrayTrait::new(); - let mut temp_result = ArrayTrait::new(); loop { - // Get dimensions from shape1 and shape2, or use 1 if there are no more dimensions - let dim1 = if shape1.len() > 0 { - *shape1.pop_back().unwrap() - } else { - 1 + let mut dim1 = 1; + let mut dim2 = 1; + + match shape1.pop_back() { + Option::Some(item) => { + dim1 = *item; + }, + Option::None(_) => { + if shape1.len() == 0 && shape2.len() == 0 { + break (); + }; + } }; - let dim2 = if shape2.len() > 0 { - *shape2.pop_back().unwrap() - } else { - 1 + match shape2.pop_back() { + Option::Some(item) => { + dim2 = *item; + }, + Option::None(_) => { + if shape1.len() == 0 && shape2.len() == 0 { + break (); + }; + } }; let broadcasted_dim = u32_max(dim1, dim2); - temp_result.append(broadcasted_dim); - - if shape1.len() == 0 && shape2.len() == 0 { - break (); - }; + result.append(broadcasted_dim); }; - // Copy the broadcasted dimensions to the result array in the correct order - let mut temp_result: Span = temp_result.span(); - loop { - if temp_result.len() == 0 { - break (); - } - result.append(*temp_result.pop_back().unwrap()); - }; - - return result.span(); + return result.reverse().span(); } From 44e0d09f65d8d6e4cb71d01f7408479f819e2f85 Mon Sep 17 00:00:00 2001 From: raphaelDkhn Date: Wed, 23 Aug 2023 12:17:25 +0300 Subject: [PATCH 07/30] refactor asin asinh --- .../tensor/math/asin/asin_fp/core.cairo | 8 +++--- .../tensor/math/asin/asin_fp/fp16x16.cairo | 17 +++++++------ .../tensor/math/asin/asin_fp/fp8x23.cairo | 17 +++++++------ .../tensor/math/asinh/asinh_fp/core.cairo | 8 +++--- .../tensor/math/asinh/asinh_fp/fp16x16.cairo | 17 +++++++------ .../tensor/math/asinh/asinh_fp/fp8x23.cairo | 17 +++++++------ .../tensor/math/asinh/asinh_i32/core.cairo | 8 +++--- .../tensor/math/asinh/asinh_i32/fp16x16.cairo | 24 +++++++----------- .../tensor/math/asinh/asinh_i32/fp8x23.cairo | 25 ++++++++----------- .../tensor/math/asinh/asinh_i8/core.cairo | 8 +++--- .../tensor/math/asinh/asinh_i8/fp16x16.cairo | 25 ++++++++----------- .../tensor/math/asinh/asinh_i8/fp8x23.cairo | 25 ++++++++----------- .../tensor/math/asinh/asinh_u32/core.cairo | 8 +++--- .../tensor/math/asinh/asinh_u32/fp16x16.cairo | 18 ++++++------- .../tensor/math/asinh/asinh_u32/fp8x23.cairo | 19 +++++++------- 15 files changed, 114 insertions(+), 130 deletions(-) diff --git a/src/operators/tensor/math/asin/asin_fp/core.cairo b/src/operators/tensor/math/asin/asin_fp/core.cairo index 17ab8c2c1..89767e9bc 100644 --- a/src/operators/tensor/math/asin/asin_fp/core.cairo +++ b/src/operators/tensor/math/asin/asin_fp/core.cairo @@ -8,11 +8,11 @@ fn asin(self: @Tensor) -> Option> { match *self.extra { Option::Some(extra_params) => match extra_params.fixed_point { Option::Some(fixed_point) => match fixed_point { - FixedImpl::FP8x23(()) => Option::Some(fp8x23::asin(self)), - 
---
 .../tensor/math/asin/asin_fp/core.cairo       |  8 +++---
 .../tensor/math/asin/asin_fp/fp16x16.cairo    | 17 +++++++------
 .../tensor/math/asin/asin_fp/fp8x23.cairo     | 17 +++++++------
 .../tensor/math/asinh/asinh_fp/core.cairo     |  8 +++---
 .../tensor/math/asinh/asinh_fp/fp16x16.cairo  | 17 +++++++------
 .../tensor/math/asinh/asinh_fp/fp8x23.cairo   | 17 +++++++------
 .../tensor/math/asinh/asinh_i32/core.cairo    |  8 +++---
 .../tensor/math/asinh/asinh_i32/fp16x16.cairo | 24 +++++++-----------
 .../tensor/math/asinh/asinh_i32/fp8x23.cairo  | 25 ++++++++-----------
 .../tensor/math/asinh/asinh_i8/core.cairo     |  8 +++---
 .../tensor/math/asinh/asinh_i8/fp16x16.cairo  | 25 ++++++++-----------
 .../tensor/math/asinh/asinh_i8/fp8x23.cairo   | 25 ++++++++-----------
 .../tensor/math/asinh/asinh_u32/core.cairo    |  8 +++---
 .../tensor/math/asinh/asinh_u32/fp16x16.cairo | 18 ++++++-------
 .../tensor/math/asinh/asinh_u32/fp8x23.cairo  | 19 +++++++-------
 15 files changed, 114 insertions(+), 130 deletions(-)

diff --git a/src/operators/tensor/math/asin/asin_fp/core.cairo b/src/operators/tensor/math/asin/asin_fp/core.cairo
index 17ab8c2c1..89767e9bc 100644
--- a/src/operators/tensor/math/asin/asin_fp/core.cairo
+++ b/src/operators/tensor/math/asin/asin_fp/core.cairo
@@ -8,11 +8,11 @@ fn asin(self: @Tensor<FixedType>) -> Option<Tensor<FixedType>> {
     match *self.extra {
         Option::Some(extra_params) => match extra_params.fixed_point {
             Option::Some(fixed_point) => match fixed_point {
-                FixedImpl::FP8x23(()) => Option::Some(fp8x23::asin(self)),
-                FixedImpl::FP16x16(()) => Option::Some(fp16x16::asin(self)),
+                FixedImpl::FP8x23(()) => Option::Some(fp8x23::asin(*self)),
+                FixedImpl::FP16x16(()) => Option::Some(fp16x16::asin(*self)),
             },
-            Option::None(_) => Option::Some(fp16x16::asin(self)),
+            Option::None(_) => Option::Some(fp16x16::asin(*self)),
         },
-        Option::None(_) => Option::Some(fp16x16::asin(self)),
+        Option::None(_) => Option::Some(fp16x16::asin(*self)),
     }
 }
diff --git a/src/operators/tensor/math/asin/asin_fp/fp16x16.cairo b/src/operators/tensor/math/asin/asin_fp/fp16x16.cairo
index 50dc2cc95..f7cce7502 100644
--- a/src/operators/tensor/math/asin/asin_fp/fp16x16.cairo
+++ b/src/operators/tensor/math/asin/asin_fp/fp16x16.cairo
@@ -9,18 +9,19 @@ use orion::numbers::fixed_point::implementations::fp16x16::core::FP16x16Impl;
 
 
 /// Cf: TensorTrait::asin docstring
-fn asin(self: @Tensor<FixedType>) -> Tensor<FixedType> {
+fn asin(mut self: Tensor<FixedType>) -> Tensor<FixedType> {
     let mut result = ArrayTrait::new();
-    let mut data = *self.data;
 
     loop {
-        let ele = *data.pop_front().unwrap();
-        result.append(FixedTrait::asin(ele));
-
-        if (data.len() == 0) {
-            break ();
+        match self.data.pop_front() {
+            Option::Some(item) => {
+                result.append((*item).asin());
+            },
+            Option::None(_) => {
+                break;
+            }
         };
     };
 
-    return TensorTrait::<FixedType>::new(*self.shape, result.span(), *self.extra);
+    return TensorTrait::<FixedType>::new(self.shape, result.span(), self.extra);
 }
diff --git a/src/operators/tensor/math/asin/asin_fp/fp8x23.cairo b/src/operators/tensor/math/asin/asin_fp/fp8x23.cairo
index 9c2b9c187..80db7fbd6 100644
--- a/src/operators/tensor/math/asin/asin_fp/fp8x23.cairo
+++ b/src/operators/tensor/math/asin/asin_fp/fp8x23.cairo
@@ -9,18 +9,19 @@ use orion::numbers::fixed_point::implementations::fp8x23::core::FP8x23Impl;
 
 
 /// Cf: TensorTrait::asin docstring
-fn asin(self: @Tensor<FixedType>) -> Tensor<FixedType> {
+fn asin(mut self: Tensor<FixedType>) -> Tensor<FixedType> {
     let mut result = ArrayTrait::new();
-    let mut data = *self.data;
 
     loop {
-        let ele = *data.pop_front().unwrap();
-        result.append(FixedTrait::asin(ele));
-
-        if (data.len() == 0) {
-            break ();
+        match self.data.pop_front() {
+            Option::Some(item) => {
+                result.append((*item).asin());
+            },
+            Option::None(_) => {
+                break;
+            }
         };
     };
 
-    return TensorTrait::<FixedType>::new(*self.shape, result.span(), *self.extra);
+    return TensorTrait::<FixedType>::new(self.shape, result.span(), self.extra);
 }
diff --git a/src/operators/tensor/math/asinh/asinh_fp/core.cairo b/src/operators/tensor/math/asinh/asinh_fp/core.cairo
index c6652aed6..31bd8ae0d 100644
--- a/src/operators/tensor/math/asinh/asinh_fp/core.cairo
+++ b/src/operators/tensor/math/asinh/asinh_fp/core.cairo
@@ -8,11 +8,11 @@ fn asinh(self: @Tensor<FixedType>) -> Option<Tensor<FixedType>> {
     match *self.extra {
         Option::Some(extra_params) => match extra_params.fixed_point {
             Option::Some(fixed_point) => match fixed_point {
-                FixedImpl::FP8x23(()) => Option::Some(fp8x23::asinh(self)),
-                FixedImpl::FP16x16(()) => Option::Some(fp16x16::asinh(self)),
+                FixedImpl::FP8x23(()) => Option::Some(fp8x23::asinh(*self)),
+                FixedImpl::FP16x16(()) => Option::Some(fp16x16::asinh(*self)),
             },
-            Option::None(_) => Option::Some(fp16x16::asinh(self)),
+            Option::None(_) => Option::Some(fp16x16::asinh(*self)),
         },
-        Option::None(_) => Option::Some(fp16x16::asinh(self)),
+        Option::None(_) => Option::Some(fp16x16::asinh(*self)),
     }
 }
diff --git a/src/operators/tensor/math/asinh/asinh_fp/fp16x16.cairo b/src/operators/tensor/math/asinh/asinh_fp/fp16x16.cairo
index d030efc02..a040ced67 100644
--- a/src/operators/tensor/math/asinh/asinh_fp/fp16x16.cairo
+++ b/src/operators/tensor/math/asinh/asinh_fp/fp16x16.cairo
@@ -9,18 +9,19 @@ use orion::numbers::fixed_point::implementations::fp16x16::core::FP16x16Impl;
 
 
 /// Cf: TensorTrait::asinh docstring
-fn asinh(self: @Tensor<FixedType>) -> Tensor<FixedType> {
+fn asinh(mut self: Tensor<FixedType>) -> Tensor<FixedType> {
     let mut result = ArrayTrait::new();
-    let mut data = *self.data;
 
     loop {
-        let ele = *data.pop_front().unwrap();
-        result.append(FixedTrait::asinh(ele));
-
-        if (data.len() == 0) {
-            break ();
+        match self.data.pop_front() {
+            Option::Some(item) => {
+                result.append((*item).asinh());
+            },
+            Option::None(_) => {
+                break;
+            }
         };
     };
 
-    return TensorTrait::<FixedType>::new(*self.shape, result.span(), *self.extra);
+    return TensorTrait::<FixedType>::new(self.shape, result.span(), self.extra);
 }
diff --git a/src/operators/tensor/math/asinh/asinh_fp/fp8x23.cairo b/src/operators/tensor/math/asinh/asinh_fp/fp8x23.cairo
index 7a4f0961a..0c72347f3 100644
--- a/src/operators/tensor/math/asinh/asinh_fp/fp8x23.cairo
+++ b/src/operators/tensor/math/asinh/asinh_fp/fp8x23.cairo
@@ -9,18 +9,19 @@ use orion::numbers::fixed_point::implementations::fp8x23::core::FP8x23Impl;
 
 
 /// Cf: TensorTrait::asinh docstring
-fn asinh(self: @Tensor<FixedType>) -> Tensor<FixedType> {
+fn asinh(mut self: Tensor<FixedType>) -> Tensor<FixedType> {
     let mut result = ArrayTrait::new();
-    let mut data = *self.data;
 
     loop {
-        let ele = *data.pop_front().unwrap();
-        result.append(FixedTrait::asinh(ele));
-
-        if (data.len() == 0) {
-            break ();
+        match self.data.pop_front() {
+            Option::Some(item) => {
+                result.append((*item).asinh());
+            },
+            Option::None(_) => {
+                break;
+            }
         };
     };
 
-    return TensorTrait::<FixedType>::new(*self.shape, result.span(), *self.extra);
+    return TensorTrait::<FixedType>::new(self.shape, result.span(), self.extra);
 }
diff --git a/src/operators/tensor/math/asinh/asinh_i32/core.cairo b/src/operators/tensor/math/asinh/asinh_i32/core.cairo
index 36800ec6f..db7728ee7 100644
--- a/src/operators/tensor/math/asinh/asinh_i32/core.cairo
+++ b/src/operators/tensor/math/asinh/asinh_i32/core.cairo
@@ -9,11 +9,11 @@ fn asinh_i32(self: @Tensor<i32>) -> Option<Tensor<FixedType>> {
     match *self.extra {
         Option::Some(extra_params) => match extra_params.fixed_point {
             Option::Some(fixed_point) => match fixed_point {
-                FixedImpl::FP8x23(()) => Option::Some(fp8x23::asinh(self)),
-                FixedImpl::FP16x16(()) => Option::Some(fp16x16::asinh(self)),
+                FixedImpl::FP8x23(()) => Option::Some(fp8x23::asinh(*self)),
+                FixedImpl::FP16x16(()) => Option::Some(fp16x16::asinh(*self)),
             },
-            Option::None(_) => Option::Some(fp16x16::asinh(self)),
+            Option::None(_) => Option::Some(fp16x16::asinh(*self)),
         },
-        Option::None(_) => Option::Some(fp16x16::asinh(self)),
+        Option::None(_) => Option::Some(fp16x16::asinh(*self)),
     }
 }
diff --git a/src/operators/tensor/math/asinh/asinh_i32/fp16x16.cairo b/src/operators/tensor/math/asinh/asinh_i32/fp16x16.cairo
index fe1d5924d..9db346fe5 100644
--- a/src/operators/tensor/math/asinh/asinh_i32/fp16x16.cairo
+++ b/src/operators/tensor/math/asinh/asinh_i32/fp16x16.cairo
@@ -11,25 +11,19 @@ use orion::numbers::fixed_point::implementations::fp16x16::core::FP16x16Impl;
 
 
 /// Cf: TensorTrait::asinh docstring
-fn asinh(self: @Tensor<i32>) -> Tensor<FixedType> {
+fn asinh(mut self: Tensor<i32>) -> Tensor<FixedType> {
     let mut result = ArrayTrait::new();
-    let mut data = *self.data;
 
     loop {
-        let ele = *data.pop_front().unwrap();
-
-        if ele.sign == true {
-            let ele = FixedTrait::new_unscaled(ele.mag.into(), ele.sign);
-            result.append(FixedTrait::asinh(ele))
-        } else {
-            let ele = FixedTrait::new_unscaled(ele.mag.into(), ele.sign);
-            result.append(FixedTrait::asinh(ele))
-        }
-
-        if (data.len() == 0) {
-            break ();
+        match self.data.pop_front() {
+            Option::Some(item) => {
+                result.append(FixedTrait::new_unscaled(*item.mag, false).asinh());
+            },
+            Option::None(_) => {
+                break;
+            }
         };
     };
 
-    return TensorTrait::<FixedType>::new(*self.shape, result.span(), *self.extra);
+    return TensorTrait::<FixedType>::new(self.shape, result.span(), self.extra);
 }
diff --git a/src/operators/tensor/math/asinh/asinh_i32/fp8x23.cairo b/src/operators/tensor/math/asinh/asinh_i32/fp8x23.cairo
index 79c67e219..0cda4f4de 100644
--- a/src/operators/tensor/math/asinh/asinh_i32/fp8x23.cairo
+++ b/src/operators/tensor/math/asinh/asinh_i32/fp8x23.cairo
@@ -11,25 +11,20 @@ use orion::numbers::fixed_point::implementations::fp8x23::core::FP8x23Impl;
 
 
 /// Cf: TensorTrait::asinh docstring
-fn asinh(self: @Tensor<i32>) -> Tensor<FixedType> {
+fn asinh(mut self: Tensor<i32>) -> Tensor<FixedType> {
     let mut result = ArrayTrait::new();
-    let mut data = *self.data;
 
     loop {
-        let ele = *data.pop_front().unwrap();
-
-        if ele.sign == true {
-            let ele = FixedTrait::new_unscaled(ele.mag.into(), ele.sign);
-            result.append(FixedTrait::asinh(ele))
-        } else {
-            let ele = FixedTrait::new_unscaled(ele.mag.into(), ele.sign);
-            result.append(FixedTrait::asinh(ele))
-        }
-
-        if (data.len() == 0) {
-            break ();
+        match self.data.pop_front() {
+            Option::Some(item) => {
+                result.append(FixedTrait::new_unscaled(*item.mag, false).asinh());
+            },
+            Option::None(_) => {
+                break;
+            }
         };
     };
 
-    return TensorTrait::<FixedType>::new(*self.shape, result.span(), *self.extra);
+    return TensorTrait::<FixedType>::new(self.shape, result.span(), self.extra);
 }
+
diff --git a/src/operators/tensor/math/asinh/asinh_i8/core.cairo b/src/operators/tensor/math/asinh/asinh_i8/core.cairo
index e654303bd..0215e9a86 100644
--- a/src/operators/tensor/math/asinh/asinh_i8/core.cairo
+++ b/src/operators/tensor/math/asinh/asinh_i8/core.cairo
@@ -9,11 +9,11 @@ fn asinh_i8(self: @Tensor<i8>) -> Option<Tensor<FixedType>> {
     match *self.extra {
         Option::Some(extra_params) => match extra_params.fixed_point {
             Option::Some(fixed_point) => match fixed_point {
-                FixedImpl::FP8x23(()) => Option::Some(fp8x23::asinh(self)),
-                FixedImpl::FP16x16(()) => Option::Some(fp16x16::asinh(self)),
+                FixedImpl::FP8x23(()) => Option::Some(fp8x23::asinh(*self)),
+                FixedImpl::FP16x16(()) => Option::Some(fp16x16::asinh(*self)),
             },
-            Option::None(_) => Option::Some(fp16x16::asinh(self)),
+            Option::None(_) => Option::Some(fp16x16::asinh(*self)),
         },
-        Option::None(_) => Option::Some(fp16x16::asinh(self)),
+        Option::None(_) => Option::Some(fp16x16::asinh(*self)),
     }
 }
diff --git a/src/operators/tensor/math/asinh/asinh_i8/fp16x16.cairo b/src/operators/tensor/math/asinh/asinh_i8/fp16x16.cairo
index bff67b831..5feecccf6 100644
--- a/src/operators/tensor/math/asinh/asinh_i8/fp16x16.cairo
+++ b/src/operators/tensor/math/asinh/asinh_i8/fp16x16.cairo
@@ -11,25 +11,20 @@ use orion::numbers::fixed_point::implementations::fp16x16::core::FP16x16Impl;
 
 
 /// Cf: TensorTrait::asinh docstring
-fn asinh(self: @Tensor<i8>) -> Tensor<FixedType> {
+fn asinh(mut self: Tensor<i8>) -> Tensor<FixedType> {
     let mut result = ArrayTrait::new();
-    let mut data = *self.data;
 
     loop {
-        let ele = *data.pop_front().unwrap();
-
-        if ele.sign == true {
-            let ele = FixedTrait::new_unscaled(ele.mag.into(), ele.sign);
-            result.append(FixedTrait::asinh(ele))
-        } else {
-            let ele = FixedTrait::new_unscaled(ele.mag.into(), ele.sign);
-            result.append(FixedTrait::asinh(ele))
-        }
-
-        if (data.len() == 0) {
-            break ();
+        match self.data.pop_front() {
+            Option::Some(item) => {
+                result.append(FixedTrait::new_unscaled((*item.mag).into(), false).asinh());
+            },
+            Option::None(_) => {
+                break;
+            }
         };
     };
 
-    return TensorTrait::<FixedType>::new(*self.shape, result.span(), *self.extra);
+    return TensorTrait::<FixedType>::new(self.shape, result.span(), self.extra);
 }
+
diff --git a/src/operators/tensor/math/asinh/asinh_i8/fp8x23.cairo b/src/operators/tensor/math/asinh/asinh_i8/fp8x23.cairo
index 89f26b00b..58ac9f349 100644
--- a/src/operators/tensor/math/asinh/asinh_i8/fp8x23.cairo
+++ b/src/operators/tensor/math/asinh/asinh_i8/fp8x23.cairo
@@ -11,25 +11,20 @@ use orion::numbers::fixed_point::implementations::fp8x23::core::FP8x23Impl;
 
 
 /// Cf: TensorTrait::asinh docstring
-fn asinh(self: @Tensor<i8>) -> Tensor<FixedType> {
+fn asinh(mut self: Tensor<i8>) -> Tensor<FixedType> {
     let mut result = ArrayTrait::new();
-    let mut data = *self.data;
 
     loop {
-        let ele = *data.pop_front().unwrap();
-
-        if ele.sign == true {
-            let ele = FixedTrait::new_unscaled(ele.mag.into(), ele.sign);
-            result.append(FixedTrait::asinh(ele))
-        } else {
-            let ele = FixedTrait::new_unscaled(ele.mag.into(), ele.sign);
-            result.append(FixedTrait::asinh(ele))
-        }
-
-        if (data.len() == 0) {
-            break ();
+        match self.data.pop_front() {
+            Option::Some(item) => {
+                result.append(FixedTrait::new_unscaled((*item.mag).into(), false).asinh());
+            },
+            Option::None(_) => {
+                break;
+            }
         };
     };
 
-    return TensorTrait::<FixedType>::new(*self.shape, result.span(), *self.extra);
+    return TensorTrait::<FixedType>::new(self.shape, result.span(), self.extra);
 }
+
diff --git a/src/operators/tensor/math/asinh/asinh_u32/core.cairo b/src/operators/tensor/math/asinh/asinh_u32/core.cairo
index e2aed5d9b..5afc2550f 100644
--- a/src/operators/tensor/math/asinh/asinh_u32/core.cairo
+++ b/src/operators/tensor/math/asinh/asinh_u32/core.cairo
@@ -9,11 +9,11 @@ fn asinh_u32(self: @Tensor<u32>) -> Option<Tensor<FixedType>> {
     match *self.extra {
         Option::Some(extra_params) => match extra_params.fixed_point {
             Option::Some(fixed_point) => match fixed_point {
-                FixedImpl::FP8x23(()) => Option::Some(fp8x23::asinh(self)),
-                FixedImpl::FP16x16(()) => Option::Some(fp16x16::asinh(self)),
+                FixedImpl::FP8x23(()) => Option::Some(fp8x23::asinh(*self)),
+                FixedImpl::FP16x16(()) => Option::Some(fp16x16::asinh(*self)),
             },
-            Option::None(_) => Option::Some(fp16x16::asinh(self)),
+            Option::None(_) => Option::Some(fp16x16::asinh(*self)),
         },
-        Option::None(_) => Option::Some(fp16x16::asinh(self)),
+        Option::None(_) => Option::Some(fp16x16::asinh(*self)),
     }
 }
diff --git a/src/operators/tensor/math/asinh/asinh_u32/fp16x16.cairo b/src/operators/tensor/math/asinh/asinh_u32/fp16x16.cairo
index 674586bb3..7d73ec207 100644
--- a/src/operators/tensor/math/asinh/asinh_u32/fp16x16.cairo
+++ b/src/operators/tensor/math/asinh/asinh_u32/fp16x16.cairo
@@ -11,19 +11,19 @@ use orion::numbers::fixed_point::implementations::fp16x16::core::FP16x16Impl;
 
 
 /// Cf: TensorTrait::asinh docstring
-fn asinh(self: @Tensor<u32>) -> Tensor<FixedType> {
+fn asinh(mut self: Tensor<u32>) -> Tensor<FixedType> {
     let mut result = ArrayTrait::new();
-    let mut data = *self.data;
 
     loop {
-        let ele = FixedTrait::new_unscaled((*data.pop_front().unwrap()), false);
-
-        result.append(FixedTrait::asinh(ele));
-
-        if (data.len() == 0) {
-            break ();
+        match self.data.pop_front() {
+            Option::Some(item) => {
+                result.append(FixedTrait::new_unscaled(*item, false).asinh());
+            },
+            Option::None(_) => {
+                break;
+            }
         };
     };
 
-    return TensorTrait::<FixedType>::new(*self.shape, result.span(), *self.extra);
+    return TensorTrait::<FixedType>::new(self.shape, result.span(), self.extra);
 }
diff --git a/src/operators/tensor/math/asinh/asinh_u32/fp8x23.cairo b/src/operators/tensor/math/asinh/asinh_u32/fp8x23.cairo
index f24edfc90..91fcad656 100644
--- a/src/operators/tensor/math/asinh/asinh_u32/fp8x23.cairo
+++ b/src/operators/tensor/math/asinh/asinh_u32/fp8x23.cairo
@@ -11,19 +11,20 @@ use orion::numbers::fixed_point::implementations::fp8x23::core::FP8x23Impl;
 
 
 /// Cf: TensorTrait::asinh docstring
-fn asinh(self: @Tensor<u32>) -> Tensor<FixedType> {
+fn asinh(mut self: Tensor<u32>) -> Tensor<FixedType> {
     let mut result = ArrayTrait::new();
-    let mut data = *self.data;
 
     loop {
-        let ele = FixedTrait::new_unscaled((*data.pop_front().unwrap()), false);
-
-        result.append(FixedTrait::asinh(ele));
-
-        if (data.len() == 0) {
-            break ();
+        match self.data.pop_front() {
+            Option::Some(item) => {
+                result.append(FixedTrait::new_unscaled(*item, false).asinh());
+            },
+            Option::None(_) => {
+                break;
+            }
         };
     };
 
-    return TensorTrait::<FixedType>::new(*self.shape, result.span(), *self.extra);
+    return TensorTrait::<FixedType>::new(self.shape, result.span(), self.extra);
 }
+
 
From 1c800c490409ab02419408dee0fb237b1682e716 Mon Sep 17 00:00:00 2001
From: raphaelDkhn
Date: Wed, 23 Aug 2023 12:48:42 +0300
Subject: [PATCH 08/30] refactor atan
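
Besides applying the same match-based loop to atan, the first four hunks
fix a regression from the previous commit: the signed asinh paths were
rebuilding elements with a hard-coded `false` sign, so negative inputs
were treated as positive. The element's own sign flag is now forwarded.

A sketch of the conversion this relies on (illustrative only; `to_fixed`
is a made-up helper, not part of the diff):

    use orion::numbers::signed_integer::i32::i32;
    use orion::numbers::fixed_point::core::{FixedTrait, FixedType};
    use orion::numbers::fixed_point::implementations::fp16x16::core::FP16x16Impl;

    // i32 stores (mag: u32, sign: bool); keeping the sign preserves
    // the odd symmetry asinh(-x) == -asinh(x).
    fn to_fixed(item: @i32) -> FixedType {
        FixedTrait::new_unscaled(*item.mag, *item.sign)
    }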
---
 .../tensor/math/asinh/asinh_i32/fp16x16.cairo |  2 +-
 .../tensor/math/asinh/asinh_i32/fp8x23.cairo  |  2 +-
 .../tensor/math/asinh/asinh_i8/fp16x16.cairo  |  2 +-
 .../tensor/math/asinh/asinh_i8/fp8x23.cairo   |  2 +-
 .../tensor/math/atan/atan_fp/core.cairo       |  8 +++---
 .../tensor/math/atan/atan_fp/fp16x16.cairo    | 17 +++++++------
 .../tensor/math/atan/atan_fp/fp8x23.cairo     | 17 +++++++------
 .../tensor/math/atan/atan_i32/core.cairo      |  8 +++---
 .../tensor/math/atan/atan_i32/fp16x16.cairo   | 24 +++++++-----------
 .../tensor/math/atan/atan_i32/fp8x23.cairo    | 24 +++++++-----------
 .../tensor/math/atan/atan_i8/core.cairo       |  8 +++---
 .../tensor/math/atan/atan_i8/fp16x16.cairo    | 24 +++++++-----------
 .../tensor/math/atan/atan_i8/fp8x23.cairo     | 25 ++++++++-----------
 .../tensor/math/atan/atan_u32/core.cairo      |  8 +++---
 .../tensor/math/atan/atan_u32/fp16x16.cairo   | 18 ++++++-------
 .../tensor/math/atan/atan_u32/fp8x23.cairo    | 18 ++++++-------
 16 files changed, 93 insertions(+), 114 deletions(-)

diff --git a/src/operators/tensor/math/asinh/asinh_i32/fp16x16.cairo b/src/operators/tensor/math/asinh/asinh_i32/fp16x16.cairo
index 9db346fe5..a79e0f3d1 100644
--- a/src/operators/tensor/math/asinh/asinh_i32/fp16x16.cairo
+++ b/src/operators/tensor/math/asinh/asinh_i32/fp16x16.cairo
@@ -17,7 +17,7 @@ fn asinh(mut self: Tensor<i32>) -> Tensor<FixedType> {
     loop {
         match self.data.pop_front() {
             Option::Some(item) => {
-                result.append(FixedTrait::new_unscaled(*item.mag, false).asinh());
+                result.append(FixedTrait::new_unscaled(*item.mag, *item.sign).asinh());
             },
             Option::None(_) => {
                 break;
diff --git a/src/operators/tensor/math/asinh/asinh_i32/fp8x23.cairo b/src/operators/tensor/math/asinh/asinh_i32/fp8x23.cairo
index 0cda4f4de..da47d5205 100644
--- a/src/operators/tensor/math/asinh/asinh_i32/fp8x23.cairo
+++ b/src/operators/tensor/math/asinh/asinh_i32/fp8x23.cairo
@@ -17,7 +17,7 @@ fn asinh(mut self: Tensor<i32>) -> Tensor<FixedType> {
     loop {
         match self.data.pop_front() {
             Option::Some(item) => {
-                result.append(FixedTrait::new_unscaled(*item.mag, false).asinh());
+                result.append(FixedTrait::new_unscaled(*item.mag, *item.sign).asinh());
             },
             Option::None(_) => {
                 break;
diff --git a/src/operators/tensor/math/asinh/asinh_i8/fp16x16.cairo b/src/operators/tensor/math/asinh/asinh_i8/fp16x16.cairo
index 5feecccf6..7a95383ae 100644
--- a/src/operators/tensor/math/asinh/asinh_i8/fp16x16.cairo
+++ b/src/operators/tensor/math/asinh/asinh_i8/fp16x16.cairo
@@ -17,7 +17,7 @@ fn asinh(mut self: Tensor<i8>) -> Tensor<FixedType> {
     loop {
         match self.data.pop_front() {
             Option::Some(item) => {
-                result.append(FixedTrait::new_unscaled((*item.mag).into(), false).asinh());
+                result.append(FixedTrait::new_unscaled((*item.mag).into(), *item.sign).asinh());
             },
             Option::None(_) => {
                 break;
diff --git a/src/operators/tensor/math/asinh/asinh_i8/fp8x23.cairo b/src/operators/tensor/math/asinh/asinh_i8/fp8x23.cairo
index 58ac9f349..6ee4de9de 100644
--- a/src/operators/tensor/math/asinh/asinh_i8/fp8x23.cairo
+++ b/src/operators/tensor/math/asinh/asinh_i8/fp8x23.cairo
@@ -17,7 +17,7 @@ fn asinh(mut self: Tensor<i8>) -> Tensor<FixedType> {
     loop {
         match self.data.pop_front() {
             Option::Some(item) => {
-                result.append(FixedTrait::new_unscaled((*item.mag).into(), false).asinh());
+                result.append(FixedTrait::new_unscaled((*item.mag).into(), *item.sign).asinh());
             },
             Option::None(_) => {
                 break;
diff --git a/src/operators/tensor/math/atan/atan_fp/core.cairo b/src/operators/tensor/math/atan/atan_fp/core.cairo
index 5f54e6199..3d06bab6f 100644
--- a/src/operators/tensor/math/atan/atan_fp/core.cairo
+++ b/src/operators/tensor/math/atan/atan_fp/core.cairo
@@ -8,11 +8,11 @@ fn atan(self: @Tensor<FixedType>) -> Option<Tensor<FixedType>> {
     match *self.extra {
         Option::Some(extra_params) => match extra_params.fixed_point {
             Option::Some(fixed_point) => match fixed_point {
-                FixedImpl::FP8x23(()) => Option::Some(fp8x23::atan(self)),
-                FixedImpl::FP16x16(()) => Option::Some(fp16x16::atan(self)),
+                FixedImpl::FP8x23(()) => Option::Some(fp8x23::atan(*self)),
+                FixedImpl::FP16x16(()) => Option::Some(fp16x16::atan(*self)),
             },
-            Option::None(_) => Option::Some(fp16x16::atan(self)),
+            Option::None(_) => Option::Some(fp16x16::atan(*self)),
         },
-        Option::None(_) => Option::Some(fp16x16::atan(self)),
+        Option::None(_) => Option::Some(fp16x16::atan(*self)),
     }
 }
diff --git a/src/operators/tensor/math/atan/atan_fp/fp16x16.cairo b/src/operators/tensor/math/atan/atan_fp/fp16x16.cairo
index c62c45c4b..c4235c39a 100644
--- a/src/operators/tensor/math/atan/atan_fp/fp16x16.cairo
+++ b/src/operators/tensor/math/atan/atan_fp/fp16x16.cairo
@@ -8,18 +8,19 @@ use orion::operators::tensor::implementations::impl_tensor_fp::Tensor_fp;
 use orion::numbers::fixed_point::implementations::fp16x16::core::FP16x16Impl;
 
 
-fn atan(self: @Tensor<FixedType>) -> Tensor<FixedType> {
+fn atan(mut self: Tensor<FixedType>) -> Tensor<FixedType> {
     let mut result = ArrayTrait::new();
-    let mut data = *self.data;
 
     loop {
-        let ele = *data.pop_front().unwrap();
-        result.append(FixedTrait::atan(ele));
-
-        if (data.len() == 0) {
-            break ();
+        match self.data.pop_front() {
+            Option::Some(item) => {
+                result.append((*item).atan());
+            },
+            Option::None(_) => {
+                break;
+            }
         };
     };
 
-    return TensorTrait::<FixedType>::new(*self.shape, result.span(), *self.extra);
+    return TensorTrait::<FixedType>::new(self.shape, result.span(), self.extra);
 }
diff --git a/src/operators/tensor/math/atan/atan_fp/fp8x23.cairo b/src/operators/tensor/math/atan/atan_fp/fp8x23.cairo
index 1ad5b3dbc..b1bd2f050 100644
--- a/src/operators/tensor/math/atan/atan_fp/fp8x23.cairo
+++ b/src/operators/tensor/math/atan/atan_fp/fp8x23.cairo
@@ -8,18 +8,19 @@ use orion::operators::tensor::implementations::impl_tensor_fp::Tensor_fp;
 use orion::numbers::fixed_point::implementations::fp8x23::core::FP8x23Impl;
 
 
-fn atan(self: @Tensor<FixedType>) -> Tensor<FixedType> {
+fn atan(mut self: Tensor<FixedType>) -> Tensor<FixedType> {
     let mut result = ArrayTrait::new();
-    let mut data = *self.data;
 
     loop {
-        let ele = *data.pop_front().unwrap();
-        result.append(FixedTrait::atan(ele));
-
-        if (data.len() == 0) {
-            break ();
+        match self.data.pop_front() {
+            Option::Some(item) => {
+                result.append((*item).atan());
+            },
+            Option::None(_) => {
+                break;
+            }
         };
     };
 
-    return TensorTrait::<FixedType>::new(*self.shape, result.span(), *self.extra);
+    return TensorTrait::<FixedType>::new(self.shape, result.span(), self.extra);
 }
diff --git a/src/operators/tensor/math/atan/atan_i32/core.cairo b/src/operators/tensor/math/atan/atan_i32/core.cairo
index 9055bdb11..57bb7d649 100644
--- a/src/operators/tensor/math/atan/atan_i32/core.cairo
+++ b/src/operators/tensor/math/atan/atan_i32/core.cairo
@@ -8,11 +8,11 @@ fn atan_i32(self: @Tensor<i32>) -> Option<Tensor<FixedType>> {
     match *self.extra {
         Option::Some(extra_params) => match extra_params.fixed_point {
             Option::Some(fixed_point) => match fixed_point {
-                FixedImpl::FP8x23(()) => Option::Some(fp8x23::atan(self)),
-                FixedImpl::FP16x16(()) => Option::Some(fp16x16::atan(self)),
+                FixedImpl::FP8x23(()) => Option::Some(fp8x23::atan(*self)),
+                FixedImpl::FP16x16(()) => Option::Some(fp16x16::atan(*self)),
             },
-            Option::None(_) => Option::Some(fp16x16::atan(self)),
+            Option::None(_) => Option::Some(fp16x16::atan(*self)),
         },
-        Option::None(_) => Option::Some(fp16x16::atan(self)),
+        Option::None(_) => Option::Some(fp16x16::atan(*self)),
     }
 }
diff --git a/src/operators/tensor/math/atan/atan_i32/fp16x16.cairo b/src/operators/tensor/math/atan/atan_i32/fp16x16.cairo
index 516238e1f..600c9a57d 100644
--- a/src/operators/tensor/math/atan/atan_i32/fp16x16.cairo
+++ b/src/operators/tensor/math/atan/atan_i32/fp16x16.cairo
@@ -10,25 +10,19 @@ use orion::operators::tensor::implementations::impl_tensor_fp::Tensor_fp;
 use orion::numbers::fixed_point::implementations::fp16x16::core::FP16x16Impl;
 
 
-fn atan(self: @Tensor<i32>) -> Tensor<FixedType> {
+fn atan(mut self: Tensor<i32>) -> Tensor<FixedType> {
     let mut result = ArrayTrait::new();
-    let mut data = *self.data;
 
     loop {
-        let ele = *data.pop_front().unwrap();
-
-        if ele.sign == true {
-            let ele = FixedTrait::new_unscaled(ele.mag.into(), ele.sign);
-            result.append(FixedTrait::atan(ele))
-        } else {
-            let ele = FixedTrait::new_unscaled(ele.mag.into(), ele.sign);
-            result.append(FixedTrait::atan(ele))
-        }
-
-        if (data.len() == 0) {
-            break ();
+        match self.data.pop_front() {
+            Option::Some(item) => {
+                result.append(FixedTrait::new_unscaled((*item).mag, *item.sign).atan());
+            },
+            Option::None(_) => {
+                break;
+            }
         };
     };
 
-    return TensorTrait::<FixedType>::new(*self.shape, result.span(), *self.extra);
+    return TensorTrait::<FixedType>::new(self.shape, result.span(), self.extra);
 }
diff --git a/src/operators/tensor/math/atan/atan_i32/fp8x23.cairo b/src/operators/tensor/math/atan/atan_i32/fp8x23.cairo
index 237be5a70..49bb0b660 100644
--- a/src/operators/tensor/math/atan/atan_i32/fp8x23.cairo
+++ b/src/operators/tensor/math/atan/atan_i32/fp8x23.cairo
@@ -10,25 +10,19 @@ use orion::operators::tensor::implementations::impl_tensor_fp::Tensor_fp;
 use orion::numbers::fixed_point::implementations::fp8x23::core::FP8x23Impl;
 
 
-fn atan(self: @Tensor<i32>) -> Tensor<FixedType> {
+fn atan(mut self: Tensor<i32>) -> Tensor<FixedType> {
     let mut result = ArrayTrait::new();
-    let mut data = *self.data;
 
     loop {
-        let ele = *data.pop_front().unwrap();
-
-        if ele.sign == true {
-            let ele = FixedTrait::new_unscaled(ele.mag.into(), ele.sign);
-            result.append(FixedTrait::atan(ele))
-        } else {
-            let ele = FixedTrait::new_unscaled(ele.mag.into(), ele.sign);
-            result.append(FixedTrait::atan(ele))
-        }
-
-        if (data.len() == 0) {
-            break ();
+        match self.data.pop_front() {
+            Option::Some(item) => {
+                result.append(FixedTrait::new_unscaled((*item).mag, *item.sign).atan());
+            },
+            Option::None(_) => {
+                break;
+            }
         };
     };
 
-    return TensorTrait::<FixedType>::new(*self.shape, result.span(), *self.extra);
+    return TensorTrait::<FixedType>::new(self.shape, result.span(), self.extra);
 }
diff --git a/src/operators/tensor/math/atan/atan_i8/core.cairo b/src/operators/tensor/math/atan/atan_i8/core.cairo
index f896247f0..e552560d1 100644
--- a/src/operators/tensor/math/atan/atan_i8/core.cairo
+++ b/src/operators/tensor/math/atan/atan_i8/core.cairo
@@ -8,11 +8,11 @@ fn atan_i8(self: @Tensor<i8>) -> Option<Tensor<FixedType>> {
     match *self.extra {
         Option::Some(extra_params) => match extra_params.fixed_point {
             Option::Some(fixed_point) => match fixed_point {
-                FixedImpl::FP8x23(()) => Option::Some(fp8x23::atan(self)),
-                FixedImpl::FP16x16(()) => Option::Some(fp16x16::atan(self)),
+                FixedImpl::FP8x23(()) => Option::Some(fp8x23::atan(*self)),
+                FixedImpl::FP16x16(()) => Option::Some(fp16x16::atan(*self)),
             },
-            Option::None(_) => Option::Some(fp16x16::atan(self)),
+            Option::None(_) => Option::Some(fp16x16::atan(*self)),
         },
-        Option::None(_) => Option::Some(fp16x16::atan(self)),
+        Option::None(_) => Option::Some(fp16x16::atan(*self)),
     }
 }
diff --git a/src/operators/tensor/math/atan/atan_i8/fp16x16.cairo b/src/operators/tensor/math/atan/atan_i8/fp16x16.cairo
index 5c34693fe..4b75090dd 100644
--- a/src/operators/tensor/math/atan/atan_i8/fp16x16.cairo
+++ b/src/operators/tensor/math/atan/atan_i8/fp16x16.cairo
@@ -10,25 +10,19 @@ use orion::operators::tensor::implementations::impl_tensor_fp::Tensor_fp;
 use orion::numbers::fixed_point::implementations::fp16x16::core::FP16x16Impl;
 
 
-fn atan(self: @Tensor<i8>) -> Tensor<FixedType> {
+fn atan(mut self: Tensor<i8>) -> Tensor<FixedType> {
     let mut result = ArrayTrait::new();
-    let mut data = *self.data;
 
     loop {
-        let ele = *data.pop_front().unwrap();
-
-        if ele.sign == true {
-            let ele = FixedTrait::new_unscaled(ele.mag.into(), ele.sign);
-            result.append(FixedTrait::atan(ele))
-        } else {
-            let ele = FixedTrait::new_unscaled(ele.mag.into(), ele.sign);
-            result.append(FixedTrait::atan(ele))
-        }
-
-        if (data.len() == 0) {
-            break ();
+        match self.data.pop_front() {
+            Option::Some(item) => {
+                result.append(FixedTrait::new_unscaled((*item).mag.into(), *item.sign).atan());
+            },
+            Option::None(_) => {
+                break;
+            }
         };
     };
 
-    return TensorTrait::<FixedType>::new(*self.shape, result.span(), *self.extra);
+    return TensorTrait::<FixedType>::new(self.shape, result.span(), self.extra);
 }
diff --git a/src/operators/tensor/math/atan/atan_i8/fp8x23.cairo b/src/operators/tensor/math/atan/atan_i8/fp8x23.cairo
index d6d5acf80..fcfd18a25 100644
--- a/src/operators/tensor/math/atan/atan_i8/fp8x23.cairo
+++ b/src/operators/tensor/math/atan/atan_i8/fp8x23.cairo
@@ -10,25 +10,20 @@ use orion::operators::tensor::implementations::impl_tensor_fp::Tensor_fp;
 use orion::numbers::fixed_point::implementations::fp8x23::core::FP8x23Impl;
 
 
-fn atan(self: @Tensor<i8>) -> Tensor<FixedType> {
+fn atan(mut self: Tensor<i8>) -> Tensor<FixedType> {
     let mut result = ArrayTrait::new();
-    let mut data = *self.data;
 
     loop {
-        let ele = *data.pop_front().unwrap();
-
-        if ele.sign == true {
-            let ele = FixedTrait::new_unscaled(ele.mag.into(), ele.sign);
-            result.append(FixedTrait::atan(ele))
-        } else {
-            let ele = FixedTrait::new_unscaled(ele.mag.into(), ele.sign);
-            result.append(FixedTrait::atan(ele))
-        }
-
-        if (data.len() == 0) {
-            break ();
+        match self.data.pop_front() {
+            Option::Some(item) => {
+                result.append(FixedTrait::new_unscaled((*item).mag.into(), *item.sign).atan());
+            },
+            Option::None(_) => {
+                break;
+            }
         };
     };
 
-    return TensorTrait::<FixedType>::new(*self.shape, result.span(), *self.extra);
+    return TensorTrait::<FixedType>::new(self.shape, result.span(), self.extra);
 }
+
diff --git a/src/operators/tensor/math/atan/atan_u32/core.cairo b/src/operators/tensor/math/atan/atan_u32/core.cairo
index 5ca726fe4..400b9fa07 100644
--- a/src/operators/tensor/math/atan/atan_u32/core.cairo
+++ b/src/operators/tensor/math/atan/atan_u32/core.cairo
@@ -8,11 +8,11 @@ fn atan_u32(self: @Tensor<u32>) -> Option<Tensor<FixedType>> {
     match *self.extra {
         Option::Some(extra_params) => match extra_params.fixed_point {
             Option::Some(fixed_point) => match fixed_point {
-                FixedImpl::FP8x23(()) => Option::Some(fp8x23::atan(self)),
-                FixedImpl::FP16x16(()) => Option::Some(fp16x16::atan(self)),
+                FixedImpl::FP8x23(()) => Option::Some(fp8x23::atan(*self)),
+                FixedImpl::FP16x16(()) => Option::Some(fp16x16::atan(*self)),
             },
-            Option::None(_) => Option::Some(fp16x16::atan(self)),
+            Option::None(_) => Option::Some(fp16x16::atan(*self)),
         },
-        Option::None(_) => Option::Some(fp16x16::atan(self)),
+        Option::None(_) => Option::Some(fp16x16::atan(*self)),
     }
 }
diff --git a/src/operators/tensor/math/atan/atan_u32/fp16x16.cairo b/src/operators/tensor/math/atan/atan_u32/fp16x16.cairo
index 57963ade5..c3ea1677c 100644
--- a/src/operators/tensor/math/atan/atan_u32/fp16x16.cairo
+++ b/src/operators/tensor/math/atan/atan_u32/fp16x16.cairo
@@ -10,19 +10,19 @@ use orion::operators::tensor::implementations::impl_tensor_fp::Tensor_fp;
 use orion::numbers::fixed_point::implementations::fp16x16::core::FP16x16Impl;
 
 
-fn atan(self: @Tensor<u32>) -> Tensor<FixedType> {
+fn atan(mut self: Tensor<u32>) -> Tensor<FixedType> {
     let mut result = ArrayTrait::new();
-    let mut data = *self.data;
 
     loop {
-        let ele = FixedTrait::new_unscaled((*data.pop_front().unwrap()), false);
-
-        result.append(FixedTrait::atan(ele));
-
-        if (data.len() == 0) {
-            break ();
+        match self.data.pop_front() {
+            Option::Some(item) => {
+                result.append(FixedTrait::new_unscaled(*item, false).atan());
+            },
+            Option::None(_) => {
+                break;
+            }
         };
     };
 
-    return TensorTrait::<FixedType>::new(*self.shape, result.span(), *self.extra);
+    return TensorTrait::<FixedType>::new(self.shape, result.span(), self.extra);
 }
diff --git a/src/operators/tensor/math/atan/atan_u32/fp8x23.cairo b/src/operators/tensor/math/atan/atan_u32/fp8x23.cairo
index 613441870..f523e3cdc 100644
--- a/src/operators/tensor/math/atan/atan_u32/fp8x23.cairo
+++ b/src/operators/tensor/math/atan/atan_u32/fp8x23.cairo
@@ -10,19 +10,19 @@ use orion::operators::tensor::implementations::impl_tensor_fp::Tensor_fp;
 use orion::numbers::fixed_point::implementations::fp8x23::core::FP8x23Impl;
 
 
-fn atan(self: @Tensor<u32>) -> Tensor<FixedType> {
+fn atan(mut self: Tensor<u32>) -> Tensor<FixedType> {
     let mut result = ArrayTrait::new();
-    let mut data = *self.data;
 
     loop {
-        let ele = FixedTrait::new_unscaled((*data.pop_front().unwrap()), false);
-
-        result.append(FixedTrait::atan(ele));
-
-        if (data.len() == 0) {
-            break ();
+        match self.data.pop_front() {
+            Option::Some(item) => {
+                result.append(FixedTrait::new_unscaled(*item, false).atan());
+            },
+            Option::None(_) => {
+                break;
+            }
         };
     };
 
-    return TensorTrait::<FixedType>::new(*self.shape, result.span(), *self.extra);
+    return TensorTrait::<FixedType>::new(self.shape, result.span(), self.extra);
 }
 
From 98b3621d8a3c9117e2804c4b77e082c26714e198 Mon Sep 17 00:00:00 2001
From: raphaelDkhn
Date: Wed, 23 Aug 2023 12:51:51 +0300
Subject: [PATCH 09/30] refactor ceil
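
Same loop shape as the previous commits, applied to ceil (only the
fixed-point implementation is touched here). Taking `mut z:
Tensor<FixedType>` by value is cheap and side-effect free: a `Span` is a
copyable window over the data, so `pop_front()` only advances the local
window and the caller's tensor is untouched.

A minimal sketch of that property (illustrative, not part of the diff):

    use array::{ArrayTrait, SpanTrait};

    // Consumes its local copy of the span; the caller's span is unchanged.
    fn count(mut s: Span<usize>) -> usize {
        let mut n: usize = 0;
        loop {
            match s.pop_front() {
                Option::Some(_) => { n += 1; },
                Option::None(_) => { break; }
            };
        };
        return n;
    }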
---
 .../tensor/math/ceil/ceil_fp/core.cairo       |  8 ++++----
 .../tensor/math/ceil/ceil_fp/fp16x16.cairo    | 18 ++++++++++--------
 .../tensor/math/ceil/ceil_fp/fp8x23.cairo     | 19 +++++++++++--------
 3 files changed, 25 insertions(+), 20 deletions(-)

diff --git a/src/operators/tensor/math/ceil/ceil_fp/core.cairo b/src/operators/tensor/math/ceil/ceil_fp/core.cairo
index c69139305..8b40cc10c 100644
--- a/src/operators/tensor/math/ceil/ceil_fp/core.cairo
+++ b/src/operators/tensor/math/ceil/ceil_fp/core.cairo
@@ -10,11 +10,11 @@ fn ceil(z: @Tensor<FixedType>) -> Option<Tensor<FixedType>> {
     match *z.extra {
         Option::Some(extra_params) => match extra_params.fixed_point {
             Option::Some(fixed_point) => match fixed_point {
-                FixedImpl::FP8x23(()) => Option::Some(fp8x23::ceil(z)),
-                FixedImpl::FP16x16(()) => Option::Some(fp16x16::ceil(z)),
+                FixedImpl::FP8x23(()) => Option::Some(fp8x23::ceil(*z)),
+                FixedImpl::FP16x16(()) => Option::Some(fp16x16::ceil(*z)),
             },
-            Option::None(_) => Option::Some(fp16x16::ceil(z)),
+            Option::None(_) => Option::Some(fp16x16::ceil(*z)),
         },
-        Option::None(_) => Option::Some(fp16x16::ceil(z)),
+        Option::None(_) => Option::Some(fp16x16::ceil(*z)),
     }
 }
diff --git a/src/operators/tensor/math/ceil/ceil_fp/fp16x16.cairo b/src/operators/tensor/math/ceil/ceil_fp/fp16x16.cairo
index 578a72519..ea3ce845c 100644
--- a/src/operators/tensor/math/ceil/ceil_fp/fp16x16.cairo
+++ b/src/operators/tensor/math/ceil/ceil_fp/fp16x16.cairo
@@ -8,17 +8,19 @@ use orion::operators::tensor::core::{Tensor, TensorTrait};
 
 
 /// Cf: TensorTrait::ceil docstring
-fn ceil(z: @Tensor<FixedType>) -> Tensor<FixedType> {
+fn ceil(mut z: Tensor<FixedType>) -> Tensor<FixedType> {
     let mut data_result = ArrayTrait::<FixedType>::new();
-    let mut data = *z.data;
+
     loop {
-        if data.len() == 0 {
-            break ();
+        match z.data.pop_front() {
+            Option::Some(item) => {
+                data_result.append((*item).ceil());
+            },
+            Option::None(_) => {
+                break;
+            }
         };
-
-        let current_index = *data.pop_front().unwrap();
-        data_result.append(current_index.ceil());
     };
 
-    return TensorTrait::new(*z.shape, data_result.span(), *z.extra);
+    return TensorTrait::new(z.shape, data_result.span(), z.extra);
 }
diff --git a/src/operators/tensor/math/ceil/ceil_fp/fp8x23.cairo b/src/operators/tensor/math/ceil/ceil_fp/fp8x23.cairo
index 3fbf1cebe..814618b1e 100644
--- a/src/operators/tensor/math/ceil/ceil_fp/fp8x23.cairo
+++ b/src/operators/tensor/math/ceil/ceil_fp/fp8x23.cairo
@@ -8,17 +8,20 @@ use orion::operators::tensor::core::{Tensor, TensorTrait};
 
 
 /// Cf: TensorTrait::ceil docstring
-fn ceil(z: @Tensor<FixedType>) -> Tensor<FixedType> {
+fn ceil(mut z: Tensor<FixedType>) -> Tensor<FixedType> {
     let mut data_result = ArrayTrait::<FixedType>::new();
-    let mut data = *z.data;
+
    loop {
-        if data.len() == 0 {
-            break ();
+        match z.data.pop_front() {
+            Option::Some(item) => {
+                data_result.append((*item).ceil());
+            },
+            Option::None(_) => {
+                break;
+            }
         };
-
-        let current_index = *data.pop_front().unwrap();
-        data_result.append(current_index.ceil());
     };
 
-    return TensorTrait::new(*z.shape, data_result.span(), *z.extra);
+    return TensorTrait::new(z.shape, data_result.span(), z.extra);
 }
+
 
From 907224e2adca533b70c9c1df461ff0202d0542a0 Mon Sep 17 00:00:00 2001
From: raphaelDkhn
Date: Wed, 23 Aug 2023 12:59:36 +0300
Subject: [PATCH 10/30] refactor cos
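
cos follows the same template. With the relevant fixed-point
implementation in scope, the element operation reads as a method call on
the popped item, which keeps each arm of the loop to a single line.

A sketch (illustrative, not part of the diff):

    use orion::numbers::fixed_point::core::{FixedTrait, FixedType};
    use orion::numbers::fixed_point::implementations::fp16x16::core::FP16x16Impl;

    fn cos_one(x: FixedType) -> FixedType {
        // method form; equivalent to FixedTrait::cos(x)
        x.cos()
    }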
---
 .../tensor/math/cos/cos_fp/core.cairo         |  8 +++----
 .../tensor/math/cos/cos_fp/fp16x16.cairo      | 17 ++++++-------
 .../tensor/math/cos/cos_fp/fp8x23.cairo       | 17 ++++++-------
 .../tensor/math/cos/cos_i32/core.cairo        |  8 +++----
 .../tensor/math/cos/cos_i32/fp16x16.cairo     | 24 +++++++------------
 .../tensor/math/cos/cos_i32/fp8x23.cairo      | 24 +++++++------------
 .../tensor/math/cos/cos_i8/core.cairo         |  8 +++----
 .../tensor/math/cos/cos_i8/fp16x16.cairo      | 24 +++++++------------
 .../tensor/math/cos/cos_i8/fp8x23.cairo       | 24 +++++++------------
 .../tensor/math/cos/cos_u32/core.cairo        |  8 +++----
 .../tensor/math/cos/cos_u32/fp16x16.cairo     | 18 +++++++-------
 .../tensor/math/cos/cos_u32/fp8x23.cairo      | 18 +++++++-------
 12 files changed, 88 insertions(+), 110 deletions(-)

diff --git a/src/operators/tensor/math/cos/cos_fp/core.cairo b/src/operators/tensor/math/cos/cos_fp/core.cairo
index 32757f462..cae21faf3 100644
--- a/src/operators/tensor/math/cos/cos_fp/core.cairo
+++ b/src/operators/tensor/math/cos/cos_fp/core.cairo
@@ -8,11 +8,11 @@ fn cos(self: @Tensor<FixedType>) -> Option<Tensor<FixedType>> {
     match *self.extra {
         Option::Some(extra_params) => match extra_params.fixed_point {
             Option::Some(fixed_point) => match fixed_point {
-                FixedImpl::FP8x23(()) => Option::Some(fp8x23::cos(self)),
-                FixedImpl::FP16x16(()) => Option::Some(fp16x16::cos(self)),
+                FixedImpl::FP8x23(()) => Option::Some(fp8x23::cos(*self)),
+                FixedImpl::FP16x16(()) => Option::Some(fp16x16::cos(*self)),
             },
-            Option::None(_) => Option::Some(fp16x16::cos(self)),
+            Option::None(_) => Option::Some(fp16x16::cos(*self)),
         },
-        Option::None(_) => Option::Some(fp16x16::cos(self)),
+        Option::None(_) => Option::Some(fp16x16::cos(*self)),
     }
 }
diff --git a/src/operators/tensor/math/cos/cos_fp/fp16x16.cairo b/src/operators/tensor/math/cos/cos_fp/fp16x16.cairo
index d0833ee82..747638960 100644
--- a/src/operators/tensor/math/cos/cos_fp/fp16x16.cairo
+++ b/src/operators/tensor/math/cos/cos_fp/fp16x16.cairo
@@ -9,18 +9,19 @@ use orion::numbers::fixed_point::implementations::fp16x16::core::FP16x16Impl;
 
 
 /// Cf: TensorTrait::cos docstring
-fn cos(self: @Tensor<FixedType>) -> Tensor<FixedType> {
+fn cos(mut self: Tensor<FixedType>) -> Tensor<FixedType> {
     let mut result = ArrayTrait::new();
-    let mut data = *self.data;
 
     loop {
-        let ele = *data.pop_front().unwrap();
-        result.append(FixedTrait::cos(ele));
-
-        if (data.len() == 0) {
-            break ();
+        match self.data.pop_front() {
+            Option::Some(item) => {
+                result.append((*item).cos());
+            },
+            Option::None(_) => {
+                break;
+            }
         };
     };
 
-    return TensorTrait::<FixedType>::new(*self.shape, result.span(), *self.extra);
+    return TensorTrait::<FixedType>::new(self.shape, result.span(), self.extra);
 }
diff --git a/src/operators/tensor/math/cos/cos_fp/fp8x23.cairo b/src/operators/tensor/math/cos/cos_fp/fp8x23.cairo
index 19b7c9f2c..88878ac8f 100644
--- a/src/operators/tensor/math/cos/cos_fp/fp8x23.cairo
+++ b/src/operators/tensor/math/cos/cos_fp/fp8x23.cairo
@@ -9,18 +9,19 @@ use orion::numbers::fixed_point::implementations::fp8x23::core::FP8x23Impl;
 
 
 /// Cf: TensorTrait::cos docstring
-fn cos(self: @Tensor<FixedType>) -> Tensor<FixedType> {
+fn cos(mut self: Tensor<FixedType>) -> Tensor<FixedType> {
     let mut result = ArrayTrait::new();
-    let mut data = *self.data;
 
     loop {
-        let ele = *data.pop_front().unwrap();
-        result.append(FixedTrait::cos(ele));
-
-        if (data.len() == 0) {
-            break ();
+        match self.data.pop_front() {
+            Option::Some(item) => {
+                result.append((*item).cos());
+            },
+            Option::None(_) => {
+                break;
+            }
         };
     };
 
-    return TensorTrait::<FixedType>::new(*self.shape, result.span(), *self.extra);
+    return TensorTrait::<FixedType>::new(self.shape, result.span(), self.extra);
 }
diff --git a/src/operators/tensor/math/cos/cos_i32/core.cairo b/src/operators/tensor/math/cos/cos_i32/core.cairo
index 1a019edde..baf106dd2 100644
--- a/src/operators/tensor/math/cos/cos_i32/core.cairo
+++ b/src/operators/tensor/math/cos/cos_i32/core.cairo
@@ -9,11 +9,11 @@ fn cos_i32(self: @Tensor<i32>) -> Option<Tensor<FixedType>> {
     match *self.extra {
         Option::Some(extra_params) => match extra_params.fixed_point {
             Option::Some(fixed_point) => match fixed_point {
-                FixedImpl::FP8x23(()) => Option::Some(fp8x23::cos(self)),
-                FixedImpl::FP16x16(()) => Option::Some(fp16x16::cos(self)),
+                FixedImpl::FP8x23(()) => Option::Some(fp8x23::cos(*self)),
+                FixedImpl::FP16x16(()) => Option::Some(fp16x16::cos(*self)),
             },
-            Option::None(_) => Option::Some(fp16x16::cos(self)),
+            Option::None(_) => Option::Some(fp16x16::cos(*self)),
         },
-        Option::None(_) => Option::Some(fp16x16::cos(self)),
+        Option::None(_) => Option::Some(fp16x16::cos(*self)),
     }
 }
diff --git a/src/operators/tensor/math/cos/cos_i32/fp16x16.cairo b/src/operators/tensor/math/cos/cos_i32/fp16x16.cairo
index 1c878b9a3..a3f9a67aa 100644
--- a/src/operators/tensor/math/cos/cos_i32/fp16x16.cairo
+++ b/src/operators/tensor/math/cos/cos_i32/fp16x16.cairo
@@ -11,25 +11,19 @@ use orion::numbers::fixed_point::implementations::fp16x16::core::FP16x16Impl;
 
 
 /// Cf: TensorTrait::cos docstring
-fn cos(self: @Tensor<i32>) -> Tensor<FixedType> {
+fn cos(mut self: Tensor<i32>) -> Tensor<FixedType> {
     let mut result = ArrayTrait::new();
-    let mut data = *self.data;
 
     loop {
-        let ele = *data.pop_front().unwrap();
-
-        if ele.sign == true {
-            let ele = FixedTrait::new_unscaled(ele.mag.into(), ele.sign);
-            result.append(FixedTrait::cos(ele))
-        } else {
-            let ele = FixedTrait::new_unscaled(ele.mag.into(), ele.sign);
-            result.append(FixedTrait::cos(ele))
-        }
-
-        if (data.len() == 0) {
-            break ();
+        match self.data.pop_front() {
+            Option::Some(item) => {
+                result.append(FixedTrait::new_unscaled(*item.mag, *item.sign).cos());
+            },
+            Option::None(_) => {
+                break;
+            }
         };
     };
 
-    return TensorTrait::<FixedType>::new(*self.shape, result.span(), *self.extra);
+    return TensorTrait::<FixedType>::new(self.shape, result.span(), self.extra);
 }
diff --git a/src/operators/tensor/math/cos/cos_i32/fp8x23.cairo b/src/operators/tensor/math/cos/cos_i32/fp8x23.cairo
index 6b56aa5bf..794c6ccaa 100644
--- a/src/operators/tensor/math/cos/cos_i32/fp8x23.cairo
+++ b/src/operators/tensor/math/cos/cos_i32/fp8x23.cairo
@@ -11,25 +11,19 @@ use orion::numbers::fixed_point::implementations::fp8x23::core::FP8x23Impl;
 
 
 /// Cf: TensorTrait::cos docstring
-fn cos(self: @Tensor<i32>) -> Tensor<FixedType> {
+fn cos(mut self: Tensor<i32>) -> Tensor<FixedType> {
     let mut result = ArrayTrait::new();
-    let mut data = *self.data;
 
     loop {
-        let ele = *data.pop_front().unwrap();
-
-        if ele.sign == true {
-            let ele = FixedTrait::new_unscaled(ele.mag.into(), ele.sign);
-            result.append(FixedTrait::cos(ele))
-        } else {
-            let ele = FixedTrait::new_unscaled(ele.mag.into(), ele.sign);
-            result.append(FixedTrait::cos(ele))
-        }
-
-        if (data.len() == 0) {
-            break ();
+        match self.data.pop_front() {
+            Option::Some(item) => {
+                result.append(FixedTrait::new_unscaled(*item.mag, *item.sign).cos());
+            },
+            Option::None(_) => {
+                break;
+            }
         };
     };
 
-    return TensorTrait::<FixedType>::new(*self.shape, result.span(), *self.extra);
+    return TensorTrait::<FixedType>::new(self.shape, result.span(), self.extra);
 }
diff --git a/src/operators/tensor/math/cos/cos_i8/core.cairo b/src/operators/tensor/math/cos/cos_i8/core.cairo
index 21ce7587c..7b66ebf58 100644
--- a/src/operators/tensor/math/cos/cos_i8/core.cairo
+++ b/src/operators/tensor/math/cos/cos_i8/core.cairo
@@ -9,11 +9,11 @@ fn cos_i8(self: @Tensor<i8>) -> Option<Tensor<FixedType>> {
     match *self.extra {
         Option::Some(extra_params) => match extra_params.fixed_point {
             Option::Some(fixed_point) => match fixed_point {
-                FixedImpl::FP8x23(()) => Option::Some(fp8x23::cos(self)),
-                FixedImpl::FP16x16(()) => Option::Some(fp16x16::cos(self)),
+                FixedImpl::FP8x23(()) => Option::Some(fp8x23::cos(*self)),
+                FixedImpl::FP16x16(()) => Option::Some(fp16x16::cos(*self)),
             },
-            Option::None(_) => Option::Some(fp16x16::cos(self)),
+            Option::None(_) => Option::Some(fp16x16::cos(*self)),
         },
-        Option::None(_) => Option::Some(fp16x16::cos(self)),
+        Option::None(_) => Option::Some(fp16x16::cos(*self)),
     }
 }
diff --git a/src/operators/tensor/math/cos/cos_i8/fp16x16.cairo b/src/operators/tensor/math/cos/cos_i8/fp16x16.cairo
index 06b9fcfb5..c6e3e03be 100644
--- a/src/operators/tensor/math/cos/cos_i8/fp16x16.cairo
+++ b/src/operators/tensor/math/cos/cos_i8/fp16x16.cairo
@@ -11,25 +11,19 @@ use orion::numbers::fixed_point::implementations::fp16x16::core::FP16x16Impl;
 
 
 /// Cf: TensorTrait::cos docstring
-fn cos(self: @Tensor<i8>) -> Tensor<FixedType> {
+fn cos(mut self: Tensor<i8>) -> Tensor<FixedType> {
     let mut result = ArrayTrait::new();
-    let mut data = *self.data;
 
     loop {
-        let ele = *data.pop_front().unwrap();
-
-        if ele.sign == true {
-            let ele = FixedTrait::new_unscaled(ele.mag.into(), ele.sign);
-            result.append(FixedTrait::cos(ele))
-        } else {
-            let ele = FixedTrait::new_unscaled(ele.mag.into(), ele.sign);
-            result.append(FixedTrait::cos(ele))
-        }
-
-        if (data.len() == 0) {
-            break ();
+        match self.data.pop_front() {
+            Option::Some(item) => {
+                result.append(FixedTrait::new_unscaled((*item.mag).into(), *item.sign).cos());
+            },
+            Option::None(_) => {
+                break;
+            }
         };
     };
 
-    return TensorTrait::<FixedType>::new(*self.shape, result.span(), *self.extra);
+    return TensorTrait::<FixedType>::new(self.shape, result.span(), self.extra);
 }
diff --git a/src/operators/tensor/math/cos/cos_i8/fp8x23.cairo b/src/operators/tensor/math/cos/cos_i8/fp8x23.cairo
index 4754dec8a..8be684847 100644
--- a/src/operators/tensor/math/cos/cos_i8/fp8x23.cairo
+++ b/src/operators/tensor/math/cos/cos_i8/fp8x23.cairo
@@ -11,25 +11,19 @@ use orion::numbers::fixed_point::implementations::fp8x23::core::FP8x23Impl;
 
 
 /// Cf: TensorTrait::cos docstring
-fn cos(self: @Tensor<i8>) -> Tensor<FixedType> {
+fn cos(mut self: Tensor<i8>) -> Tensor<FixedType> {
     let mut result = ArrayTrait::new();
-    let mut data = *self.data;
 
     loop {
-        let ele = *data.pop_front().unwrap();
-
-        if ele.sign == true {
-            let ele = FixedTrait::new_unscaled(ele.mag.into(), ele.sign);
-            result.append(FixedTrait::cos(ele))
-        } else {
-            let ele = FixedTrait::new_unscaled(ele.mag.into(), ele.sign);
-            result.append(FixedTrait::cos(ele))
-        }
-
-        if (data.len() == 0) {
-            break ();
+        match self.data.pop_front() {
+            Option::Some(item) => {
+                result.append(FixedTrait::new_unscaled((*item.mag).into(), *item.sign).cos());
+            },
+            Option::None(_) => {
+                break;
+            }
         };
     };
 
-    return TensorTrait::<FixedType>::new(*self.shape, result.span(), *self.extra);
+    return TensorTrait::<FixedType>::new(self.shape, result.span(), self.extra);
 }
diff --git a/src/operators/tensor/math/cos/cos_u32/core.cairo b/src/operators/tensor/math/cos/cos_u32/core.cairo
index d8f7fe368..322b1010f 100644
--- a/src/operators/tensor/math/cos/cos_u32/core.cairo
+++ b/src/operators/tensor/math/cos/cos_u32/core.cairo
@@ -9,11 +9,11 @@ fn cos_u32(self: @Tensor<u32>) -> Option<Tensor<FixedType>> {
     match *self.extra {
         Option::Some(extra_params) => match extra_params.fixed_point {
             Option::Some(fixed_point) => match fixed_point {
-                FixedImpl::FP8x23(()) => Option::Some(fp8x23::cos(self)),
-                FixedImpl::FP16x16(()) => Option::Some(fp16x16::cos(self)),
+                FixedImpl::FP8x23(()) => Option::Some(fp8x23::cos(*self)),
+                FixedImpl::FP16x16(()) => Option::Some(fp16x16::cos(*self)),
             },
-            Option::None(_) => Option::Some(fp16x16::cos(self)),
+            Option::None(_) => Option::Some(fp16x16::cos(*self)),
         },
-        Option::None(_) => Option::Some(fp16x16::cos(self)),
+        Option::None(_) => Option::Some(fp16x16::cos(*self)),
     }
 }
diff --git a/src/operators/tensor/math/cos/cos_u32/fp16x16.cairo b/src/operators/tensor/math/cos/cos_u32/fp16x16.cairo
index 84501141e..b7450b8e4 100644
--- a/src/operators/tensor/math/cos/cos_u32/fp16x16.cairo
+++ b/src/operators/tensor/math/cos/cos_u32/fp16x16.cairo
@@ -11,19 +11,19 @@ use orion::numbers::fixed_point::implementations::fp16x16::core::FP16x16Impl;
 
 
 /// Cf: TensorTrait::cos docstring
-fn cos(self: @Tensor<u32>) -> Tensor<FixedType> {
+fn cos(mut self: Tensor<u32>) -> Tensor<FixedType> {
     let mut result = ArrayTrait::new();
-    let mut data = *self.data;
 
     loop {
-        let ele = FixedTrait::new_unscaled((*data.pop_front().unwrap()), false);
-
-        result.append(FixedTrait::cos(ele));
-
-        if (data.len() == 0) {
-            break ();
+        match self.data.pop_front() {
+            Option::Some(item) => {
+                result.append(FixedTrait::new_unscaled(*item, false).cos());
+            },
+            Option::None(_) => {
+                break;
+            }
         };
     };
 
-    return TensorTrait::<FixedType>::new(*self.shape, result.span(), *self.extra);
+    return TensorTrait::<FixedType>::new(self.shape, result.span(), self.extra);
 }
diff --git a/src/operators/tensor/math/cos/cos_u32/fp8x23.cairo b/src/operators/tensor/math/cos/cos_u32/fp8x23.cairo
index 45a92bf7e..4b7f9cb79 100644
--- a/src/operators/tensor/math/cos/cos_u32/fp8x23.cairo
+++ b/src/operators/tensor/math/cos/cos_u32/fp8x23.cairo
@@ -11,19 +11,19 @@ use orion::numbers::fixed_point::implementations::fp8x23::core::FP8x23Impl;
 
 
 /// Cf: TensorTrait::cos docstring
-fn cos(self: @Tensor<u32>) -> Tensor<FixedType> {
+fn cos(mut self: Tensor<u32>) -> Tensor<FixedType> {
     let mut result = ArrayTrait::new();
-    let mut data = *self.data;
 
     loop {
-        let ele = FixedTrait::new_unscaled((*data.pop_front().unwrap()), false);
-
-        result.append(FixedTrait::cos(ele));
-
-        if (data.len() == 0) {
-            break ();
+        match self.data.pop_front() {
+            Option::Some(item) => {
+                result.append(FixedTrait::new_unscaled(*item, false).cos());
+            },
+            Option::None(_) => {
+                break;
+            }
         };
     };
 
-    return TensorTrait::<FixedType>::new(*self.shape, result.span(), *self.extra);
+    return TensorTrait::<FixedType>::new(self.shape, result.span(), self.extra);
 }
 
From 09458b27350010edf99dd58ac02f69f91a8e481a Mon Sep 17 00:00:00 2001
From: raphaelDkhn
Date: Wed, 23 Aug 2023 13:14:55 +0300
Subject: [PATCH 11/30] refactor cosh
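
cosh gets the same treatment. For the u32 tensors a constant `false` sign
remains correct, because unsigned elements are never negative; for i32/i8
the element's own sign flag is forwarded, as fixed in the atan commit.

A sketch of the unsigned conversion (illustrative, not part of the diff):

    use orion::numbers::fixed_point::core::{FixedTrait, FixedType};
    use orion::numbers::fixed_point::implementations::fp16x16::core::FP16x16Impl;

    // u32 elements carry no sign, so `false` (positive) is always right.
    fn to_fixed_u32(item: @u32) -> FixedType {
        FixedTrait::new_unscaled(*item, false)
    }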
---
 .../tensor/math/cosh/cosh_fp/core.cairo       |  8 +++---
 .../tensor/math/cosh/cosh_fp/fp16x16.cairo    | 19 ++++++-------
 .../tensor/math/cosh/cosh_fp/fp8x23.cairo     | 17 ++++++------
 .../tensor/math/cosh/cosh_i32/core.cairo      |  8 +++---
 .../tensor/math/cosh/cosh_i32/fp16x16.cairo   | 26 +++++++-----------
 .../tensor/math/cosh/cosh_i32/fp8x23.cairo    | 27 ++++++++----------
 .../tensor/math/cosh/cosh_i8/core.cairo       |  8 +++---
 .../tensor/math/cosh/cosh_i8/fp16x16.cairo    | 24 +++++++----------
 .../tensor/math/cosh/cosh_i8/fp8x23.cairo     | 25 +++++++----------
 .../tensor/math/cosh/cosh_u32/core.cairo      |  8 +++---
 .../tensor/math/cosh/cosh_u32/fp16x16.cairo   | 18 ++++++-------
 .../tensor/math/cosh/cosh_u32/fp8x23.cairo    | 19 ++++++-------
 12 files changed, 94 insertions(+), 113 deletions(-)

diff --git a/src/operators/tensor/math/cosh/cosh_fp/core.cairo b/src/operators/tensor/math/cosh/cosh_fp/core.cairo
index 5d6f922d3..0ff8378bc 100644
--- a/src/operators/tensor/math/cosh/cosh_fp/core.cairo
+++ b/src/operators/tensor/math/cosh/cosh_fp/core.cairo
@@ -8,11 +8,11 @@ fn cosh(self: @Tensor<FixedType>) -> Option<Tensor<FixedType>> {
     match *self.extra {
         Option::Some(extra_params) => match extra_params.fixed_point {
             Option::Some(fixed_point) => match fixed_point {
-                FixedImpl::FP8x23(()) => Option::Some(fp8x23::cosh(self)),
-                FixedImpl::FP16x16(()) => Option::Some(fp16x16::cosh(self)),
+                FixedImpl::FP8x23(()) => Option::Some(fp8x23::cosh(*self)),
+                FixedImpl::FP16x16(()) => Option::Some(fp16x16::cosh(*self)),
             },
-            Option::None(_) => Option::Some(fp16x16::cosh(self)),
+            Option::None(_) => Option::Some(fp16x16::cosh(*self)),
         },
-        Option::None(_) => Option::Some(fp16x16::cosh(self)),
+        Option::None(_) => Option::Some(fp16x16::cosh(*self)),
     }
 }
diff --git a/src/operators/tensor/math/cosh/cosh_fp/fp16x16.cairo b/src/operators/tensor/math/cosh/cosh_fp/fp16x16.cairo
index c90337cfc..17d6a4bb5 100644
--- a/src/operators/tensor/math/cosh/cosh_fp/fp16x16.cairo
+++ b/src/operators/tensor/math/cosh/cosh_fp/fp16x16.cairo
@@ -9,18 +9,19 @@ use orion::numbers::fixed_point::implementations::fp16x16::core::FP16x16Impl;
 
 
 /// Cf: TensorTrait::cosh docstring
-fn cosh(self: @Tensor<FixedType>) -> Tensor<FixedType> {
+fn cosh(mut self: Tensor<FixedType>) -> Tensor<FixedType> {
     let mut result = ArrayTrait::new();
-    let mut data = *self.data;
 
     loop {
-        let ele = *data.pop_front().unwrap();
-        result.append(FixedTrait::cosh(ele));
-
-        if (data.len() == 0) {
-            break ();
+        match self.data.pop_front() {
+            Option::Some(item) => {
+                result.append((*item).cosh());
+            },
+            Option::None(_) => {
+                break;
+            }
         };
     };
 
-    return TensorTrait::<FixedType>::new(*self.shape, result.span(), *self.extra);
-}
+    return TensorTrait::<FixedType>::new(self.shape, result.span(), self.extra);
+}
\ No newline at end of file
diff --git a/src/operators/tensor/math/cosh/cosh_fp/fp8x23.cairo b/src/operators/tensor/math/cosh/cosh_fp/fp8x23.cairo
index 4163dcd3e..d8ec5ad1d 100644
--- a/src/operators/tensor/math/cosh/cosh_fp/fp8x23.cairo
+++ b/src/operators/tensor/math/cosh/cosh_fp/fp8x23.cairo
@@ -9,18 +9,19 @@ use orion::numbers::fixed_point::implementations::fp8x23::core::FP8x23Impl;
 
 
 /// Cf: TensorTrait::cosh docstring
-fn cosh(self: @Tensor<FixedType>) -> Tensor<FixedType> {
+fn cosh(mut self: Tensor<FixedType>) -> Tensor<FixedType> {
     let mut result = ArrayTrait::new();
-    let mut data = *self.data;
 
     loop {
-        let ele = *data.pop_front().unwrap();
-        result.append(FixedTrait::cosh(ele));
-
-        if (data.len() == 0) {
-            break ();
+        match self.data.pop_front() {
+            Option::Some(item) => {
+                result.append((*item).cosh());
+            },
+            Option::None(_) => {
+                break;
+            }
         };
     };
 
-    return TensorTrait::<FixedType>::new(*self.shape, result.span(), *self.extra);
+    return TensorTrait::<FixedType>::new(self.shape, result.span(), self.extra);
 }
diff --git a/src/operators/tensor/math/cosh/cosh_i32/core.cairo b/src/operators/tensor/math/cosh/cosh_i32/core.cairo
index 70367409a..ed557dfdf 100644
--- a/src/operators/tensor/math/cosh/cosh_i32/core.cairo
+++ b/src/operators/tensor/math/cosh/cosh_i32/core.cairo
@@ -9,11 +9,11 @@ fn cosh_i32(self: @Tensor<i32>) -> Option<Tensor<FixedType>> {
     match *self.extra {
         Option::Some(extra_params) => match extra_params.fixed_point {
             Option::Some(fixed_point) => match fixed_point {
-                FixedImpl::FP8x23(()) => Option::Some(fp8x23::cosh(self)),
-                FixedImpl::FP16x16(()) => Option::Some(fp16x16::cosh(self)),
+                FixedImpl::FP8x23(()) => Option::Some(fp8x23::cosh(*self)),
+                FixedImpl::FP16x16(()) => Option::Some(fp16x16::cosh(*self)),
             },
-            Option::None(_) => Option::Some(fp16x16::cosh(self)),
+            Option::None(_) => Option::Some(fp16x16::cosh(*self)),
         },
-        Option::None(_) => Option::Some(fp16x16::cosh(self)),
+        Option::None(_) => Option::Some(fp16x16::cosh(*self)),
     }
 }
diff --git a/src/operators/tensor/math/cosh/cosh_i32/fp16x16.cairo b/src/operators/tensor/math/cosh/cosh_i32/fp16x16.cairo
index e9a2408c6..d4f381d0b 100644
--- a/src/operators/tensor/math/cosh/cosh_i32/fp16x16.cairo
+++ b/src/operators/tensor/math/cosh/cosh_i32/fp16x16.cairo
@@ -11,25 +11,19 @@ use orion::numbers::fixed_point::implementations::fp16x16::core::FP16x16Impl;
 
 
 /// Cf: TensorTrait::cosh docstring
-fn cosh(self: @Tensor<i32>) -> Tensor<FixedType> {
-    let mut result = ArrayTrait::new();
-    let mut data = *self.data;
+fn cosh(mut self: Tensor<i32>) -> Tensor<FixedType> {
+    let mut result = ArrayTrait::new();
 
     loop {
-        let ele = *data.pop_front().unwrap();
-
-        if ele.sign == true {
-            let ele = FixedTrait::new_unscaled(ele.mag.into(), ele.sign);
-            result.append(FixedTrait::cosh(ele))
-        } else {
-            let ele = FixedTrait::new_unscaled(ele.mag.into(), ele.sign);
-            result.append(FixedTrait::cosh(ele))
-        }
-
-        if (data.len() == 0) {
-            break ();
+        match self.data.pop_front() {
+            Option::Some(item) => {
+                result.append(FixedTrait::new_unscaled(*item.mag, *item.sign).cosh());
+            },
+            Option::None(_) => {
+                break;
+            }
         };
     };
 
-    return TensorTrait::<FixedType>::new(*self.shape, result.span(), *self.extra);
+    return TensorTrait::<FixedType>::new(self.shape, result.span(), self.extra);
 }
diff --git a/src/operators/tensor/math/cosh/cosh_i32/fp8x23.cairo b/src/operators/tensor/math/cosh/cosh_i32/fp8x23.cairo
index 1e8ddff53..395214d5f 100644
--- a/src/operators/tensor/math/cosh/cosh_i32/fp8x23.cairo
+++ b/src/operators/tensor/math/cosh/cosh_i32/fp8x23.cairo
@@ -11,25 +11,20 @@ use orion::numbers::fixed_point::implementations::fp8x23::core::FP8x23Impl;
 
 
 /// Cf: TensorTrait::cosh docstring
-fn cosh(self: @Tensor<i32>) -> Tensor<FixedType> {
-    let mut result = ArrayTrait::new();
-    let mut data = *self.data;
+fn cosh(mut self: Tensor<i32>) -> Tensor<FixedType> {
+    let mut result = ArrayTrait::new();
 
     loop {
-        let ele = *data.pop_front().unwrap();
-
-        if ele.sign == true {
-            let ele = FixedTrait::new_unscaled(ele.mag.into(), ele.sign);
-            result.append(FixedTrait::cosh(ele))
-        } else {
-            let ele = FixedTrait::new_unscaled(ele.mag.into(), ele.sign);
-            result.append(FixedTrait::cosh(ele))
-        }
-
-        if (data.len() == 0) {
-            break ();
+        match self.data.pop_front() {
+            Option::Some(item) => {
+                result.append(FixedTrait::new_unscaled(*item.mag, *item.sign).cosh());
+            },
+            Option::None(_) => {
+                break;
+            }
         };
     };
 
-    return TensorTrait::<FixedType>::new(*self.shape, result.span(), *self.extra);
+    return TensorTrait::<FixedType>::new(self.shape, result.span(), self.extra);
 }
+
diff --git a/src/operators/tensor/math/cosh/cosh_i8/core.cairo b/src/operators/tensor/math/cosh/cosh_i8/core.cairo
index 2bc840215..cf1b18d50 100644
--- a/src/operators/tensor/math/cosh/cosh_i8/core.cairo
+++ b/src/operators/tensor/math/cosh/cosh_i8/core.cairo
@@ -9,11 +9,11 @@ fn cosh_i8(self: @Tensor<i8>) -> Option<Tensor<FixedType>> {
     match *self.extra {
         Option::Some(extra_params) => match extra_params.fixed_point {
             Option::Some(fixed_point) => match fixed_point {
-                FixedImpl::FP8x23(()) => Option::Some(fp8x23::cosh(self)),
-                FixedImpl::FP16x16(()) => Option::Some(fp16x16::cosh(self)),
+                FixedImpl::FP8x23(()) => Option::Some(fp8x23::cosh(*self)),
+                FixedImpl::FP16x16(()) => Option::Some(fp16x16::cosh(*self)),
             },
-            Option::None(_) => Option::Some(fp16x16::cosh(self)),
+            Option::None(_) => Option::Some(fp16x16::cosh(*self)),
         },
-        Option::None(_) => Option::Some(fp16x16::cosh(self)),
+        Option::None(_) => Option::Some(fp16x16::cosh(*self)),
     }
 }
diff --git a/src/operators/tensor/math/cosh/cosh_i8/fp16x16.cairo b/src/operators/tensor/math/cosh/cosh_i8/fp16x16.cairo
index ab9bdfcc7..7ab17385d 100644
--- a/src/operators/tensor/math/cosh/cosh_i8/fp16x16.cairo
+++ b/src/operators/tensor/math/cosh/cosh_i8/fp16x16.cairo
@@ -11,25 +11,19 @@ use orion::numbers::fixed_point::implementations::fp16x16::core::FP16x16Impl;
 
 
 /// Cf: TensorTrait::cosh docstring
-fn cosh(self: @Tensor<i8>) -> Tensor<FixedType> {
+fn cosh(mut self: Tensor<i8>) -> Tensor<FixedType> {
     let mut result = ArrayTrait::new();
-    let mut data = *self.data;
 
     loop {
-        let ele = *data.pop_front().unwrap();
-
-        if ele.sign == true {
-            let ele = FixedTrait::new_unscaled(ele.mag.into(), ele.sign);
-            result.append(FixedTrait::cosh(ele))
-        } else {
-            let ele = FixedTrait::new_unscaled(ele.mag.into(), ele.sign);
-            result.append(FixedTrait::cosh(ele))
-        }
-
-        if (data.len() == 0) {
-            break ();
+        match self.data.pop_front() {
+            Option::Some(item) => {
+                result.append(FixedTrait::new_unscaled((*item.mag).into(), *item.sign).cosh());
+            },
+            Option::None(_) => {
+                break;
+            }
         };
     };
 
-    return TensorTrait::<FixedType>::new(*self.shape, result.span(), *self.extra);
+    return TensorTrait::<FixedType>::new(self.shape, result.span(), self.extra);
 }
diff --git a/src/operators/tensor/math/cosh/cosh_i8/fp8x23.cairo b/src/operators/tensor/math/cosh/cosh_i8/fp8x23.cairo
index e6edbd0f6..61fdbdee2 100644
--- a/src/operators/tensor/math/cosh/cosh_i8/fp8x23.cairo
+++ b/src/operators/tensor/math/cosh/cosh_i8/fp8x23.cairo
@@ -11,25 +11,20 @@ use orion::numbers::fixed_point::implementations::fp8x23::core::FP8x23Impl;
 
 
 /// Cf: TensorTrait::cosh docstring
-fn cosh(self: @Tensor<i8>) -> Tensor<FixedType> {
+fn cosh(mut self: Tensor<i8>) -> Tensor<FixedType> {
     let mut result = ArrayTrait::new();
-    let mut data = *self.data;
 
     loop {
-        let ele = *data.pop_front().unwrap();
-
-        if ele.sign == true {
-            let ele = FixedTrait::new_unscaled(ele.mag.into(), ele.sign);
-            result.append(FixedTrait::cosh(ele))
-        } else {
-            let ele = FixedTrait::new_unscaled(ele.mag.into(), ele.sign);
-            result.append(FixedTrait::cosh(ele))
-        }
-
-        if (data.len() == 0) {
-            break ();
+        match self.data.pop_front() {
+            Option::Some(item) => {
+                result.append(FixedTrait::new_unscaled((*item.mag).into(), *item.sign).cosh());
+            },
+            Option::None(_) => {
+                break;
+            }
         };
     };
 
-    return TensorTrait::<FixedType>::new(*self.shape, result.span(), *self.extra);
+    return TensorTrait::<FixedType>::new(self.shape, result.span(), self.extra);
 }
+
diff --git a/src/operators/tensor/math/cosh/cosh_u32/core.cairo b/src/operators/tensor/math/cosh/cosh_u32/core.cairo
index e1b67beea..8017aa912 100644
--- a/src/operators/tensor/math/cosh/cosh_u32/core.cairo
+++ b/src/operators/tensor/math/cosh/cosh_u32/core.cairo
@@ -9,11 +9,11 @@ fn cosh_u32(self: @Tensor<u32>) -> Option<Tensor<FixedType>> {
     match *self.extra {
         Option::Some(extra_params) => match extra_params.fixed_point {
             Option::Some(fixed_point) => match fixed_point {
-                FixedImpl::FP8x23(()) => Option::Some(fp8x23::cosh(self)),
-                FixedImpl::FP16x16(()) => Option::Some(fp16x16::cosh(self)),
+                FixedImpl::FP8x23(()) => Option::Some(fp8x23::cosh(*self)),
+                FixedImpl::FP16x16(()) => Option::Some(fp16x16::cosh(*self)),
             },
-            Option::None(_) => Option::Some(fp16x16::cosh(self)),
+            Option::None(_) => Option::Some(fp16x16::cosh(*self)),
         },
-        Option::None(_) => Option::Some(fp16x16::cosh(self)),
+        Option::None(_) => Option::Some(fp16x16::cosh(*self)),
     }
 }
diff --git a/src/operators/tensor/math/cosh/cosh_u32/fp16x16.cairo b/src/operators/tensor/math/cosh/cosh_u32/fp16x16.cairo
index ff624ac32..c360a9d7b 100644
--- a/src/operators/tensor/math/cosh/cosh_u32/fp16x16.cairo
+++ b/src/operators/tensor/math/cosh/cosh_u32/fp16x16.cairo
@@ -11,19 +11,19 @@ use orion::numbers::fixed_point::implementations::fp16x16::core::FP16x16Impl;
 
 
 /// Cf: TensorTrait::cosh docstring
-fn cosh(self: @Tensor<u32>) -> Tensor<FixedType> {
+fn cosh(mut self: Tensor<u32>) -> Tensor<FixedType> {
     let mut result = ArrayTrait::new();
-    let mut data = *self.data;
 
     loop {
-        let ele = FixedTrait::new_unscaled((*data.pop_front().unwrap()), false);
-
-        result.append(FixedTrait::cosh(ele));
-
-        if (data.len() == 0) {
-            break ();
+        match self.data.pop_front() {
+            Option::Some(item) => {
+                result.append(FixedTrait::new_unscaled(*item, false).cosh());
+            },
+            Option::None(_) => {
+                break;
+            }
         };
     };
 
-    return TensorTrait::<FixedType>::new(*self.shape, result.span(), *self.extra);
+    return TensorTrait::<FixedType>::new(self.shape, result.span(), self.extra);
 }
diff --git a/src/operators/tensor/math/cosh/cosh_u32/fp8x23.cairo b/src/operators/tensor/math/cosh/cosh_u32/fp8x23.cairo
index a5d49bb22..69e00eba4 100644
--- a/src/operators/tensor/math/cosh/cosh_u32/fp8x23.cairo
+++ b/src/operators/tensor/math/cosh/cosh_u32/fp8x23.cairo
@@ -11,19 +11,20 @@ use orion::numbers::fixed_point::implementations::fp8x23::core::FP8x23Impl;
 
 
 /// Cf: TensorTrait::cosh docstring
-fn cosh(self: @Tensor<u32>) -> Tensor<FixedType> {
+fn cosh(mut self: Tensor<u32>) -> Tensor<FixedType> {
     let mut result = ArrayTrait::new();
-    let mut data = *self.data;
 
     loop {
-        let ele = FixedTrait::new_unscaled((*data.pop_front().unwrap()), false);
-
-        result.append(FixedTrait::cosh(ele));
-
-        if (data.len() == 0) {
-            break ();
+        match self.data.pop_front() {
+            Option::Some(item) => {
+                result.append(FixedTrait::new_unscaled(*item, false).cosh());
+            },
+            Option::None(_) => {
+                break;
+            }
         };
     };
 
-    return TensorTrait::<FixedType>::new(*self.shape, result.span(), *self.extra);
+    return TensorTrait::<FixedType>::new(self.shape, result.span(), self.extra);
 }
+
 
From 4ed5d7c7bee33a3fc632da867c9cff0c69be5774 Mon Sep 17 00:00:00 2001
From: raphaelDkhn
Date: Wed, 23 Aug 2023 13:46:12 +0300
Subject: [PATCH 12/30] refactor exp
result.append(FixedTrait::exp(ele)); - - if (data.len() == 0) { - break (); + match self.data.pop_front() { + Option::Some(item) => { + result.append((*item).exp()); + }, + Option::None(_) => { + break; + } }; }; - return TensorTrait::::new(*self.shape, result.span(), *self.extra); + return TensorTrait::::new(self.shape, result.span(), self.extra); } diff --git a/src/operators/tensor/math/exp/exp_fp/fp8x23.cairo b/src/operators/tensor/math/exp/exp_fp/fp8x23.cairo index a31cc2ea8..6366f025f 100644 --- a/src/operators/tensor/math/exp/exp_fp/fp8x23.cairo +++ b/src/operators/tensor/math/exp/exp_fp/fp8x23.cairo @@ -9,18 +9,19 @@ use orion::numbers::fixed_point::implementations::fp8x23::core::FP8x23Impl; /// Cf: TensorTrait::exp docstring -fn exp(self: @Tensor) -> Tensor { +fn exp(mut self: Tensor) -> Tensor { let mut result = ArrayTrait::new(); - let mut data = *self.data; loop { - let ele = *data.pop_front().unwrap(); - result.append(FixedTrait::exp(ele)); - - if (data.len() == 0) { - break (); + match self.data.pop_front() { + Option::Some(item) => { + result.append((*item).exp()); + }, + Option::None(_) => { + break; + } }; }; - return TensorTrait::::new(*self.shape, result.span(), *self.extra); + return TensorTrait::::new(self.shape, result.span(), self.extra); } diff --git a/src/operators/tensor/math/exp/exp_i32/core.cairo b/src/operators/tensor/math/exp/exp_i32/core.cairo index 6e4742a14..b856f0f14 100644 --- a/src/operators/tensor/math/exp/exp_i32/core.cairo +++ b/src/operators/tensor/math/exp/exp_i32/core.cairo @@ -9,11 +9,11 @@ fn exp_i32(self: @Tensor) -> Option> { match *self.extra { Option::Some(extra_params) => match extra_params.fixed_point { Option::Some(fixed_point) => match fixed_point { - FixedImpl::FP8x23(()) => Option::Some(fp8x23::exp(self)), - FixedImpl::FP16x16(()) => Option::Some(fp16x16::exp(self)), + FixedImpl::FP8x23(()) => Option::Some(fp8x23::exp(*self)), + FixedImpl::FP16x16(()) => Option::Some(fp16x16::exp(*self)), }, - Option::None(_) => Option::Some(fp16x16::exp(self)), + Option::None(_) => Option::Some(fp16x16::exp(*self)), }, - Option::None(_) => Option::Some(fp16x16::exp(self)), + Option::None(_) => Option::Some(fp16x16::exp(*self)), } } diff --git a/src/operators/tensor/math/exp/exp_i32/fp16x16.cairo b/src/operators/tensor/math/exp/exp_i32/fp16x16.cairo index 1f9f51707..10a6dce0e 100644 --- a/src/operators/tensor/math/exp/exp_i32/fp16x16.cairo +++ b/src/operators/tensor/math/exp/exp_i32/fp16x16.cairo @@ -11,19 +11,19 @@ use orion::numbers::fixed_point::implementations::fp16x16::core::FP16x16Impl; /// Cf: TensorTrait::exp docstring -fn exp(self: @Tensor) -> Tensor { +fn exp(mut self: Tensor) -> Tensor { let mut result = ArrayTrait::new(); - let mut data = *self.data; loop { - let ele = *data.pop_front().unwrap(); - let ele = FixedTrait::new_unscaled(ele.mag, ele.sign); - result.append(FixedTrait::exp(ele)); - - if (data.len() == 0) { - break (); + match self.data.pop_front() { + Option::Some(item) => { + result.append(FixedTrait::new_unscaled(*item.mag, *item.sign).exp()); + }, + Option::None(_) => { + break; + } }; }; - return TensorTrait::::new(*self.shape, result.span(), *self.extra); + return TensorTrait::::new(self.shape, result.span(), self.extra); } diff --git a/src/operators/tensor/math/exp/exp_i32/fp8x23.cairo b/src/operators/tensor/math/exp/exp_i32/fp8x23.cairo index c05ec2530..e8bc126f1 100644 --- a/src/operators/tensor/math/exp/exp_i32/fp8x23.cairo +++ b/src/operators/tensor/math/exp/exp_i32/fp8x23.cairo @@ -11,19 +11,19 @@ use 
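// The `exp_*` core files only dispatch: they match on the tensor's optional
// `extra.fixed_point` setting and fall back to FP16x16 on both `None` arms.
// Because the workers now take the tensor by value, the snapshot is
// dereferenced with `*self` at every call site. Sketch of the dispatch
// (name illustrative; `Tensor<i32>` input assumed):
fn exp_i32_dispatch_sketch(self: @Tensor<i32>) -> Option<Tensor<FixedType>> {
    match *self.extra {
        Option::Some(extra_params) => match extra_params.fixed_point {
            Option::Some(fixed_point) => match fixed_point {
                FixedImpl::FP8x23(()) => Option::Some(fp8x23::exp(*self)),
                FixedImpl::FP16x16(()) => Option::Some(fp16x16::exp(*self)),
            },
            Option::None(_) => Option::Some(fp16x16::exp(*self)),
        },
        Option::None(_) => Option::Some(fp16x16::exp(*self)),
    }
}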
orion::numbers::fixed_point::implementations::fp8x23::core::FP8x23Impl; /// Cf: TensorTrait::exp docstring -fn exp(self: @Tensor) -> Tensor { +fn exp(mut self: Tensor) -> Tensor { let mut result = ArrayTrait::new(); - let mut data = *self.data; loop { - let ele = *data.pop_front().unwrap(); - let ele = FixedTrait::new_unscaled(ele.mag, ele.sign); - result.append(FixedTrait::exp(ele)); - - if (data.len() == 0) { - break (); + match self.data.pop_front() { + Option::Some(item) => { + result.append(FixedTrait::new_unscaled(*item.mag, *item.sign).exp()); + }, + Option::None(_) => { + break; + } }; }; - return TensorTrait::::new(*self.shape, result.span(), *self.extra); + return TensorTrait::::new(self.shape, result.span(), self.extra); } diff --git a/src/operators/tensor/math/exp/exp_i8/core.cairo b/src/operators/tensor/math/exp/exp_i8/core.cairo index bbfb00d7e..d8b2a129b 100644 --- a/src/operators/tensor/math/exp/exp_i8/core.cairo +++ b/src/operators/tensor/math/exp/exp_i8/core.cairo @@ -9,11 +9,11 @@ fn exp_i8(self: @Tensor) -> Option> { match *self.extra { Option::Some(extra_params) => match extra_params.fixed_point { Option::Some(fixed_point) => match fixed_point { - FixedImpl::FP8x23(()) => Option::Some(fp8x23::exp(self)), - FixedImpl::FP16x16(()) => Option::Some(fp16x16::exp(self)), + FixedImpl::FP8x23(()) => Option::Some(fp8x23::exp(*self)), + FixedImpl::FP16x16(()) => Option::Some(fp16x16::exp(*self)), }, - Option::None(_) => Option::Some(fp16x16::exp(self)), + Option::None(_) => Option::Some(fp16x16::exp(*self)), }, - Option::None(_) => Option::Some(fp16x16::exp(self)), + Option::None(_) => Option::Some(fp16x16::exp(*self)), } } diff --git a/src/operators/tensor/math/exp/exp_i8/fp16x16.cairo b/src/operators/tensor/math/exp/exp_i8/fp16x16.cairo index 0f21f9ae3..bcb904813 100644 --- a/src/operators/tensor/math/exp/exp_i8/fp16x16.cairo +++ b/src/operators/tensor/math/exp/exp_i8/fp16x16.cairo @@ -11,19 +11,19 @@ use orion::numbers::fixed_point::implementations::fp16x16::core::FP16x16Impl; /// Cf: TensorTrait::exp docstring -fn exp(self: @Tensor) -> Tensor { +fn exp(mut self: Tensor) -> Tensor { let mut result = ArrayTrait::new(); - let mut data = *self.data; loop { - let ele = *data.pop_front().unwrap(); - let ele = FixedTrait::new_unscaled(ele.mag.into(), ele.sign); - result.append(FixedTrait::exp(ele)); - - if (data.len() == 0) { - break (); + match self.data.pop_front() { + Option::Some(item) => { + result.append(FixedTrait::new_unscaled((*item.mag).into(), *item.sign).exp()); + }, + Option::None(_) => { + break; + } }; }; - return TensorTrait::::new(*self.shape, result.span(), *self.extra); + return TensorTrait::::new(self.shape, result.span(), self.extra); } diff --git a/src/operators/tensor/math/exp/exp_i8/fp8x23.cairo b/src/operators/tensor/math/exp/exp_i8/fp8x23.cairo index 8486e745f..c0cf4588d 100644 --- a/src/operators/tensor/math/exp/exp_i8/fp8x23.cairo +++ b/src/operators/tensor/math/exp/exp_i8/fp8x23.cairo @@ -11,19 +11,19 @@ use orion::numbers::fixed_point::implementations::fp8x23::core::FP8x23Impl; /// Cf: TensorTrait::exp docstring -fn exp(self: @Tensor) -> Tensor { +fn exp(mut self: Tensor) -> Tensor { let mut result = ArrayTrait::new(); - let mut data = *self.data; loop { - let ele = *data.pop_front().unwrap(); - let ele = FixedTrait::new_unscaled(ele.mag.into(), ele.sign); - result.append(FixedTrait::exp(ele)); - - if (data.len() == 0) { - break (); + match self.data.pop_front() { + Option::Some(item) => { + result.append(FixedTrait::new_unscaled((*item.mag).into(), 
*item.sign).exp()); + }, + Option::None(_) => { + break; + } }; }; - return TensorTrait::::new(*self.shape, result.span(), *self.extra); + return TensorTrait::::new(self.shape, result.span(), self.extra); } diff --git a/src/operators/tensor/math/exp/exp_u32/core.cairo b/src/operators/tensor/math/exp/exp_u32/core.cairo index c84baacde..9761775d0 100644 --- a/src/operators/tensor/math/exp/exp_u32/core.cairo +++ b/src/operators/tensor/math/exp/exp_u32/core.cairo @@ -9,11 +9,11 @@ fn exp_u32(self: @Tensor) -> Option> { match *self.extra { Option::Some(extra_params) => match extra_params.fixed_point { Option::Some(fixed_point) => match fixed_point { - FixedImpl::FP8x23(()) => Option::Some(fp8x23::exp(self)), - FixedImpl::FP16x16(()) => Option::Some(fp16x16::exp(self)), + FixedImpl::FP8x23(()) => Option::Some(fp8x23::exp(*self)), + FixedImpl::FP16x16(()) => Option::Some(fp16x16::exp(*self)), }, - Option::None(_) => Option::Some(fp16x16::exp(self)), + Option::None(_) => Option::Some(fp16x16::exp(*self)), }, - Option::None(_) => Option::Some(fp16x16::exp(self)), + Option::None(_) => Option::Some(fp16x16::exp(*self)), } } diff --git a/src/operators/tensor/math/exp/exp_u32/fp16x16.cairo b/src/operators/tensor/math/exp/exp_u32/fp16x16.cairo index 82052e16d..ec5c87381 100644 --- a/src/operators/tensor/math/exp/exp_u32/fp16x16.cairo +++ b/src/operators/tensor/math/exp/exp_u32/fp16x16.cairo @@ -11,19 +11,19 @@ use orion::numbers::fixed_point::implementations::fp16x16::core::FP16x16Impl; /// Cf: TensorTrait::exp docstring -fn exp(self: @Tensor) -> Tensor { +fn exp(mut self: Tensor) -> Tensor { let mut result = ArrayTrait::new(); - let mut data = *self.data; loop { - let ele = *data.pop_front().unwrap(); - let ele = FixedTrait::new_unscaled(ele, false); - result.append(FixedTrait::exp(ele)); - - if (data.len() == 0) { - break (); + match self.data.pop_front() { + Option::Some(item) => { + result.append(FixedTrait::new_unscaled(*item, false).exp()); + }, + Option::None(_) => { + break; + } }; }; - return TensorTrait::::new(*self.shape, result.span(), *self.extra); + return TensorTrait::::new(self.shape, result.span(), self.extra); } diff --git a/src/operators/tensor/math/exp/exp_u32/fp8x23.cairo b/src/operators/tensor/math/exp/exp_u32/fp8x23.cairo index 431ab10ef..ab9d8cd68 100644 --- a/src/operators/tensor/math/exp/exp_u32/fp8x23.cairo +++ b/src/operators/tensor/math/exp/exp_u32/fp8x23.cairo @@ -11,19 +11,19 @@ use orion::numbers::fixed_point::implementations::fp8x23::core::FP8x23Impl; /// Cf: TensorTrait::exp docstring -fn exp(self: @Tensor) -> Tensor { +fn exp(mut self: Tensor) -> Tensor { let mut result = ArrayTrait::new(); - let mut data = *self.data; loop { - let ele = *data.pop_front().unwrap(); - let ele = FixedTrait::new_unscaled(ele, false); - result.append(FixedTrait::exp(ele)); - - if (data.len() == 0) { - break (); + match self.data.pop_front() { + Option::Some(item) => { + result.append(FixedTrait::new_unscaled(*item, false).exp()); + }, + Option::None(_) => { + break; + } }; }; - return TensorTrait::::new(*self.shape, result.span(), *self.extra); + return TensorTrait::::new(self.shape, result.span(), self.extra); } From 66a6af4205ade2ddd59d95cca1d1eb0d7d851768 Mon Sep 17 00:00:00 2001 From: raphaelDkhn Date: Thu, 24 Aug 2023 09:06:53 +0300 Subject: [PATCH 13/30] refactor log --- .../tensor/math/log/log_fp/core.cairo | 8 ++++---- .../tensor/math/log/log_fp/fp16x16.cairo | 18 +++++++++--------- .../tensor/math/log/log_fp/fp8x23.cairo | 17 +++++++++-------- 
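// In the i8 variants the magnitude is widened before the fixed-point value is
// built: `new_unscaled` expects a wider magnitude than `i8.mag` carries,
// hence the `(*item.mag).into()` calls above (u8 -> u32 widening assumed).
// Element-level sketch (name illustrative):
fn i8_elem_exp_sketch(item: @i8) -> FixedType {
    // widen the magnitude, keep the sign bit, then apply the op
    FixedTrait::new_unscaled((*item.mag).into(), *item.sign).exp()
}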
.../tensor/math/log/log_i32/core.cairo | 8 ++++---- .../tensor/math/log/log_i32/fp16x16.cairo | 18 +++++++++--------- .../tensor/math/log/log_i32/fp8x23.cairo | 18 +++++++++--------- .../tensor/math/log/log_i8/core.cairo | 8 ++++---- .../tensor/math/log/log_i8/fp16x16.cairo | 18 +++++++++--------- .../tensor/math/log/log_i8/fp8x23.cairo | 19 ++++++++++--------- .../tensor/math/log/log_u32/core.cairo | 8 ++++---- .../tensor/math/log/log_u32/fp16x16.cairo | 18 +++++++++--------- .../tensor/math/log/log_u32/fp8x23.cairo | 18 +++++++++--------- 12 files changed, 89 insertions(+), 87 deletions(-) diff --git a/src/operators/tensor/math/log/log_fp/core.cairo b/src/operators/tensor/math/log/log_fp/core.cairo index f6094f957..f92f64827 100644 --- a/src/operators/tensor/math/log/log_fp/core.cairo +++ b/src/operators/tensor/math/log/log_fp/core.cairo @@ -8,11 +8,11 @@ fn log(self: @Tensor) -> Option> { match *self.extra { Option::Some(extra_params) => match extra_params.fixed_point { Option::Some(fixed_point) => match fixed_point { - FixedImpl::FP8x23(()) => Option::Some(fp8x23::log(self)), - FixedImpl::FP16x16(()) => Option::Some(fp16x16::log(self)), + FixedImpl::FP8x23(()) => Option::Some(fp8x23::log(*self)), + FixedImpl::FP16x16(()) => Option::Some(fp16x16::log(*self)), }, - Option::None(_) => Option::Some(fp16x16::log(self)), + Option::None(_) => Option::Some(fp16x16::log(*self)), }, - Option::None(_) => Option::Some(fp16x16::log(self)), + Option::None(_) => Option::Some(fp16x16::log(*self)), } } diff --git a/src/operators/tensor/math/log/log_fp/fp16x16.cairo b/src/operators/tensor/math/log/log_fp/fp16x16.cairo index eec775818..3c88dc152 100644 --- a/src/operators/tensor/math/log/log_fp/fp16x16.cairo +++ b/src/operators/tensor/math/log/log_fp/fp16x16.cairo @@ -7,20 +7,20 @@ use orion::operators::tensor::core::{Tensor, TensorTrait}; use orion::operators::tensor::implementations::impl_tensor_fp::Tensor_fp; use orion::numbers::fixed_point::implementations::fp16x16::core::FP16x16Impl; - /// Cf: TensorTrait::log docstring -fn log(self: @Tensor) -> Tensor { +fn log(mut self: Tensor) -> Tensor { let mut result = ArrayTrait::new(); - let mut data = *self.data; loop { - let ele = *data.pop_front().unwrap(); - result.append(FixedTrait::ln(ele)); - - if (data.len() == 0) { - break (); + match self.data.pop_front() { + Option::Some(item) => { + result.append((*item).ln()); + }, + Option::None(_) => { + break; + } }; }; - return TensorTrait::::new(*self.shape, result.span(), *self.extra); + return TensorTrait::::new(self.shape, result.span(), self.extra); } diff --git a/src/operators/tensor/math/log/log_fp/fp8x23.cairo b/src/operators/tensor/math/log/log_fp/fp8x23.cairo index 946bf9b89..96eb88d0b 100644 --- a/src/operators/tensor/math/log/log_fp/fp8x23.cairo +++ b/src/operators/tensor/math/log/log_fp/fp8x23.cairo @@ -9,18 +9,19 @@ use orion::numbers::fixed_point::implementations::fp8x23::core::FP8x23Impl; /// Cf: TensorTrait::log docstring -fn log(self: @Tensor) -> Tensor { +fn log(mut self: Tensor) -> Tensor { let mut result = ArrayTrait::new(); - let mut data = *self.data; loop { - let ele = *data.pop_front().unwrap(); - result.append(FixedTrait::ln(ele)); - - if (data.len() == 0) { - break (); + match self.data.pop_front() { + Option::Some(item) => { + result.append((*item).ln()); + }, + Option::None(_) => { + break; + } }; }; - return TensorTrait::::new(*self.shape, result.span(), *self.extra); + return TensorTrait::::new(self.shape, result.span(), self.extra); } diff --git 
a/src/operators/tensor/math/log/log_i32/core.cairo b/src/operators/tensor/math/log/log_i32/core.cairo index 55c1be66e..aadd63f47 100644 --- a/src/operators/tensor/math/log/log_i32/core.cairo +++ b/src/operators/tensor/math/log/log_i32/core.cairo @@ -9,11 +9,11 @@ fn log_i32(self: @Tensor) -> Option> { match *self.extra { Option::Some(extra_params) => match extra_params.fixed_point { Option::Some(fixed_point) => match fixed_point { - FixedImpl::FP8x23(()) => Option::Some(fp8x23::log(self)), - FixedImpl::FP16x16(()) => Option::Some(fp16x16::log(self)), + FixedImpl::FP8x23(()) => Option::Some(fp8x23::log(*self)), + FixedImpl::FP16x16(()) => Option::Some(fp16x16::log(*self)), }, - Option::None(_) => Option::Some(fp16x16::log(self)), + Option::None(_) => Option::Some(fp16x16::log(*self)), }, - Option::None(_) => Option::Some(fp16x16::log(self)), + Option::None(_) => Option::Some(fp16x16::log(*self)), } } diff --git a/src/operators/tensor/math/log/log_i32/fp16x16.cairo b/src/operators/tensor/math/log/log_i32/fp16x16.cairo index d07c9288c..de356ccef 100644 --- a/src/operators/tensor/math/log/log_i32/fp16x16.cairo +++ b/src/operators/tensor/math/log/log_i32/fp16x16.cairo @@ -11,19 +11,19 @@ use orion::numbers::fixed_point::implementations::fp16x16::core::FP16x16Impl; /// Cf: TensorTrait::log docstring -fn log(self: @Tensor) -> Tensor { +fn log(mut self: Tensor) -> Tensor { let mut result = ArrayTrait::new(); - let mut data = *self.data; loop { - let ele = *data.pop_front().unwrap(); - let ele = FixedTrait::new_unscaled(ele.mag, ele.sign); - result.append(FixedTrait::ln(ele)); - - if (data.len() == 0) { - break (); + match self.data.pop_front() { + Option::Some(item) => { + result.append(FixedTrait::new_unscaled(*item.mag, *item.sign).ln()); + }, + Option::None(_) => { + break; + } }; }; - return TensorTrait::::new(*self.shape, result.span(), *self.extra); + return TensorTrait::::new(self.shape, result.span(), self.extra); } diff --git a/src/operators/tensor/math/log/log_i32/fp8x23.cairo b/src/operators/tensor/math/log/log_i32/fp8x23.cairo index 33dbe049c..a761b48de 100644 --- a/src/operators/tensor/math/log/log_i32/fp8x23.cairo +++ b/src/operators/tensor/math/log/log_i32/fp8x23.cairo @@ -11,19 +11,19 @@ use orion::numbers::fixed_point::implementations::fp8x23::core::FP8x23Impl; /// Cf: TensorTrait::log docstring -fn log(self: @Tensor) -> Tensor { +fn log(mut self: Tensor) -> Tensor { let mut result = ArrayTrait::new(); - let mut data = *self.data; loop { - let ele = *data.pop_front().unwrap(); - let ele = FixedTrait::new_unscaled(ele.mag, ele.sign); - result.append(FixedTrait::ln(ele)); - - if (data.len() == 0) { - break (); + match self.data.pop_front() { + Option::Some(item) => { + result.append(FixedTrait::new_unscaled(*item.mag, *item.sign).ln()); + }, + Option::None(_) => { + break; + } }; }; - return TensorTrait::::new(*self.shape, result.span(), *self.extra); + return TensorTrait::::new(self.shape, result.span(), self.extra); } diff --git a/src/operators/tensor/math/log/log_i8/core.cairo b/src/operators/tensor/math/log/log_i8/core.cairo index f4441fd9b..0b63f263f 100644 --- a/src/operators/tensor/math/log/log_i8/core.cairo +++ b/src/operators/tensor/math/log/log_i8/core.cairo @@ -9,11 +9,11 @@ fn log_i8(self: @Tensor) -> Option> { match *self.extra { Option::Some(extra_params) => match extra_params.fixed_point { Option::Some(fixed_point) => match fixed_point { - FixedImpl::FP8x23(()) => Option::Some(fp8x23::log(self)), - FixedImpl::FP16x16(()) => Option::Some(fp16x16::log(self)), + 
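// `TensorTrait::log` is the natural logarithm: each worker maps elements
// through `FixedTrait::ln`, lifting integer elements to fixed point first.
// Element-level sketch for the i32 case (name illustrative):
fn log_elem_sketch(item: @i32) -> FixedType {
    FixedTrait::new_unscaled(*item.mag, *item.sign).ln()
}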
FixedImpl::FP8x23(()) => Option::Some(fp8x23::log(*self)), + FixedImpl::FP16x16(()) => Option::Some(fp16x16::log(*self)), }, - Option::None(_) => Option::Some(fp16x16::log(self)), + Option::None(_) => Option::Some(fp16x16::log(*self)), }, - Option::None(_) => Option::Some(fp16x16::log(self)), + Option::None(_) => Option::Some(fp16x16::log(*self)), } } diff --git a/src/operators/tensor/math/log/log_i8/fp16x16.cairo b/src/operators/tensor/math/log/log_i8/fp16x16.cairo index 70ef8d02a..f3a06364b 100644 --- a/src/operators/tensor/math/log/log_i8/fp16x16.cairo +++ b/src/operators/tensor/math/log/log_i8/fp16x16.cairo @@ -11,19 +11,19 @@ use orion::numbers::fixed_point::implementations::fp16x16::core::FP16x16Impl; /// Cf: TensorTrait::log docstring -fn log(self: @Tensor) -> Tensor { +fn log(mut self: Tensor) -> Tensor { let mut result = ArrayTrait::new(); - let mut data = *self.data; loop { - let ele = *data.pop_front().unwrap(); - let ele = FixedTrait::new_unscaled(ele.mag.into(), ele.sign); - result.append(FixedTrait::ln(ele)); - - if (data.len() == 0) { - break (); + match self.data.pop_front() { + Option::Some(item) => { + result.append(FixedTrait::new_unscaled((*item.mag).into(), *item.sign).ln()); + }, + Option::None(_) => { + break; + } }; }; - return TensorTrait::::new(*self.shape, result.span(), *self.extra); + return TensorTrait::::new(self.shape, result.span(), self.extra); } diff --git a/src/operators/tensor/math/log/log_i8/fp8x23.cairo b/src/operators/tensor/math/log/log_i8/fp8x23.cairo index e8a428bb6..14cbe2349 100644 --- a/src/operators/tensor/math/log/log_i8/fp8x23.cairo +++ b/src/operators/tensor/math/log/log_i8/fp8x23.cairo @@ -11,19 +11,20 @@ use orion::numbers::fixed_point::implementations::fp8x23::core::FP8x23Impl; /// Cf: TensorTrait::log docstring -fn log(self: @Tensor) -> Tensor { +fn log(mut self: Tensor) -> Tensor { let mut result = ArrayTrait::new(); - let mut data = *self.data; loop { - let ele = *data.pop_front().unwrap(); - let ele = FixedTrait::new_unscaled(ele.mag.into(), ele.sign); - result.append(FixedTrait::ln(ele)); - - if (data.len() == 0) { - break (); + match self.data.pop_front() { + Option::Some(item) => { + result.append(FixedTrait::new_unscaled((*item.mag).into(), *item.sign).ln()); + }, + Option::None(_) => { + break; + } }; }; - return TensorTrait::::new(*self.shape, result.span(), *self.extra); + return TensorTrait::::new(self.shape, result.span(), self.extra); } + diff --git a/src/operators/tensor/math/log/log_u32/core.cairo b/src/operators/tensor/math/log/log_u32/core.cairo index 010a8cc78..c77f0c408 100644 --- a/src/operators/tensor/math/log/log_u32/core.cairo +++ b/src/operators/tensor/math/log/log_u32/core.cairo @@ -8,11 +8,11 @@ fn log_u32(self: @Tensor) -> Option> { match *self.extra { Option::Some(extra_params) => match extra_params.fixed_point { Option::Some(fixed_point) => match fixed_point { - FixedImpl::FP8x23(()) => Option::Some(fp8x23::log(self)), - FixedImpl::FP16x16(()) => Option::Some(fp16x16::log(self)), + FixedImpl::FP8x23(()) => Option::Some(fp8x23::log(*self)), + FixedImpl::FP16x16(()) => Option::Some(fp16x16::log(*self)), }, - Option::None(_) => Option::Some(fp16x16::log(self)), + Option::None(_) => Option::Some(fp16x16::log(*self)), }, - Option::None(_) => Option::Some(fp16x16::log(self)), + Option::None(_) => Option::Some(fp16x16::log(*self)), } } diff --git a/src/operators/tensor/math/log/log_u32/fp16x16.cairo b/src/operators/tensor/math/log/log_u32/fp16x16.cairo index a5b7591f4..152c95a6f 100644 --- 
a/src/operators/tensor/math/log/log_u32/fp16x16.cairo +++ b/src/operators/tensor/math/log/log_u32/fp16x16.cairo @@ -12,19 +12,19 @@ use orion::numbers::fixed_point::implementations::fp16x16::core::FP16x16Impl; /// Cf: TensorTrait::log docstring -fn log(self: @Tensor) -> Tensor { +fn log(mut self: Tensor) -> Tensor { let mut result = ArrayTrait::new(); - let mut data = *self.data; loop { - let ele = *data.pop_front().unwrap(); - let ele = FixedTrait::new_unscaled(ele, false); - result.append(FixedTrait::ln(ele)); - - if (data.len() == 0) { - break (); + match self.data.pop_front() { + Option::Some(item) => { + result.append(FixedTrait::new_unscaled(*item, false).ln()); + }, + Option::None(_) => { + break; + } }; }; - return TensorTrait::::new(*self.shape, result.span(), *self.extra); + return TensorTrait::::new(self.shape, result.span(), self.extra); } diff --git a/src/operators/tensor/math/log/log_u32/fp8x23.cairo b/src/operators/tensor/math/log/log_u32/fp8x23.cairo index 538724897..28b90e001 100644 --- a/src/operators/tensor/math/log/log_u32/fp8x23.cairo +++ b/src/operators/tensor/math/log/log_u32/fp8x23.cairo @@ -11,19 +11,19 @@ use orion::numbers::fixed_point::implementations::fp8x23::core::FP8x23Impl; /// Cf: TensorTrait::log docstring -fn log(self: @Tensor) -> Tensor { +fn log(mut self: Tensor) -> Tensor { let mut result = ArrayTrait::new(); - let mut data = *self.data; loop { - let ele = *data.pop_front().unwrap(); - let ele = FixedTrait::new_unscaled(ele, false); - result.append(FixedTrait::ln(ele)); - - if (data.len() == 0) { - break (); + match self.data.pop_front() { + Option::Some(item) => { + result.append(FixedTrait::new_unscaled(*item, false).ln()); + }, + Option::None(_) => { + break; + } }; }; - return TensorTrait::::new(*self.shape, result.span(), *self.extra); + return TensorTrait::::new(self.shape, result.span(), self.extra); } From dc93cd84f5b42911ab91ef038a05713679ac447d Mon Sep 17 00:00:00 2001 From: raphaelDkhn Date: Thu, 24 Aug 2023 09:17:59 +0300 Subject: [PATCH 14/30] refactor max min --- .../tensor/math/max/max_fp/fp16x16.cairo | 21 +++++++++++-------- .../tensor/math/max/max_fp/fp8x23.cairo | 19 +++++++++-------- src/operators/tensor/math/max/max_i32.cairo | 19 +++++++++-------- src/operators/tensor/math/max/max_i8.cairo | 19 +++++++++-------- src/operators/tensor/math/max/max_u32.cairo | 18 +++++++++------- .../tensor/math/min/min_fp/fp16x16.cairo | 20 ++++++++++-------- .../tensor/math/min/min_fp/fp8x23.cairo | 19 +++++++++-------- src/operators/tensor/math/min/min_i32.cairo | 20 ++++++++++-------- src/operators/tensor/math/min/min_i8.cairo | 19 +++++++++-------- src/operators/tensor/math/min/min_u32.cairo | 18 +++++++++------- 10 files changed, 104 insertions(+), 88 deletions(-) diff --git a/src/operators/tensor/math/max/max_fp/fp16x16.cairo b/src/operators/tensor/math/max/max_fp/fp16x16.cairo index 4464d2ea3..74c5c4b20 100644 --- a/src/operators/tensor/math/max/max_fp/fp16x16.cairo +++ b/src/operators/tensor/math/max/max_fp/fp16x16.cairo @@ -12,17 +12,20 @@ fn max_in_tensor(mut vec: Span::) -> FixedType { let mut max_value: FixedType = FixedTrait::new(MAX, true); loop { - let current_value = *vec.pop_front().unwrap(); - - let check_max = max(max_value, current_value); - if (max_value < check_max) { - max_value = check_max; - } - - if vec.len() == 0 { - break (); + match vec.pop_front() { + Option::Some(item) => { + let check_max = max(max_value, *item); + if (max_value < check_max) { + max_value = check_max; + } + }, + Option::None(_) => { + break; + } }; }; 
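// The min/max reductions keep their sentinel-seeded fold and only swap the
// exit condition for a `match`. Sketch for the fixed-point max (name
// illustrative; `MAX` is the magnitude bound imported by these files, so
// `FixedTrait::new(MAX, true)` seeds the fold with the most negative
// representable value):
fn max_in_tensor_sketch(mut vec: Span<FixedType>) -> FixedType {
    let mut max_value: FixedType = FixedTrait::new(MAX, true);
    loop {
        match vec.pop_front() {
            Option::Some(item) => {
                let check_max = max(max_value, *item);
                if (max_value < check_max) {
                    max_value = check_max;
                }
            },
            Option::None(_) => {
                break;
            }
        };
    };
    return max_value;
}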
    return max_value;
}
+
+
diff --git a/src/operators/tensor/math/max/max_fp/fp8x23.cairo b/src/operators/tensor/math/max/max_fp/fp8x23.cairo
index 775244dcc..937fce55b 100644
--- a/src/operators/tensor/math/max/max_fp/fp8x23.cairo
+++ b/src/operators/tensor/math/max/max_fp/fp8x23.cairo
@@ -11,15 +11,16 @@ fn max_in_tensor(mut vec: Span::<FixedType>) -> FixedType {
     let mut max_value: FixedType = FixedTrait::new(MAX, true);
 
     loop {
-        let current_value = *vec.pop_front().unwrap();
-
-        let check_max = max(max_value, current_value);
-        if (max_value < check_max) {
-            max_value = check_max;
-        }
-
-        if vec.len() == 0 {
-            break ();
+        match vec.pop_front() {
+            Option::Some(item) => {
+                let check_max = max(max_value, *item);
+                if (max_value < check_max) {
+                    max_value = check_max;
+                }
+            },
+            Option::None(_) => {
+                break;
+            }
         };
     };
 
diff --git a/src/operators/tensor/math/max/max_i32.cairo b/src/operators/tensor/math/max/max_i32.cairo
index d89cedb75..0812544c6 100644
--- a/src/operators/tensor/math/max/max_i32.cairo
+++ b/src/operators/tensor/math/max/max_i32.cairo
@@ -9,15 +9,16 @@ fn max_in_tensor(mut vec: Span::<i32>) -> i32 {
     let mut max_value: i32 = IntegerTrait::new(2147483647, true);
 
     loop {
-        let current_value = *vec.pop_front().unwrap();
-
-        let check_max = max_value.max(current_value);
-        if (max_value < check_max) {
-            max_value = check_max;
-        }
-
-        if vec.len() == 0 {
-            break ();
+        match vec.pop_front() {
+            Option::Some(item) => {
+                let check_max = max_value.max(*item);
+                if (max_value < check_max) {
+                    max_value = check_max;
+                }
+            },
+            Option::None(_) => {
+                break;
+            }
         };
     };
 
diff --git a/src/operators/tensor/math/max/max_i8.cairo b/src/operators/tensor/math/max/max_i8.cairo
index 66757f8c7..2bf17c935 100644
--- a/src/operators/tensor/math/max/max_i8.cairo
+++ b/src/operators/tensor/math/max/max_i8.cairo
@@ -9,15 +9,16 @@ fn max_in_tensor(mut vec: Span::<i8>) -> i8 {
     let mut max_value: i8 = IntegerTrait::new(128, true);
 
     loop {
-        let current_value = *vec.pop_front().unwrap();
-
-        let check_max = max_value.max(current_value);
-        if (max_value < check_max) {
-            max_value = check_max;
-        }
-
-        if vec.len() == 0 {
-            break ();
+        match vec.pop_front() {
+            Option::Some(item) => {
+                let check_max = max_value.max(*item);
+                if (max_value < check_max) {
+                    max_value = check_max;
+                }
+            },
+            Option::None(_) => {
+                break;
+            }
         };
     };
 
diff --git a/src/operators/tensor/math/max/max_u32.cairo b/src/operators/tensor/math/max/max_u32.cairo
index e67aec39d..994c5fb13 100644
--- a/src/operators/tensor/math/max/max_u32.cairo
+++ b/src/operators/tensor/math/max/max_u32.cairo
@@ -5,15 +5,17 @@ use option::OptionTrait;
 /// Cf: TensorTrait::max docstring
 fn max_in_tensor(mut vec: Span::<u32>) -> u32 {
     let mut max_value = 0;
-    loop {
-        let current_value = *vec.pop_front().unwrap();
-
-        if (max_value < current_value) {
-            max_value = current_value;
-        }
 
-        if vec.len() == 0 {
-            break ();
+    loop {
+        match vec.pop_front() {
+            Option::Some(item) => {
+                if (max_value < *item) {
+                    max_value = *item;
+                }
+            },
+            Option::None(_) => {
+                break;
+            }
         };
     };
 
diff --git a/src/operators/tensor/math/min/min_fp/fp16x16.cairo b/src/operators/tensor/math/min/min_fp/fp16x16.cairo
index a0cd3c0dd..3b3e43c62 100644
--- a/src/operators/tensor/math/min/min_fp/fp16x16.cairo
+++ b/src/operators/tensor/math/min/min_fp/fp16x16.cairo
@@ -12,17 +12,19 @@ fn min_in_tensor(mut vec: Span::<FixedType>) -> FixedType {
     let mut min_value: FixedType = FixedTrait::new(MAX - 1, false);
 
     loop {
-        let current_value = *vec.pop_front().unwrap();
-
-        let check_min = min(min_value, current_value);
-        if (min_value >
check_min) { - min_value = check_min; - } - - if vec.len() == 0 { - break (); + match vec.pop_front() { + Option::Some(item) => { + let check_min = min(min_value, *item); + if (min_value > check_min) { + min_value = check_min; + } + }, + Option::None(_) => { + break; + } }; }; return min_value; } + diff --git a/src/operators/tensor/math/min/min_fp/fp8x23.cairo b/src/operators/tensor/math/min/min_fp/fp8x23.cairo index 46b85e37f..4bdb2d887 100644 --- a/src/operators/tensor/math/min/min_fp/fp8x23.cairo +++ b/src/operators/tensor/math/min/min_fp/fp8x23.cairo @@ -11,15 +11,16 @@ fn min_in_tensor(mut vec: Span::) -> FixedType { let mut min_value: FixedType = FixedTrait::new(MAX - 1, false); loop { - let current_value = *vec.pop_front().unwrap(); - - let check_min = min(min_value, current_value); - if (min_value > check_min) { - min_value = check_min; - } - - if vec.len() == 0 { - break (); + match vec.pop_front() { + Option::Some(item) => { + let check_min = min(min_value, *item); + if (min_value > check_min) { + min_value = check_min; + } + }, + Option::None(_) => { + break; + } }; }; diff --git a/src/operators/tensor/math/min/min_i32.cairo b/src/operators/tensor/math/min/min_i32.cairo index 92f6e1091..730c96e13 100644 --- a/src/operators/tensor/math/min/min_i32.cairo +++ b/src/operators/tensor/math/min/min_i32.cairo @@ -9,17 +9,19 @@ fn min_in_tensor(mut vec: Span::) -> i32 { let mut min_value: i32 = IntegerTrait::new(2147483647, false); loop { - let current_value = *vec.pop_front().unwrap(); - - let check_min = min_value.min(current_value); - if (min_value > check_min) { - min_value = check_min; - } - - if vec.len() == 0 { - break (); + match vec.pop_front() { + Option::Some(item) => { + let check_min = min_value.min(*item); + if (min_value > check_min) { + min_value = check_min; + } + }, + Option::None(_) => { + break; + } }; }; return min_value; } + diff --git a/src/operators/tensor/math/min/min_i8.cairo b/src/operators/tensor/math/min/min_i8.cairo index 749e700b1..14534a167 100644 --- a/src/operators/tensor/math/min/min_i8.cairo +++ b/src/operators/tensor/math/min/min_i8.cairo @@ -9,15 +9,16 @@ fn min_in_tensor(mut vec: Span::) -> i8 { let mut min_value: i8 = IntegerTrait::new(127, false); loop { - let current_value = *vec.pop_front().unwrap(); - - let check_min = min_value.min(current_value); - if (min_value > check_min) { - min_value = check_min; - } - - if vec.len() == 0 { - break (); + match vec.pop_front() { + Option::Some(item) => { + let check_min = min_value.min(*item); + if (min_value > check_min) { + min_value = check_min; + } + }, + Option::None(_) => { + break; + } }; }; diff --git a/src/operators/tensor/math/min/min_u32.cairo b/src/operators/tensor/math/min/min_u32.cairo index 0d3cc3ed7..e842111f3 100644 --- a/src/operators/tensor/math/min/min_u32.cairo +++ b/src/operators/tensor/math/min/min_u32.cairo @@ -7,15 +7,17 @@ use orion::operators::tensor::implementations::impl_tensor_u32::Tensor_u32; /// Cf: TensorTrait::min docstring fn min_in_tensor(mut vec: Span::) -> u32 { let mut min_value = 4294967295; - loop { - let current_value = *vec.pop_front().unwrap(); - - if (min_value > current_value) { - min_value = current_value; - } - if vec.len() == 0 { - break (); + loop { + match vec.pop_front() { + Option::Some(item) => { + if (min_value > *item) { + min_value = *item; + } + }, + Option::None(_) => { + break; + } }; }; From b1f0ad9ce78aa3d23a8023622950e09aa37ec265 Mon Sep 17 00:00:00 2001 From: raphaelDkhn Date: Thu, 24 Aug 2023 09:21:42 +0300 Subject: [PATCH 15/30] refactor 
reduce sum --- .../math/reduce_sum/reduce_sum_fp/fp16x16.cairo | 13 ++++++++----- .../math/reduce_sum/reduce_sum_fp/fp8x23.cairo | 13 ++++++++----- .../tensor/math/reduce_sum/reduce_sum_i32.cairo | 13 ++++++++----- .../tensor/math/reduce_sum/reduce_sum_i8.cairo | 13 ++++++++----- .../tensor/math/reduce_sum/reduce_sum_u32.cairo | 13 ++++++++----- 5 files changed, 40 insertions(+), 25 deletions(-) diff --git a/src/operators/tensor/math/reduce_sum/reduce_sum_fp/fp16x16.cairo b/src/operators/tensor/math/reduce_sum/reduce_sum_fp/fp16x16.cairo index ad8d54d75..91b11a853 100644 --- a/src/operators/tensor/math/reduce_sum/reduce_sum_fp/fp16x16.cairo +++ b/src/operators/tensor/math/reduce_sum/reduce_sum_fp/fp16x16.cairo @@ -88,11 +88,14 @@ fn accumulate_sum( }; } else { loop { - if input_data.len() == 0 { - break (); - } - - acc += *input_data.pop_front().unwrap(); + match input_data.pop_front() { + Option::Some(item) => { + acc += *item; + }, + Option::None(_) => { + break; + } + }; }; } diff --git a/src/operators/tensor/math/reduce_sum/reduce_sum_fp/fp8x23.cairo b/src/operators/tensor/math/reduce_sum/reduce_sum_fp/fp8x23.cairo index 723193121..869805fe1 100644 --- a/src/operators/tensor/math/reduce_sum/reduce_sum_fp/fp8x23.cairo +++ b/src/operators/tensor/math/reduce_sum/reduce_sum_fp/fp8x23.cairo @@ -88,11 +88,14 @@ fn accumulate_sum( }; } else { loop { - if input_data.len() == 0 { - break (); - } - - acc += *input_data.pop_front().unwrap(); + match input_data.pop_front() { + Option::Some(item) => { + acc += *item; + }, + Option::None(_) => { + break; + } + }; }; } diff --git a/src/operators/tensor/math/reduce_sum/reduce_sum_i32.cairo b/src/operators/tensor/math/reduce_sum/reduce_sum_i32.cairo index 9f33d44d8..8ce5922f6 100644 --- a/src/operators/tensor/math/reduce_sum/reduce_sum_i32.cairo +++ b/src/operators/tensor/math/reduce_sum/reduce_sum_i32.cairo @@ -83,11 +83,14 @@ fn accumulate_sum( }; } else { loop { - if input_data.len() == 0 { - break (); - } - - acc += *input_data.pop_front().unwrap(); + match input_data.pop_front() { + Option::Some(item) => { + acc += *item; + }, + Option::None(_) => { + break; + } + }; }; } diff --git a/src/operators/tensor/math/reduce_sum/reduce_sum_i8.cairo b/src/operators/tensor/math/reduce_sum/reduce_sum_i8.cairo index 0557eceae..d5e51fdcf 100644 --- a/src/operators/tensor/math/reduce_sum/reduce_sum_i8.cairo +++ b/src/operators/tensor/math/reduce_sum/reduce_sum_i8.cairo @@ -83,11 +83,14 @@ fn accumulate_sum( }; } else { loop { - if input_data.len() == 0 { - break (); - } - - acc += *input_data.pop_front().unwrap(); + match input_data.pop_front() { + Option::Some(item) => { + acc += *item; + }, + Option::None(_) => { + break; + } + }; }; } diff --git a/src/operators/tensor/math/reduce_sum/reduce_sum_u32.cairo b/src/operators/tensor/math/reduce_sum/reduce_sum_u32.cairo index d4f632a11..acad569fc 100644 --- a/src/operators/tensor/math/reduce_sum/reduce_sum_u32.cairo +++ b/src/operators/tensor/math/reduce_sum/reduce_sum_u32.cairo @@ -82,11 +82,14 @@ fn accumulate_sum( }; } else { loop { - if input_data.len() == 0 { - break (); - } - - acc += *input_data.pop_front().unwrap(); + match input_data.pop_front() { + Option::Some(item) => { + acc += *item; + }, + Option::None(_) => { + break; + } + }; }; } From 5e1ae6d840355e6e78ed402a98c9642912883f93 Mon Sep 17 00:00:00 2001 From: raphaelDkhn Date: Thu, 24 Aug 2023 09:32:31 +0300 Subject: [PATCH 16/30] refactor sin sinh --- .../tensor/math/sin/sin_fp/core.cairo | 8 +++--- .../tensor/math/sin/sin_fp/fp16x16.cairo | 17 
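// In `accumulate_sum` only the flat, non-strided branch changes: the
// remaining elements are drained with the same `match` pattern while the
// axis-stride walk (elided in the hunks above) is untouched. Sketch of that
// tail fold for the fixed-point case (name and exact signature illustrative;
// `acc` is seeded by the caller):
fn drain_sum_sketch(mut input_data: Span<FixedType>, mut acc: FixedType) -> FixedType {
    loop {
        match input_data.pop_front() {
            Option::Some(item) => {
                acc += *item;
            },
            Option::None(_) => {
                break;
            }
        };
    };
    return acc;
}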
++++++------ .../tensor/math/sin/sin_fp/fp8x23.cairo | 18 +++++++------ .../tensor/math/sin/sin_i32/core.cairo | 8 +++--- .../tensor/math/sin/sin_i32/fp16x16.cairo | 26 +++++++------------ .../tensor/math/sin/sin_i32/fp8x23.cairo | 25 +++++++----------- .../tensor/math/sin/sin_i8/core.cairo | 8 +++--- .../tensor/math/sin/sin_i8/fp16x16.cairo | 26 +++++++------------ .../tensor/math/sin/sin_i8/fp8x23.cairo | 24 +++++++---------- .../tensor/math/sin/sin_u32/core.cairo | 8 +++--- .../tensor/math/sin/sin_u32/fp16x16.cairo | 18 ++++++------- .../tensor/math/sin/sin_u32/fp8x23.cairo | 18 ++++++------- .../tensor/math/sinh/sinh_fp/core.cairo | 8 +++--- .../tensor/math/sinh/sinh_fp/fp16x16.cairo | 18 +++++++------ .../tensor/math/sinh/sinh_fp/fp8x23.cairo | 18 +++++++------ .../tensor/math/sinh/sinh_i32/core.cairo | 8 +++--- .../tensor/math/sinh/sinh_i32/fp16x16.cairo | 25 +++++++----------- .../tensor/math/sinh/sinh_i32/fp8x23.cairo | 25 +++++++----------- .../tensor/math/sinh/sinh_i8/core.cairo | 8 +++--- .../tensor/math/sinh/sinh_i8/fp16x16.cairo | 24 +++++++---------- .../tensor/math/sinh/sinh_i8/fp8x23.cairo | 26 +++++++------------ .../tensor/math/sinh/sinh_u32/core.cairo | 8 +++--- .../tensor/math/sinh/sinh_u32/fp16x16.cairo | 18 ++++++------- .../tensor/math/sinh/sinh_u32/fp8x23.cairo | 20 +++++++------- 24 files changed, 186 insertions(+), 224 deletions(-) diff --git a/src/operators/tensor/math/sin/sin_fp/core.cairo b/src/operators/tensor/math/sin/sin_fp/core.cairo index 37782248e..f761707af 100644 --- a/src/operators/tensor/math/sin/sin_fp/core.cairo +++ b/src/operators/tensor/math/sin/sin_fp/core.cairo @@ -8,11 +8,11 @@ fn sin(self: @Tensor) -> Option> { match *self.extra { Option::Some(extra_params) => match extra_params.fixed_point { Option::Some(fixed_point) => match fixed_point { - FixedImpl::FP8x23(()) => Option::Some(fp8x23::sin(self)), - FixedImpl::FP16x16(()) => Option::Some(fp16x16::sin(self)), + FixedImpl::FP8x23(()) => Option::Some(fp8x23::sin(*self)), + FixedImpl::FP16x16(()) => Option::Some(fp16x16::sin(*self)), }, - Option::None(_) => Option::Some(fp16x16::sin(self)), + Option::None(_) => Option::Some(fp16x16::sin(*self)), }, - Option::None(_) => Option::Some(fp16x16::sin(self)), + Option::None(_) => Option::Some(fp16x16::sin(*self)), } } diff --git a/src/operators/tensor/math/sin/sin_fp/fp16x16.cairo b/src/operators/tensor/math/sin/sin_fp/fp16x16.cairo index f162e3be6..296eeed09 100644 --- a/src/operators/tensor/math/sin/sin_fp/fp16x16.cairo +++ b/src/operators/tensor/math/sin/sin_fp/fp16x16.cairo @@ -9,18 +9,19 @@ use orion::numbers::fixed_point::implementations::fp16x16::core::FP16x16Impl; /// Cf: TensorTrait::sin docstring -fn sin(self: @Tensor) -> Tensor { +fn sin(mut self: Tensor) -> Tensor { let mut result = ArrayTrait::new(); - let mut data = *self.data; loop { - let ele = *data.pop_front().unwrap(); - result.append(FixedTrait::sin(ele)); - - if (data.len() == 0) { - break (); + match self.data.pop_front() { + Option::Some(item) => { + result.append((*item).sin()); + }, + Option::None(_) => { + break; + } }; }; - return TensorTrait::::new(*self.shape, result.span(), *self.extra); + return TensorTrait::::new(self.shape, result.span(), self.extra); } diff --git a/src/operators/tensor/math/sin/sin_fp/fp8x23.cairo b/src/operators/tensor/math/sin/sin_fp/fp8x23.cairo index 97123bb3e..31a39449b 100644 --- a/src/operators/tensor/math/sin/sin_fp/fp8x23.cairo +++ b/src/operators/tensor/math/sin/sin_fp/fp8x23.cairo @@ -9,18 +9,20 @@ use 
orion::numbers::fixed_point::implementations::fp8x23::core::FP8x23Impl; /// Cf: TensorTrait::sin docstring -fn sin(self: @Tensor) -> Tensor { +fn sin(mut self: Tensor) -> Tensor { let mut result = ArrayTrait::new(); - let mut data = *self.data; loop { - let ele = *data.pop_front().unwrap(); - result.append(FixedTrait::sin(ele)); - - if (data.len() == 0) { - break (); + match self.data.pop_front() { + Option::Some(item) => { + result.append((*item).sin()); + }, + Option::None(_) => { + break; + } }; }; - return TensorTrait::::new(*self.shape, result.span(), *self.extra); + return TensorTrait::::new(self.shape, result.span(), self.extra); } + diff --git a/src/operators/tensor/math/sin/sin_i32/core.cairo b/src/operators/tensor/math/sin/sin_i32/core.cairo index 88f7ffa30..4bcbd118e 100644 --- a/src/operators/tensor/math/sin/sin_i32/core.cairo +++ b/src/operators/tensor/math/sin/sin_i32/core.cairo @@ -9,11 +9,11 @@ fn sin_i32(self: @Tensor) -> Option> { match *self.extra { Option::Some(extra_params) => match extra_params.fixed_point { Option::Some(fixed_point) => match fixed_point { - FixedImpl::FP8x23(()) => Option::Some(fp8x23::sin(self)), - FixedImpl::FP16x16(()) => Option::Some(fp16x16::sin(self)), + FixedImpl::FP8x23(()) => Option::Some(fp8x23::sin(*self)), + FixedImpl::FP16x16(()) => Option::Some(fp16x16::sin(*self)), }, - Option::None(_) => Option::Some(fp16x16::sin(self)), + Option::None(_) => Option::Some(fp16x16::sin(*self)), }, - Option::None(_) => Option::Some(fp16x16::sin(self)), + Option::None(_) => Option::Some(fp16x16::sin(*self)), } } diff --git a/src/operators/tensor/math/sin/sin_i32/fp16x16.cairo b/src/operators/tensor/math/sin/sin_i32/fp16x16.cairo index 7aa56f2b8..b9b4032ad 100644 --- a/src/operators/tensor/math/sin/sin_i32/fp16x16.cairo +++ b/src/operators/tensor/math/sin/sin_i32/fp16x16.cairo @@ -9,27 +9,21 @@ use orion::numbers::signed_integer::i32::i32; use orion::operators::tensor::implementations::impl_tensor_fp::Tensor_fp; use orion::numbers::fixed_point::implementations::fp16x16::core::FP16x16Impl; - /// Cf: TensorTrait::sin docstring -fn sin(self: @Tensor) -> Tensor { +fn sin(mut self: Tensor) -> Tensor { let mut result = ArrayTrait::new(); - let mut data = *self.data; loop { - let ele = *data.pop_front().unwrap(); - - if ele.sign == true { - let ele = FixedTrait::new_unscaled(ele.mag.into(), ele.sign); - result.append(FixedTrait::sin(ele)) - } else { - let ele = FixedTrait::new_unscaled(ele.mag.into(), ele.sign); - result.append(FixedTrait::sin(ele)) - } - - if (data.len() == 0) { - break (); + match self.data.pop_front() { + Option::Some(item) => { + result.append(FixedTrait::new_unscaled(*item.mag, *item.sign).sin()); + }, + Option::None(_) => { + break; + } }; }; - return TensorTrait::::new(*self.shape, result.span(), *self.extra); + return TensorTrait::::new(self.shape, result.span(), self.extra); } + diff --git a/src/operators/tensor/math/sin/sin_i32/fp8x23.cairo b/src/operators/tensor/math/sin/sin_i32/fp8x23.cairo index 184338ec6..5510fd5d7 100644 --- a/src/operators/tensor/math/sin/sin_i32/fp8x23.cairo +++ b/src/operators/tensor/math/sin/sin_i32/fp8x23.cairo @@ -11,25 +11,20 @@ use orion::numbers::fixed_point::implementations::fp8x23::core::FP8x23Impl; /// Cf: TensorTrait::sin docstring -fn sin(self: @Tensor) -> Tensor { +fn sin(mut self: Tensor) -> Tensor { let mut result = ArrayTrait::new(); - let mut data = *self.data; loop { - let ele = *data.pop_front().unwrap(); - - if ele.sign == true { - let ele = FixedTrait::new_unscaled(ele.mag.into(), ele.sign); 
- result.append(FixedTrait::sin(ele)) - } else { - let ele = FixedTrait::new_unscaled(ele.mag.into(), ele.sign); - result.append(FixedTrait::sin(ele)) - } - - if (data.len() == 0) { - break (); + match self.data.pop_front() { + Option::Some(item) => { + result.append(FixedTrait::new_unscaled(*item.mag, *item.sign).sin()); + }, + Option::None(_) => { + break; + } }; }; - return TensorTrait::::new(*self.shape, result.span(), *self.extra); + return TensorTrait::::new(self.shape, result.span(), self.extra); } + diff --git a/src/operators/tensor/math/sin/sin_i8/core.cairo b/src/operators/tensor/math/sin/sin_i8/core.cairo index a68cf7bd8..f16fcea69 100644 --- a/src/operators/tensor/math/sin/sin_i8/core.cairo +++ b/src/operators/tensor/math/sin/sin_i8/core.cairo @@ -9,11 +9,11 @@ fn sin_i8(self: @Tensor) -> Option> { match *self.extra { Option::Some(extra_params) => match extra_params.fixed_point { Option::Some(fixed_point) => match fixed_point { - FixedImpl::FP8x23(()) => Option::Some(fp8x23::sin(self)), - FixedImpl::FP16x16(()) => Option::Some(fp16x16::sin(self)), + FixedImpl::FP8x23(()) => Option::Some(fp8x23::sin(*self)), + FixedImpl::FP16x16(()) => Option::Some(fp16x16::sin(*self)), }, - Option::None(_) => Option::Some(fp16x16::sin(self)), + Option::None(_) => Option::Some(fp16x16::sin(*self)), }, - Option::None(_) => Option::Some(fp16x16::sin(self)), + Option::None(_) => Option::Some(fp16x16::sin(*self)), } } diff --git a/src/operators/tensor/math/sin/sin_i8/fp16x16.cairo b/src/operators/tensor/math/sin/sin_i8/fp16x16.cairo index 26fc42d27..c04d639a1 100644 --- a/src/operators/tensor/math/sin/sin_i8/fp16x16.cairo +++ b/src/operators/tensor/math/sin/sin_i8/fp16x16.cairo @@ -11,25 +11,19 @@ use orion::numbers::fixed_point::implementations::fp16x16::core::FP16x16Impl; /// Cf: TensorTrait::sin docstring -fn sin(self: @Tensor) -> Tensor { +fn sin(mut self: Tensor) -> Tensor { let mut result = ArrayTrait::new(); - let mut data = *self.data; loop { - let ele = *data.pop_front().unwrap(); - - if ele.sign == true { - let ele = FixedTrait::new_unscaled(ele.mag.into(), ele.sign); - result.append(FixedTrait::sin(ele)) - } else { - let ele = FixedTrait::new_unscaled(ele.mag.into(), ele.sign); - result.append(FixedTrait::sin(ele)) - } - - if (data.len() == 0) { - break (); + match self.data.pop_front() { + Option::Some(item) => { + result.append(FixedTrait::new_unscaled((*item.mag).into(), *item.sign).sin()); + }, + Option::None(_) => { + break; + } }; }; - return TensorTrait::::new(*self.shape, result.span(), *self.extra); -} + return TensorTrait::::new(self.shape, result.span(), self.extra); +} \ No newline at end of file diff --git a/src/operators/tensor/math/sin/sin_i8/fp8x23.cairo b/src/operators/tensor/math/sin/sin_i8/fp8x23.cairo index af9b7d58c..badd8b4ee 100644 --- a/src/operators/tensor/math/sin/sin_i8/fp8x23.cairo +++ b/src/operators/tensor/math/sin/sin_i8/fp8x23.cairo @@ -11,25 +11,19 @@ use orion::numbers::fixed_point::implementations::fp8x23::core::FP8x23Impl; /// Cf: TensorTrait::sin docstring -fn sin(self: @Tensor) -> Tensor { +fn sin(mut self: Tensor) -> Tensor { let mut result = ArrayTrait::new(); - let mut data = *self.data; loop { - let ele = *data.pop_front().unwrap(); - - if ele.sign == true { - let ele = FixedTrait::new_unscaled(ele.mag.into(), ele.sign); - result.append(FixedTrait::sin(ele)) - } else { - let ele = FixedTrait::new_unscaled(ele.mag.into(), ele.sign); - result.append(FixedTrait::sin(ele)) - } - - if (data.len() == 0) { - break (); + match self.data.pop_front() { + 
Option::Some(item) => { + result.append(FixedTrait::new_unscaled((*item.mag).into(), *item.sign).sin()); + }, + Option::None(_) => { + break; + } }; }; - return TensorTrait::::new(*self.shape, result.span(), *self.extra); + return TensorTrait::::new(self.shape, result.span(), self.extra); } diff --git a/src/operators/tensor/math/sin/sin_u32/core.cairo b/src/operators/tensor/math/sin/sin_u32/core.cairo index 4ced3846b..c3cb26fa1 100644 --- a/src/operators/tensor/math/sin/sin_u32/core.cairo +++ b/src/operators/tensor/math/sin/sin_u32/core.cairo @@ -9,11 +9,11 @@ fn sin_u32(self: @Tensor) -> Option> { match *self.extra { Option::Some(extra_params) => match extra_params.fixed_point { Option::Some(fixed_point) => match fixed_point { - FixedImpl::FP8x23(()) => Option::Some(fp8x23::sin(self)), - FixedImpl::FP16x16(()) => Option::Some(fp16x16::sin(self)), + FixedImpl::FP8x23(()) => Option::Some(fp8x23::sin(*self)), + FixedImpl::FP16x16(()) => Option::Some(fp16x16::sin(*self)), }, - Option::None(_) => Option::Some(fp16x16::sin(self)), + Option::None(_) => Option::Some(fp16x16::sin(*self)), }, - Option::None(_) => Option::Some(fp16x16::sin(self)), + Option::None(_) => Option::Some(fp16x16::sin(*self)), } } diff --git a/src/operators/tensor/math/sin/sin_u32/fp16x16.cairo b/src/operators/tensor/math/sin/sin_u32/fp16x16.cairo index 60157d051..c784e5721 100644 --- a/src/operators/tensor/math/sin/sin_u32/fp16x16.cairo +++ b/src/operators/tensor/math/sin/sin_u32/fp16x16.cairo @@ -11,19 +11,19 @@ use orion::numbers::fixed_point::implementations::fp16x16::core::FP16x16Impl; /// Cf: TensorTrait::sin docstring -fn sin(self: @Tensor) -> Tensor { +fn sin(mut self: Tensor) -> Tensor { let mut result = ArrayTrait::new(); - let mut data = *self.data; loop { - let ele = FixedTrait::new_unscaled((*data.pop_front().unwrap()), false); - - result.append(FixedTrait::sin(ele)); - - if (data.len() == 0) { - break (); + match self.data.pop_front() { + Option::Some(item) => { + result.append(FixedTrait::new_unscaled(*item, false).sin()); + }, + Option::None(_) => { + break; + } }; }; - return TensorTrait::::new(*self.shape, result.span(), *self.extra); + return TensorTrait::::new(self.shape, result.span(), self.extra); } diff --git a/src/operators/tensor/math/sin/sin_u32/fp8x23.cairo b/src/operators/tensor/math/sin/sin_u32/fp8x23.cairo index b3ff83c80..0a70bd8dc 100644 --- a/src/operators/tensor/math/sin/sin_u32/fp8x23.cairo +++ b/src/operators/tensor/math/sin/sin_u32/fp8x23.cairo @@ -11,19 +11,19 @@ use orion::numbers::fixed_point::implementations::fp8x23::core::FP8x23Impl; /// Cf: TensorTrait::sin docstring -fn sin(self: @Tensor) -> Tensor { +fn sin(mut self: Tensor) -> Tensor { let mut result = ArrayTrait::new(); - let mut data = *self.data; loop { - let ele = FixedTrait::new_unscaled((*data.pop_front().unwrap()), false); - - result.append(FixedTrait::sin(ele)); - - if (data.len() == 0) { - break (); + match self.data.pop_front() { + Option::Some(item) => { + result.append(FixedTrait::new_unscaled(*item, false).sin()); + }, + Option::None(_) => { + break; + } }; }; - return TensorTrait::::new(*self.shape, result.span(), *self.extra); + return TensorTrait::::new(self.shape, result.span(), self.extra); } diff --git a/src/operators/tensor/math/sinh/sinh_fp/core.cairo b/src/operators/tensor/math/sinh/sinh_fp/core.cairo index 12258f1a7..affbc65f9 100644 --- a/src/operators/tensor/math/sinh/sinh_fp/core.cairo +++ b/src/operators/tensor/math/sinh/sinh_fp/core.cairo @@ -8,11 +8,11 @@ fn sinh(self: @Tensor) -> Option> { match 
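// Unsigned inputs carry no sign bit, so the u32 trig workers hard-code
// `sign = false` when lifting to fixed point, as in the u32 sin files above.
// Sketch (name illustrative; `Tensor<u32> -> Tensor<FixedType>` assumed):
fn sin_u32_sketch(mut self: Tensor<u32>) -> Tensor<FixedType> {
    let mut result = ArrayTrait::new();
    loop {
        match self.data.pop_front() {
            Option::Some(item) => {
                result.append(FixedTrait::new_unscaled(*item, false).sin());
            },
            Option::None(_) => {
                break;
            }
        };
    };
    return TensorTrait::<FixedType>::new(self.shape, result.span(), self.extra);
}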
*self.extra { Option::Some(extra_params) => match extra_params.fixed_point { Option::Some(fixed_point) => match fixed_point { - FixedImpl::FP8x23(()) => Option::Some(fp8x23::sinh(self)), - FixedImpl::FP16x16(()) => Option::Some(fp16x16::sinh(self)), + FixedImpl::FP8x23(()) => Option::Some(fp8x23::sinh(*self)), + FixedImpl::FP16x16(()) => Option::Some(fp16x16::sinh(*self)), }, - Option::None(_) => Option::Some(fp16x16::sinh(self)), + Option::None(_) => Option::Some(fp16x16::sinh(*self)), }, - Option::None(_) => Option::Some(fp16x16::sinh(self)), + Option::None(_) => Option::Some(fp16x16::sinh(*self)), } } diff --git a/src/operators/tensor/math/sinh/sinh_fp/fp16x16.cairo b/src/operators/tensor/math/sinh/sinh_fp/fp16x16.cairo index 02886a975..9cfbc4219 100644 --- a/src/operators/tensor/math/sinh/sinh_fp/fp16x16.cairo +++ b/src/operators/tensor/math/sinh/sinh_fp/fp16x16.cairo @@ -9,18 +9,20 @@ use orion::numbers::fixed_point::implementations::fp16x16::core::FP16x16Impl; /// Cf: TensorTrait::sinh docstring -fn sinh(self: @Tensor) -> Tensor { +fn sinh(mut self: Tensor) -> Tensor { let mut result = ArrayTrait::new(); - let mut data = *self.data; loop { - let ele = *data.pop_front().unwrap(); - result.append(FixedTrait::sinh(ele)); - - if (data.len() == 0) { - break (); + match self.data.pop_front() { + Option::Some(item) => { + result.append((*item).sinh()); + }, + Option::None(_) => { + break; + } }; }; - return TensorTrait::::new(*self.shape, result.span(), *self.extra); + return TensorTrait::::new(self.shape, result.span(), self.extra); } + diff --git a/src/operators/tensor/math/sinh/sinh_fp/fp8x23.cairo b/src/operators/tensor/math/sinh/sinh_fp/fp8x23.cairo index 5c83419d3..c4dc93141 100644 --- a/src/operators/tensor/math/sinh/sinh_fp/fp8x23.cairo +++ b/src/operators/tensor/math/sinh/sinh_fp/fp8x23.cairo @@ -9,18 +9,20 @@ use orion::numbers::fixed_point::implementations::fp8x23::core::FP8x23Impl; /// Cf: TensorTrait::sinh docstring -fn sinh(self: @Tensor) -> Tensor { +fn sinh(mut self: Tensor) -> Tensor { let mut result = ArrayTrait::new(); - let mut data = *self.data; loop { - let ele = *data.pop_front().unwrap(); - result.append(FixedTrait::sinh(ele)); - - if (data.len() == 0) { - break (); + match self.data.pop_front() { + Option::Some(item) => { + result.append((*item).sinh()); + }, + Option::None(_) => { + break; + } }; }; - return TensorTrait::::new(*self.shape, result.span(), *self.extra); + return TensorTrait::::new(self.shape, result.span(), self.extra); } + diff --git a/src/operators/tensor/math/sinh/sinh_i32/core.cairo b/src/operators/tensor/math/sinh/sinh_i32/core.cairo index c07afdb59..bfe7dd0a4 100644 --- a/src/operators/tensor/math/sinh/sinh_i32/core.cairo +++ b/src/operators/tensor/math/sinh/sinh_i32/core.cairo @@ -9,11 +9,11 @@ fn sinh_i32(self: @Tensor) -> Option> { match *self.extra { Option::Some(extra_params) => match extra_params.fixed_point { Option::Some(fixed_point) => match fixed_point { - FixedImpl::FP8x23(()) => Option::Some(fp8x23::sinh(self)), - FixedImpl::FP16x16(()) => Option::Some(fp16x16::sinh(self)), + FixedImpl::FP8x23(()) => Option::Some(fp8x23::sinh(*self)), + FixedImpl::FP16x16(()) => Option::Some(fp16x16::sinh(*self)), }, - Option::None(_) => Option::Some(fp16x16::sinh(self)), + Option::None(_) => Option::Some(fp16x16::sinh(*self)), }, - Option::None(_) => Option::Some(fp16x16::sinh(self)), + Option::None(_) => Option::Some(fp16x16::sinh(*self)), } } diff --git a/src/operators/tensor/math/sinh/sinh_i32/fp16x16.cairo 
b/src/operators/tensor/math/sinh/sinh_i32/fp16x16.cairo index 95caba0c1..0a7899a61 100644 --- a/src/operators/tensor/math/sinh/sinh_i32/fp16x16.cairo +++ b/src/operators/tensor/math/sinh/sinh_i32/fp16x16.cairo @@ -11,25 +11,20 @@ use orion::numbers::fixed_point::implementations::fp16x16::core::FP16x16Impl; /// Cf: TensorTrait::sinh docstring -fn sinh(self: @Tensor) -> Tensor { +fn sinh(mut self: Tensor) -> Tensor { let mut result = ArrayTrait::new(); - let mut data = *self.data; loop { - let ele = *data.pop_front().unwrap(); - - if ele.sign == true { - let ele = FixedTrait::new_unscaled(ele.mag.into(), ele.sign); - result.append(FixedTrait::sinh(ele)) - } else { - let ele = FixedTrait::new_unscaled(ele.mag.into(), ele.sign); - result.append(FixedTrait::sinh(ele)) - } - - if (data.len() == 0) { - break (); + match self.data.pop_front() { + Option::Some(item) => { + result.append(FixedTrait::new_unscaled(*item.mag, *item.sign).sinh()); + }, + Option::None(_) => { + break; + } }; }; - return TensorTrait::::new(*self.shape, result.span(), *self.extra); + return TensorTrait::::new(self.shape, result.span(), self.extra); } + diff --git a/src/operators/tensor/math/sinh/sinh_i32/fp8x23.cairo b/src/operators/tensor/math/sinh/sinh_i32/fp8x23.cairo index 8bd4edecf..1af870b89 100644 --- a/src/operators/tensor/math/sinh/sinh_i32/fp8x23.cairo +++ b/src/operators/tensor/math/sinh/sinh_i32/fp8x23.cairo @@ -11,25 +11,20 @@ use orion::numbers::fixed_point::implementations::fp8x23::core::FP8x23Impl; /// Cf: TensorTrait::sinh docstring -fn sinh(self: @Tensor) -> Tensor { +fn sinh(mut self: Tensor) -> Tensor { let mut result = ArrayTrait::new(); - let mut data = *self.data; loop { - let ele = *data.pop_front().unwrap(); - - if ele.sign == true { - let ele = FixedTrait::new_unscaled(ele.mag.into(), ele.sign); - result.append(FixedTrait::sinh(ele)) - } else { - let ele = FixedTrait::new_unscaled(ele.mag.into(), ele.sign); - result.append(FixedTrait::sinh(ele)) - } - - if (data.len() == 0) { - break (); + match self.data.pop_front() { + Option::Some(item) => { + result.append(FixedTrait::new_unscaled(*item.mag, *item.sign).sinh()); + }, + Option::None(_) => { + break; + } }; }; - return TensorTrait::::new(*self.shape, result.span(), *self.extra); + return TensorTrait::::new(self.shape, result.span(), self.extra); } + diff --git a/src/operators/tensor/math/sinh/sinh_i8/core.cairo b/src/operators/tensor/math/sinh/sinh_i8/core.cairo index 60c996912..a77278c6d 100644 --- a/src/operators/tensor/math/sinh/sinh_i8/core.cairo +++ b/src/operators/tensor/math/sinh/sinh_i8/core.cairo @@ -9,11 +9,11 @@ fn sinh_i8(self: @Tensor) -> Option> { match *self.extra { Option::Some(extra_params) => match extra_params.fixed_point { Option::Some(fixed_point) => match fixed_point { - FixedImpl::FP8x23(()) => Option::Some(fp8x23::sinh(self)), - FixedImpl::FP16x16(()) => Option::Some(fp16x16::sinh(self)), + FixedImpl::FP8x23(()) => Option::Some(fp8x23::sinh(*self)), + FixedImpl::FP16x16(()) => Option::Some(fp16x16::sinh(*self)), }, - Option::None(_) => Option::Some(fp16x16::sinh(self)), + Option::None(_) => Option::Some(fp16x16::sinh(*self)), }, - Option::None(_) => Option::Some(fp16x16::sinh(self)), + Option::None(_) => Option::Some(fp16x16::sinh(*self)), } } diff --git a/src/operators/tensor/math/sinh/sinh_i8/fp16x16.cairo b/src/operators/tensor/math/sinh/sinh_i8/fp16x16.cairo index 68dd5c732..c37bbcb16 100644 --- a/src/operators/tensor/math/sinh/sinh_i8/fp16x16.cairo +++ b/src/operators/tensor/math/sinh/sinh_i8/fp16x16.cairo @@ -11,25 
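// Why the signatures change from `self: @Tensor<...>` to `mut self: Tensor<...>`:
// `pop_front` mutates the span in place, so taking the tensor by value lets
// the worker pop from `self.data` directly and drop the
// `let mut data = *self.data;` de-snapshot step. Spans are cheap to copy, so
// this is an ergonomic change rather than a cost one. Minimal illustration
// of the mutation requirement (name illustrative):
fn count_drain_sketch(mut data: Span<FixedType>) -> usize {
    let mut n: usize = 0;
    loop {
        match data.pop_front() {
            Option::Some(_) => { n += 1; },
            Option::None(_) => { break; }
        };
    };
    n
}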
+11,19 @@ use orion::numbers::fixed_point::implementations::fp16x16::core::FP16x16Impl; /// Cf: TensorTrait::sinh docstring -fn sinh(self: @Tensor) -> Tensor { +fn sinh(mut self: Tensor) -> Tensor { let mut result = ArrayTrait::new(); - let mut data = *self.data; loop { - let ele = *data.pop_front().unwrap(); - - if ele.sign == true { - let ele = FixedTrait::new_unscaled(ele.mag.into(), ele.sign); - result.append(FixedTrait::sinh(ele)) - } else { - let ele = FixedTrait::new_unscaled(ele.mag.into(), ele.sign); - result.append(FixedTrait::sinh(ele)) - } - - if (data.len() == 0) { - break (); + match self.data.pop_front() { + Option::Some(item) => { + result.append(FixedTrait::new_unscaled((*item.mag).into(), *item.sign).sinh()); + }, + Option::None(_) => { + break; + } }; }; - return TensorTrait::::new(*self.shape, result.span(), *self.extra); + return TensorTrait::::new(self.shape, result.span(), self.extra); } diff --git a/src/operators/tensor/math/sinh/sinh_i8/fp8x23.cairo b/src/operators/tensor/math/sinh/sinh_i8/fp8x23.cairo index 98fb39905..af3e8caea 100644 --- a/src/operators/tensor/math/sinh/sinh_i8/fp8x23.cairo +++ b/src/operators/tensor/math/sinh/sinh_i8/fp8x23.cairo @@ -11,25 +11,19 @@ use orion::numbers::fixed_point::implementations::fp8x23::core::FP8x23Impl; /// Cf: TensorTrait::sinh docstring -fn sinh(self: @Tensor) -> Tensor { +fn sinh(mut self: Tensor) -> Tensor { let mut result = ArrayTrait::new(); - let mut data = *self.data; loop { - let ele = *data.pop_front().unwrap(); - - if ele.sign == true { - let ele = FixedTrait::new_unscaled(ele.mag.into(), ele.sign); - result.append(FixedTrait::sinh(ele)) - } else { - let ele = FixedTrait::new_unscaled(ele.mag.into(), ele.sign); - result.append(FixedTrait::sinh(ele)) - } - - if (data.len() == 0) { - break (); + match self.data.pop_front() { + Option::Some(item) => { + result.append(FixedTrait::new_unscaled((*item.mag).into(), *item.sign).sinh()); + }, + Option::None(_) => { + break; + } }; }; - return TensorTrait::::new(*self.shape, result.span(), *self.extra); -} + return TensorTrait::::new(self.shape, result.span(), self.extra); +} \ No newline at end of file diff --git a/src/operators/tensor/math/sinh/sinh_u32/core.cairo b/src/operators/tensor/math/sinh/sinh_u32/core.cairo index 27fffa843..2fa9b82d1 100644 --- a/src/operators/tensor/math/sinh/sinh_u32/core.cairo +++ b/src/operators/tensor/math/sinh/sinh_u32/core.cairo @@ -9,11 +9,11 @@ fn sinh_u32(self: @Tensor) -> Option> { match *self.extra { Option::Some(extra_params) => match extra_params.fixed_point { Option::Some(fixed_point) => match fixed_point { - FixedImpl::FP8x23(()) => Option::Some(fp8x23::sinh(self)), - FixedImpl::FP16x16(()) => Option::Some(fp16x16::sinh(self)), + FixedImpl::FP8x23(()) => Option::Some(fp8x23::sinh(*self)), + FixedImpl::FP16x16(()) => Option::Some(fp16x16::sinh(*self)), }, - Option::None(_) => Option::Some(fp16x16::sinh(self)), + Option::None(_) => Option::Some(fp16x16::sinh(*self)), }, - Option::None(_) => Option::Some(fp16x16::sinh(self)), + Option::None(_) => Option::Some(fp16x16::sinh(*self)), } } diff --git a/src/operators/tensor/math/sinh/sinh_u32/fp16x16.cairo b/src/operators/tensor/math/sinh/sinh_u32/fp16x16.cairo index e73e140b7..3e406a877 100644 --- a/src/operators/tensor/math/sinh/sinh_u32/fp16x16.cairo +++ b/src/operators/tensor/math/sinh/sinh_u32/fp16x16.cairo @@ -11,19 +11,19 @@ use orion::numbers::fixed_point::implementations::fp16x16::core::FP16x16Impl; /// Cf: TensorTrait::sinh docstring -fn sinh(self: @Tensor) -> Tensor { +fn 
sinh(mut self: Tensor) -> Tensor { let mut result = ArrayTrait::new(); - let mut data = *self.data; loop { - let ele = FixedTrait::new_unscaled((*data.pop_front().unwrap()), false); - - result.append(FixedTrait::sinh(ele)); - - if (data.len() == 0) { - break (); + match self.data.pop_front() { + Option::Some(item) => { + result.append(FixedTrait::new_unscaled(*item, false).sinh()); + }, + Option::None(_) => { + break; + } }; }; - return TensorTrait::::new(*self.shape, result.span(), *self.extra); + return TensorTrait::::new(self.shape, result.span(), self.extra); } diff --git a/src/operators/tensor/math/sinh/sinh_u32/fp8x23.cairo b/src/operators/tensor/math/sinh/sinh_u32/fp8x23.cairo index 8adcaca11..fb3c8e42c 100644 --- a/src/operators/tensor/math/sinh/sinh_u32/fp8x23.cairo +++ b/src/operators/tensor/math/sinh/sinh_u32/fp8x23.cairo @@ -11,19 +11,19 @@ use orion::numbers::fixed_point::implementations::fp8x23::core::FP8x23Impl; /// Cf: TensorTrait::sinh docstring -fn sinh(self: @Tensor) -> Tensor { +fn sinh(mut self: Tensor) -> Tensor { let mut result = ArrayTrait::new(); - let mut data = *self.data; loop { - let ele = FixedTrait::new_unscaled((*data.pop_front().unwrap()), false); - - result.append(FixedTrait::sinh(ele)); - - if (data.len() == 0) { - break (); + match self.data.pop_front() { + Option::Some(item) => { + result.append(FixedTrait::new_unscaled(*item, false).sinh()); + }, + Option::None(_) => { + break; + } }; }; - return TensorTrait::::new(*self.shape, result.span(), *self.extra); -} + return TensorTrait::::new(self.shape, result.span(), self.extra); +} \ No newline at end of file From 06fc07d33e45f8e7dfe87739a587440c02afe213 Mon Sep 17 00:00:00 2001 From: raphaelDkhn Date: Thu, 24 Aug 2023 09:38:58 +0300 Subject: [PATCH 17/30] refactor sqrt --- .../tensor/math/sqrt/sqrt_fp/core.cairo | 8 +++---- .../tensor/math/sqrt/sqrt_fp/fp16x16.cairo | 19 ++++++++------- .../tensor/math/sqrt/sqrt_fp/fp8x23.cairo | 17 ++++++------- .../tensor/math/sqrt/sqrt_i32/core.cairo | 8 +++---- .../tensor/math/sqrt/sqrt_i32/fp16x16.cairo | 24 +++++++------------ .../tensor/math/sqrt/sqrt_i32/fp8x23.cairo | 24 +++++++------------ .../tensor/math/sqrt/sqrt_i8/core.cairo | 8 +++---- .../tensor/math/sqrt/sqrt_i8/fp16x16.cairo | 24 +++++++------------ .../tensor/math/sqrt/sqrt_i8/fp8x23.cairo | 24 +++++++------------ .../tensor/math/sqrt/sqrt_u32/core.cairo | 10 ++++---- .../tensor/math/sqrt/sqrt_u32/fp16x16.cairo | 18 +++++++------- .../tensor/math/sqrt/sqrt_u32/fp8x23.cairo | 18 +++++++------- 12 files changed, 90 insertions(+), 112 deletions(-) diff --git a/src/operators/tensor/math/sqrt/sqrt_fp/core.cairo b/src/operators/tensor/math/sqrt/sqrt_fp/core.cairo index 32355fec6..676d07a3a 100644 --- a/src/operators/tensor/math/sqrt/sqrt_fp/core.cairo +++ b/src/operators/tensor/math/sqrt/sqrt_fp/core.cairo @@ -8,11 +8,11 @@ fn sqrt(self: @Tensor) -> Option> { match *self.extra { Option::Some(extra_params) => match extra_params.fixed_point { Option::Some(fixed_point) => match fixed_point { - FixedImpl::FP8x23(()) => Option::Some(fp8x23::sqrt(self)), - FixedImpl::FP16x16(()) => Option::Some(fp16x16::sqrt(self)), + FixedImpl::FP8x23(()) => Option::Some(fp8x23::sqrt(*self)), + FixedImpl::FP16x16(()) => Option::Some(fp16x16::sqrt(*self)), }, - Option::None(_) => Option::Some(fp16x16::sqrt(self)), + Option::None(_) => Option::Some(fp16x16::sqrt(*self)), }, - Option::None(_) => Option::Some(fp16x16::sqrt(self)), + Option::None(_) => Option::Some(fp16x16::sqrt(*self)), } } diff --git 
a/src/operators/tensor/math/sqrt/sqrt_fp/fp16x16.cairo b/src/operators/tensor/math/sqrt/sqrt_fp/fp16x16.cairo index a29dbf9e1..136f4ddfa 100644 --- a/src/operators/tensor/math/sqrt/sqrt_fp/fp16x16.cairo +++ b/src/operators/tensor/math/sqrt/sqrt_fp/fp16x16.cairo @@ -8,18 +8,19 @@ use orion::operators::tensor::implementations::impl_tensor_fp::Tensor_fp; use orion::numbers::fixed_point::implementations::fp16x16::core::FP16x16Impl; -fn sqrt(self: @Tensor) -> Tensor { +fn sqrt(mut self: Tensor) -> Tensor { let mut result = ArrayTrait::new(); - let mut data = *self.data; loop { - let ele = *data.pop_front().unwrap(); - result.append(FixedTrait::sqrt(ele)); - - if (data.len() == 0) { - break (); + match self.data.pop_front() { + Option::Some(item) => { + result.append((*item).sqrt()); + }, + Option::None(_) => { + break; + } }; }; - return TensorTrait::::new(*self.shape, result.span(), *self.extra); -} \ No newline at end of file + return TensorTrait::::new(self.shape, result.span(), self.extra); +} diff --git a/src/operators/tensor/math/sqrt/sqrt_fp/fp8x23.cairo b/src/operators/tensor/math/sqrt/sqrt_fp/fp8x23.cairo index a64f0e4f5..50125089f 100644 --- a/src/operators/tensor/math/sqrt/sqrt_fp/fp8x23.cairo +++ b/src/operators/tensor/math/sqrt/sqrt_fp/fp8x23.cairo @@ -8,18 +8,19 @@ use orion::operators::tensor::implementations::impl_tensor_fp::Tensor_fp; use orion::numbers::fixed_point::implementations::fp8x23::core::FP8x23Impl; -fn sqrt(self: @Tensor) -> Tensor { +fn sqrt(mut self: Tensor) -> Tensor { let mut result = ArrayTrait::new(); - let mut data = *self.data; loop { - let ele = *data.pop_front().unwrap(); - result.append(FixedTrait::sqrt(ele)); - - if (data.len() == 0) { - break (); + match self.data.pop_front() { + Option::Some(item) => { + result.append((*item).sqrt()); + }, + Option::None(_) => { + break; + } }; }; - return TensorTrait::::new(*self.shape, result.span(), *self.extra); + return TensorTrait::::new(self.shape, result.span(), self.extra); } \ No newline at end of file diff --git a/src/operators/tensor/math/sqrt/sqrt_i32/core.cairo b/src/operators/tensor/math/sqrt/sqrt_i32/core.cairo index 4189a3c63..f13d02904 100644 --- a/src/operators/tensor/math/sqrt/sqrt_i32/core.cairo +++ b/src/operators/tensor/math/sqrt/sqrt_i32/core.cairo @@ -8,11 +8,11 @@ fn sqrt_i32(self: @Tensor) -> Option> { match *self.extra { Option::Some(extra_params) => match extra_params.fixed_point { Option::Some(fixed_point) => match fixed_point { - FixedImpl::FP8x23(()) => Option::Some(fp8x23::sqrt(self)), - FixedImpl::FP16x16(()) => Option::Some(fp16x16::sqrt(self)), + FixedImpl::FP8x23(()) => Option::Some(fp8x23::sqrt(*self)), + FixedImpl::FP16x16(()) => Option::Some(fp16x16::sqrt(*self)), }, - Option::None(_) => Option::Some(fp16x16::sqrt(self)), + Option::None(_) => Option::Some(fp16x16::sqrt(*self)), }, - Option::None(_) => Option::Some(fp16x16::sqrt(self)), + Option::None(_) => Option::Some(fp16x16::sqrt(*self)), } } \ No newline at end of file diff --git a/src/operators/tensor/math/sqrt/sqrt_i32/fp16x16.cairo b/src/operators/tensor/math/sqrt/sqrt_i32/fp16x16.cairo index 861545a47..46c71dc8f 100644 --- a/src/operators/tensor/math/sqrt/sqrt_i32/fp16x16.cairo +++ b/src/operators/tensor/math/sqrt/sqrt_i32/fp16x16.cairo @@ -10,25 +10,19 @@ use orion::operators::tensor::implementations::impl_tensor_fp::Tensor_fp; use orion::numbers::fixed_point::implementations::fp16x16::core::FP16x16Impl; -fn sqrt(self: @Tensor) -> Tensor { +fn sqrt(mut self: Tensor) -> Tensor { let mut result = ArrayTrait::new(); - let 
mut data = *self.data; loop { - let ele = *data.pop_front().unwrap(); - - if ele.sign == true { - let ele = FixedTrait::new_unscaled(ele.mag.into(), ele.sign); - result.append(FixedTrait::sqrt(ele)) - } else { - let ele = FixedTrait::new_unscaled(ele.mag.into(), ele.sign); - result.append(FixedTrait::sqrt(ele)) - } - - if (data.len() == 0) { - break (); + match self.data.pop_front() { + Option::Some(item) => { + result.append(FixedTrait::new_unscaled(*item.mag, *item.sign).sqrt()); + }, + Option::None(_) => { + break; + } }; }; - return TensorTrait::::new(*self.shape, result.span(), *self.extra); + return TensorTrait::::new(self.shape, result.span(), self.extra); } \ No newline at end of file diff --git a/src/operators/tensor/math/sqrt/sqrt_i32/fp8x23.cairo b/src/operators/tensor/math/sqrt/sqrt_i32/fp8x23.cairo index 6ad4be06e..ee47fe173 100644 --- a/src/operators/tensor/math/sqrt/sqrt_i32/fp8x23.cairo +++ b/src/operators/tensor/math/sqrt/sqrt_i32/fp8x23.cairo @@ -10,25 +10,19 @@ use orion::operators::tensor::implementations::impl_tensor_fp::Tensor_fp; use orion::numbers::fixed_point::implementations::fp8x23::core::FP8x23Impl; -fn sqrt(self: @Tensor) -> Tensor { +fn sqrt(mut self: Tensor) -> Tensor { let mut result = ArrayTrait::new(); - let mut data = *self.data; loop { - let ele = *data.pop_front().unwrap(); - - if ele.sign == true { - let ele = FixedTrait::new_unscaled(ele.mag.into(), ele.sign); - result.append(FixedTrait::sqrt(ele)) - } else { - let ele = FixedTrait::new_unscaled(ele.mag.into(), ele.sign); - result.append(FixedTrait::sqrt(ele)) - } - - if (data.len() == 0) { - break (); + match self.data.pop_front() { + Option::Some(item) => { + result.append(FixedTrait::new_unscaled(*item.mag, *item.sign).sqrt()); + }, + Option::None(_) => { + break; + } }; }; - return TensorTrait::::new(*self.shape, result.span(), *self.extra); + return TensorTrait::::new(self.shape, result.span(), self.extra); } \ No newline at end of file diff --git a/src/operators/tensor/math/sqrt/sqrt_i8/core.cairo b/src/operators/tensor/math/sqrt/sqrt_i8/core.cairo index 1d8805c01..55a7f4141 100644 --- a/src/operators/tensor/math/sqrt/sqrt_i8/core.cairo +++ b/src/operators/tensor/math/sqrt/sqrt_i8/core.cairo @@ -8,11 +8,11 @@ fn sqrt_i8(self: @Tensor) -> Option> { match *self.extra { Option::Some(extra_params) => match extra_params.fixed_point { Option::Some(fixed_point) => match fixed_point { - FixedImpl::FP8x23(()) => Option::Some(fp8x23::sqrt(self)), - FixedImpl::FP16x16(()) => Option::Some(fp16x16::sqrt(self)), + FixedImpl::FP8x23(()) => Option::Some(fp8x23::sqrt(*self)), + FixedImpl::FP16x16(()) => Option::Some(fp16x16::sqrt(*self)), }, - Option::None(_) => Option::Some(fp16x16::sqrt(self)), + Option::None(_) => Option::Some(fp16x16::sqrt(*self)), }, - Option::None(_) => Option::Some(fp16x16::sqrt(self)), + Option::None(_) => Option::Some(fp16x16::sqrt(*self)), } } \ No newline at end of file diff --git a/src/operators/tensor/math/sqrt/sqrt_i8/fp16x16.cairo b/src/operators/tensor/math/sqrt/sqrt_i8/fp16x16.cairo index b1e7a6086..0d69b27b9 100644 --- a/src/operators/tensor/math/sqrt/sqrt_i8/fp16x16.cairo +++ b/src/operators/tensor/math/sqrt/sqrt_i8/fp16x16.cairo @@ -10,25 +10,19 @@ use orion::operators::tensor::implementations::impl_tensor_fp::Tensor_fp; use orion::numbers::fixed_point::implementations::fp16x16::core::FP16x16Impl; -fn sqrt(self: @Tensor) -> Tensor { +fn sqrt(mut self: Tensor) -> Tensor { let mut result = ArrayTrait::new(); - let mut data = *self.data; loop { - let ele = 
*data.pop_front().unwrap(); - - if ele.sign == true { - let ele = FixedTrait::new_unscaled(ele.mag.into(), ele.sign); - result.append(FixedTrait::sqrt(ele)) - } else { - let ele = FixedTrait::new_unscaled(ele.mag.into(), ele.sign); - result.append(FixedTrait::sqrt(ele)) - } - - if (data.len() == 0) { - break (); + match self.data.pop_front() { + Option::Some(item) => { + result.append(FixedTrait::new_unscaled((*item.mag).into(), *item.sign).sqrt()); + }, + Option::None(_) => { + break; + } }; }; - return TensorTrait::::new(*self.shape, result.span(), *self.extra); + return TensorTrait::::new(self.shape, result.span(), self.extra); } \ No newline at end of file diff --git a/src/operators/tensor/math/sqrt/sqrt_i8/fp8x23.cairo b/src/operators/tensor/math/sqrt/sqrt_i8/fp8x23.cairo index b83309dec..c37f548f9 100644 --- a/src/operators/tensor/math/sqrt/sqrt_i8/fp8x23.cairo +++ b/src/operators/tensor/math/sqrt/sqrt_i8/fp8x23.cairo @@ -10,25 +10,19 @@ use orion::operators::tensor::implementations::impl_tensor_fp::Tensor_fp; use orion::numbers::fixed_point::implementations::fp8x23::core::FP8x23Impl; -fn sqrt(self: @Tensor) -> Tensor { +fn sqrt(mut self: Tensor) -> Tensor { let mut result = ArrayTrait::new(); - let mut data = *self.data; loop { - let ele = *data.pop_front().unwrap(); - - if ele.sign == true { - let ele = FixedTrait::new_unscaled(ele.mag.into(), ele.sign); - result.append(FixedTrait::sqrt(ele)) - } else { - let ele = FixedTrait::new_unscaled(ele.mag.into(), ele.sign); - result.append(FixedTrait::sqrt(ele)) - } - - if (data.len() == 0) { - break (); + match self.data.pop_front() { + Option::Some(item) => { + result.append(FixedTrait::new_unscaled((*item.mag).into(), *item.sign).sqrt()); + }, + Option::None(_) => { + break; + } }; }; - return TensorTrait::::new(*self.shape, result.span(), *self.extra); + return TensorTrait::::new(self.shape, result.span(), self.extra); } diff --git a/src/operators/tensor/math/sqrt/sqrt_u32/core.cairo b/src/operators/tensor/math/sqrt/sqrt_u32/core.cairo index 67f8f5066..619ec3459 100644 --- a/src/operators/tensor/math/sqrt/sqrt_u32/core.cairo +++ b/src/operators/tensor/math/sqrt/sqrt_u32/core.cairo @@ -8,11 +8,11 @@ fn sqrt_u32(self: @Tensor) -> Option> { match *self.extra { Option::Some(extra_params) => match extra_params.fixed_point { Option::Some(fixed_point) => match fixed_point { - FixedImpl::FP8x23(()) => Option::Some(fp8x23::sqrt(self)), - FixedImpl::FP16x16(()) => Option::Some(fp16x16::sqrt(self)), + FixedImpl::FP8x23(()) => Option::Some(fp8x23::sqrt(*self)), + FixedImpl::FP16x16(()) => Option::Some(fp16x16::sqrt(*self)), }, - Option::None(_) => Option::Some(fp16x16::sqrt(self)), + Option::None(_) => Option::Some(fp16x16::sqrt(*self)), }, - Option::None(_) => Option::Some(fp16x16::sqrt(self)), + Option::None(_) => Option::Some(fp16x16::sqrt(*self)), } -} \ No newline at end of file +} diff --git a/src/operators/tensor/math/sqrt/sqrt_u32/fp16x16.cairo b/src/operators/tensor/math/sqrt/sqrt_u32/fp16x16.cairo index 88513a1ac..8ab07e5ad 100644 --- a/src/operators/tensor/math/sqrt/sqrt_u32/fp16x16.cairo +++ b/src/operators/tensor/math/sqrt/sqrt_u32/fp16x16.cairo @@ -10,19 +10,19 @@ use orion::operators::tensor::implementations::impl_tensor_fp::Tensor_fp; use orion::numbers::fixed_point::implementations::fp16x16::core::FP16x16Impl; -fn sqrt(self: @Tensor) -> Tensor { +fn sqrt(mut self: Tensor) -> Tensor { let mut result = ArrayTrait::new(); - let mut data = *self.data; loop { - let ele = FixedTrait::new_unscaled((*data.pop_front().unwrap()), false); - 
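Note on the pattern: every sinh and sqrt hunk in these commits applies the same mechanical rewrite — take the tensor by value rather than by snapshot, and drive the loop by matching on `Span::pop_front` instead of `unwrap()` plus a trailing `len() == 0` check, so span exhaustion is handled through the `Option` rather than a sentinel test. A minimal sketch of the idiom on a bare span (`double_all` is a hypothetical helper, not part of these patches):

```cairo
use array::{ArrayTrait, SpanTrait};
use option::OptionTrait;

// Hypothetical helper showing the pop_front/match iteration idiom
// that every hunk in these patches converges on.
fn double_all(mut data: Span<u32>) -> Array<u32> {
    let mut result = ArrayTrait::new();
    loop {
        // pop_front yields Option::Some(@item) while elements remain and
        // Option::None once the span is empty, so no len() check is needed.
        match data.pop_front() {
            Option::Some(item) => {
                result.append(*item * 2_u32);
            },
            Option::None(_) => {
                break;
            }
        };
    };
    result
}
```

The same rewrite also collapses the duplicated `if ele.sign` branches in the i32/i8 variants, since both arms built the identical `FixedTrait::new_unscaled(mag, sign)` value.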
- result.append(FixedTrait::sqrt(ele)); - - if (data.len() == 0) { - break (); + match self.data.pop_front() { + Option::Some(item) => { + result.append(FixedTrait::new_unscaled(*item, false).sqrt()); + }, + Option::None(_) => { + break; + } }; }; - return TensorTrait::::new(*self.shape, result.span(), *self.extra); + return TensorTrait::::new(self.shape, result.span(), self.extra); } diff --git a/src/operators/tensor/math/sqrt/sqrt_u32/fp8x23.cairo b/src/operators/tensor/math/sqrt/sqrt_u32/fp8x23.cairo index da54c827e..f37773768 100644 --- a/src/operators/tensor/math/sqrt/sqrt_u32/fp8x23.cairo +++ b/src/operators/tensor/math/sqrt/sqrt_u32/fp8x23.cairo @@ -10,19 +10,19 @@ use orion::operators::tensor::implementations::impl_tensor_fp::Tensor_fp; use orion::numbers::fixed_point::implementations::fp8x23::core::FP8x23Impl; -fn sqrt(self: @Tensor) -> Tensor { +fn sqrt(mut self: Tensor) -> Tensor { let mut result = ArrayTrait::new(); - let mut data = *self.data; loop { - let ele = FixedTrait::new_unscaled((*data.pop_front().unwrap()), false); - - result.append(FixedTrait::sqrt(ele)); - - if (data.len() == 0) { - break (); + match self.data.pop_front() { + Option::Some(item) => { + result.append(FixedTrait::new_unscaled(*item, false).sqrt()); + }, + Option::None(_) => { + break; + } }; }; - return TensorTrait::::new(*self.shape, result.span(), *self.extra); + return TensorTrait::::new(self.shape, result.span(), self.extra); } From cf4e8c9157e645756c694b35a37f6c167545cb5a Mon Sep 17 00:00:00 2001 From: raphaelDkhn Date: Thu, 24 Aug 2023 09:43:33 +0300 Subject: [PATCH 18/30] refactor tanh --- .../tensor/math/tanh/tanh_fp/core.cairo | 8 +++--- .../tensor/math/tanh/tanh_fp/fp16x16.cairo | 18 +++++++------ .../tensor/math/tanh/tanh_fp/fp8x23.cairo | 19 +++++++------- .../tensor/math/tanh/tanh_i32/core.cairo | 8 +++--- .../tensor/math/tanh/tanh_i32/fp16x16.cairo | 26 +++++++------------ .../tensor/math/tanh/tanh_i32/fp8x23.cairo | 24 +++++++---------- .../tensor/math/tanh/tanh_i8/core.cairo | 8 +++--- .../tensor/math/tanh/tanh_i8/fp16x16.cairo | 24 +++++++---------- .../tensor/math/tanh/tanh_i8/fp8x23.cairo | 24 +++++++---------- .../tensor/math/tanh/tanh_u32/core.cairo | 8 +++--- .../tensor/math/tanh/tanh_u32/fp16x16.cairo | 18 ++++++------- .../tensor/math/tanh/tanh_u32/fp8x23.cairo | 18 ++++++------- 12 files changed, 91 insertions(+), 112 deletions(-) diff --git a/src/operators/tensor/math/tanh/tanh_fp/core.cairo b/src/operators/tensor/math/tanh/tanh_fp/core.cairo index 45f5a1993..d3cef366b 100644 --- a/src/operators/tensor/math/tanh/tanh_fp/core.cairo +++ b/src/operators/tensor/math/tanh/tanh_fp/core.cairo @@ -8,11 +8,11 @@ fn tanh(self: @Tensor) -> Option> { match *self.extra { Option::Some(extra_params) => match extra_params.fixed_point { Option::Some(fixed_point) => match fixed_point { - FixedImpl::FP8x23(()) => Option::Some(fp8x23::tanh(self)), - FixedImpl::FP16x16(()) => Option::Some(fp16x16::tanh(self)), + FixedImpl::FP8x23(()) => Option::Some(fp8x23::tanh(*self)), + FixedImpl::FP16x16(()) => Option::Some(fp16x16::tanh(*self)), }, - Option::None(_) => Option::Some(fp16x16::tanh(self)), + Option::None(_) => Option::Some(fp16x16::tanh(*self)), }, - Option::None(_) => Option::Some(fp16x16::tanh(self)), + Option::None(_) => Option::Some(fp16x16::tanh(*self)), } } diff --git a/src/operators/tensor/math/tanh/tanh_fp/fp16x16.cairo b/src/operators/tensor/math/tanh/tanh_fp/fp16x16.cairo index 61684ca87..443d726f7 100644 --- a/src/operators/tensor/math/tanh/tanh_fp/fp16x16.cairo +++ 
b/src/operators/tensor/math/tanh/tanh_fp/fp16x16.cairo @@ -9,18 +9,20 @@ use orion::numbers::fixed_point::implementations::fp16x16::core::FP16x16Impl; /// Cf: TensorTrait::tanh docstring -fn tanh(self: @Tensor) -> Tensor { +fn tanh(mut self: Tensor) -> Tensor { let mut result = ArrayTrait::new(); - let mut data = *self.data; loop { - let ele = *data.pop_front().unwrap(); - result.append(FixedTrait::tanh(ele)); - - if (data.len() == 0) { - break (); + match self.data.pop_front() { + Option::Some(item) => { + result.append((*item).tanh()); + }, + Option::None(_) => { + break; + } }; }; - return TensorTrait::::new(*self.shape, result.span(), *self.extra); + return TensorTrait::::new(self.shape, result.span(), self.extra); } + diff --git a/src/operators/tensor/math/tanh/tanh_fp/fp8x23.cairo b/src/operators/tensor/math/tanh/tanh_fp/fp8x23.cairo index 3c053f59f..0ed37fe99 100644 --- a/src/operators/tensor/math/tanh/tanh_fp/fp8x23.cairo +++ b/src/operators/tensor/math/tanh/tanh_fp/fp8x23.cairo @@ -9,18 +9,19 @@ use orion::numbers::fixed_point::implementations::fp8x23::core::FP8x23Impl; /// Cf: TensorTrait::tanh docstring -fn tanh(self: @Tensor) -> Tensor { +fn tanh(mut self: Tensor) -> Tensor { let mut result = ArrayTrait::new(); - let mut data = *self.data; loop { - let ele = *data.pop_front().unwrap(); - result.append(FixedTrait::tanh(ele)); - - if (data.len() == 0) { - break (); + match self.data.pop_front() { + Option::Some(item) => { + result.append((*item).tanh()); + }, + Option::None(_) => { + break; + } }; }; - return TensorTrait::::new(*self.shape, result.span(), *self.extra); -} + return TensorTrait::::new(self.shape, result.span(), self.extra); +} \ No newline at end of file diff --git a/src/operators/tensor/math/tanh/tanh_i32/core.cairo b/src/operators/tensor/math/tanh/tanh_i32/core.cairo index 271bbbd7b..600c99386 100644 --- a/src/operators/tensor/math/tanh/tanh_i32/core.cairo +++ b/src/operators/tensor/math/tanh/tanh_i32/core.cairo @@ -9,11 +9,11 @@ fn tanh_i32(self: @Tensor) -> Option> { match *self.extra { Option::Some(extra_params) => match extra_params.fixed_point { Option::Some(fixed_point) => match fixed_point { - FixedImpl::FP8x23(()) => Option::Some(fp8x23::tanh(self)), - FixedImpl::FP16x16(()) => Option::Some(fp16x16::tanh(self)), + FixedImpl::FP8x23(()) => Option::Some(fp8x23::tanh(*self)), + FixedImpl::FP16x16(()) => Option::Some(fp16x16::tanh(*self)), }, - Option::None(_) => Option::Some(fp16x16::tanh(self)), + Option::None(_) => Option::Some(fp16x16::tanh(*self)), }, - Option::None(_) => Option::Some(fp16x16::tanh(self)), + Option::None(_) => Option::Some(fp16x16::tanh(*self)), } } diff --git a/src/operators/tensor/math/tanh/tanh_i32/fp16x16.cairo b/src/operators/tensor/math/tanh/tanh_i32/fp16x16.cairo index 4fa13aabe..cc6b03b40 100644 --- a/src/operators/tensor/math/tanh/tanh_i32/fp16x16.cairo +++ b/src/operators/tensor/math/tanh/tanh_i32/fp16x16.cairo @@ -11,25 +11,19 @@ use orion::numbers::fixed_point::implementations::fp16x16::core::FP16x16Impl; /// Cf: TensorTrait::tanh docstring -fn tanh(self: @Tensor) -> Tensor { +fn tanh(mut self: Tensor) -> Tensor { let mut result = ArrayTrait::new(); - let mut data = *self.data; loop { - let ele = *data.pop_front().unwrap(); - - if ele.sign == true { - let ele = FixedTrait::new_unscaled(ele.mag.into(), ele.sign); - result.append(FixedTrait::tanh(ele)) - } else { - let ele = FixedTrait::new_unscaled(ele.mag.into(), ele.sign); - result.append(FixedTrait::tanh(ele)) - } - - if (data.len() == 0) { - break (); + match 
self.data.pop_front() { + Option::Some(item) => { + result.append(FixedTrait::new_unscaled(*item.mag, *item.sign).tanh()); + }, + Option::None(_) => { + break; + } }; }; - return TensorTrait::::new(*self.shape, result.span(), *self.extra); -} + return TensorTrait::::new(self.shape, result.span(), self.extra); +} \ No newline at end of file diff --git a/src/operators/tensor/math/tanh/tanh_i32/fp8x23.cairo b/src/operators/tensor/math/tanh/tanh_i32/fp8x23.cairo index 27775f0f9..a2891fa74 100644 --- a/src/operators/tensor/math/tanh/tanh_i32/fp8x23.cairo +++ b/src/operators/tensor/math/tanh/tanh_i32/fp8x23.cairo @@ -11,25 +11,19 @@ use orion::numbers::fixed_point::implementations::fp8x23::core::FP8x23Impl; /// Cf: TensorTrait::tanh docstring -fn tanh(self: @Tensor) -> Tensor { +fn tanh(mut self: Tensor) -> Tensor { let mut result = ArrayTrait::new(); - let mut data = *self.data; loop { - let ele = *data.pop_front().unwrap(); - - if ele.sign == true { - let ele = FixedTrait::new_unscaled(ele.mag.into(), ele.sign); - result.append(FixedTrait::tanh(ele)) - } else { - let ele = FixedTrait::new_unscaled(ele.mag.into(), ele.sign); - result.append(FixedTrait::tanh(ele)) - } - - if (data.len() == 0) { - break (); + match self.data.pop_front() { + Option::Some(item) => { + result.append(FixedTrait::new_unscaled(*item.mag, *item.sign).tanh()); + }, + Option::None(_) => { + break; + } }; }; - return TensorTrait::::new(*self.shape, result.span(), *self.extra); + return TensorTrait::::new(self.shape, result.span(), self.extra); } diff --git a/src/operators/tensor/math/tanh/tanh_i8/core.cairo b/src/operators/tensor/math/tanh/tanh_i8/core.cairo index cf6f96834..b4ce797ff 100644 --- a/src/operators/tensor/math/tanh/tanh_i8/core.cairo +++ b/src/operators/tensor/math/tanh/tanh_i8/core.cairo @@ -9,11 +9,11 @@ fn tanh_i8(self: @Tensor) -> Option> { match *self.extra { Option::Some(extra_params) => match extra_params.fixed_point { Option::Some(fixed_point) => match fixed_point { - FixedImpl::FP8x23(()) => Option::Some(fp8x23::tanh(self)), - FixedImpl::FP16x16(()) => Option::Some(fp16x16::tanh(self)), + FixedImpl::FP8x23(()) => Option::Some(fp8x23::tanh(*self)), + FixedImpl::FP16x16(()) => Option::Some(fp16x16::tanh(*self)), }, - Option::None(_) => Option::Some(fp16x16::tanh(self)), + Option::None(_) => Option::Some(fp16x16::tanh(*self)), }, - Option::None(_) => Option::Some(fp16x16::tanh(self)), + Option::None(_) => Option::Some(fp16x16::tanh(*self)), } } diff --git a/src/operators/tensor/math/tanh/tanh_i8/fp16x16.cairo b/src/operators/tensor/math/tanh/tanh_i8/fp16x16.cairo index ffce379b8..668de1f85 100644 --- a/src/operators/tensor/math/tanh/tanh_i8/fp16x16.cairo +++ b/src/operators/tensor/math/tanh/tanh_i8/fp16x16.cairo @@ -11,25 +11,19 @@ use orion::numbers::fixed_point::implementations::fp16x16::core::FP16x16Impl; /// Cf: TensorTrait::tanh docstring -fn tanh(self: @Tensor) -> Tensor { +fn tanh(mut self: Tensor) -> Tensor { let mut result = ArrayTrait::new(); - let mut data = *self.data; loop { - let ele = *data.pop_front().unwrap(); - - if ele.sign == true { - let ele = FixedTrait::new_unscaled(ele.mag.into(), ele.sign); - result.append(FixedTrait::tanh(ele)) - } else { - let ele = FixedTrait::new_unscaled(ele.mag.into(), ele.sign); - result.append(FixedTrait::tanh(ele)) - } - - if (data.len() == 0) { - break (); + match self.data.pop_front() { + Option::Some(item) => { + result.append(FixedTrait::new_unscaled((*item.mag).into(), *item.sign).tanh()); + }, + Option::None(_) => { + break; + } }; }; - return 
TensorTrait::::new(*self.shape, result.span(), *self.extra); + return TensorTrait::::new(self.shape, result.span(), self.extra); } diff --git a/src/operators/tensor/math/tanh/tanh_i8/fp8x23.cairo b/src/operators/tensor/math/tanh/tanh_i8/fp8x23.cairo index 8aa7487cf..f60adb4e7 100644 --- a/src/operators/tensor/math/tanh/tanh_i8/fp8x23.cairo +++ b/src/operators/tensor/math/tanh/tanh_i8/fp8x23.cairo @@ -11,25 +11,19 @@ use orion::numbers::fixed_point::implementations::fp8x23::core::FP8x23Impl; /// Cf: TensorTrait::tanh docstring -fn tanh(self: @Tensor) -> Tensor { +fn tanh(mut self: Tensor) -> Tensor { let mut result = ArrayTrait::new(); - let mut data = *self.data; loop { - let ele = *data.pop_front().unwrap(); - - if ele.sign == true { - let ele = FixedTrait::new_unscaled(ele.mag.into(), ele.sign); - result.append(FixedTrait::tanh(ele)) - } else { - let ele = FixedTrait::new_unscaled(ele.mag.into(), ele.sign); - result.append(FixedTrait::tanh(ele)) - } - - if (data.len() == 0) { - break (); + match self.data.pop_front() { + Option::Some(item) => { + result.append(FixedTrait::new_unscaled((*item.mag).into(), *item.sign).tanh()); + }, + Option::None(_) => { + break; + } }; }; - return TensorTrait::::new(*self.shape, result.span(), *self.extra); + return TensorTrait::::new(self.shape, result.span(), self.extra); } diff --git a/src/operators/tensor/math/tanh/tanh_u32/core.cairo b/src/operators/tensor/math/tanh/tanh_u32/core.cairo index dc5e39c0c..03b0655f1 100644 --- a/src/operators/tensor/math/tanh/tanh_u32/core.cairo +++ b/src/operators/tensor/math/tanh/tanh_u32/core.cairo @@ -9,11 +9,11 @@ fn tanh_u32(self: @Tensor) -> Option> { match *self.extra { Option::Some(extra_params) => match extra_params.fixed_point { Option::Some(fixed_point) => match fixed_point { - FixedImpl::FP8x23(()) => Option::Some(fp8x23::tanh(self)), - FixedImpl::FP16x16(()) => Option::Some(fp16x16::tanh(self)), + FixedImpl::FP8x23(()) => Option::Some(fp8x23::tanh(*self)), + FixedImpl::FP16x16(()) => Option::Some(fp16x16::tanh(*self)), }, - Option::None(_) => Option::Some(fp16x16::tanh(self)), + Option::None(_) => Option::Some(fp16x16::tanh(*self)), }, - Option::None(_) => Option::Some(fp16x16::tanh(self)), + Option::None(_) => Option::Some(fp16x16::tanh(*self)), } } diff --git a/src/operators/tensor/math/tanh/tanh_u32/fp16x16.cairo b/src/operators/tensor/math/tanh/tanh_u32/fp16x16.cairo index 780401a30..500ab9db8 100644 --- a/src/operators/tensor/math/tanh/tanh_u32/fp16x16.cairo +++ b/src/operators/tensor/math/tanh/tanh_u32/fp16x16.cairo @@ -11,19 +11,19 @@ use orion::numbers::fixed_point::implementations::fp16x16::core::FP16x16Impl; /// Cf: TensorTrait::tanh docstring -fn tanh(self: @Tensor) -> Tensor { +fn tanh(mut self: Tensor) -> Tensor { let mut result = ArrayTrait::new(); - let mut data = *self.data; loop { - let ele = FixedTrait::new_unscaled((*data.pop_front().unwrap()), false); - - result.append(FixedTrait::tanh(ele)); - - if (data.len() == 0) { - break (); + match self.data.pop_front() { + Option::Some(item) => { + result.append(FixedTrait::new_unscaled(*item, false).tanh()); + }, + Option::None(_) => { + break; + } }; }; - return TensorTrait::::new(*self.shape, result.span(), *self.extra); + return TensorTrait::::new(self.shape, result.span(), self.extra); } diff --git a/src/operators/tensor/math/tanh/tanh_u32/fp8x23.cairo b/src/operators/tensor/math/tanh/tanh_u32/fp8x23.cairo index 9d356af6e..cfd29bfe4 100644 --- a/src/operators/tensor/math/tanh/tanh_u32/fp8x23.cairo +++ 
b/src/operators/tensor/math/tanh/tanh_u32/fp8x23.cairo @@ -11,19 +11,19 @@ use orion::numbers::fixed_point::implementations::fp8x23::core::FP8x23Impl; /// Cf: TensorTrait::tanh docstring -fn tanh(self: @Tensor) -> Tensor { +fn tanh(mut self: Tensor) -> Tensor { let mut result = ArrayTrait::new(); - let mut data = *self.data; loop { - let ele = FixedTrait::new_unscaled((*data.pop_front().unwrap()), false); - - result.append(FixedTrait::tanh(ele)); - - if (data.len() == 0) { - break (); + match self.data.pop_front() { + Option::Some(item) => { + result.append(FixedTrait::new_unscaled(*item, false).tanh()); + }, + Option::None(_) => { + break; + } }; }; - return TensorTrait::::new(*self.shape, result.span(), *self.extra); + return TensorTrait::::new(self.shape, result.span(), self.extra); } From ddf67873e5a19e8f13b719a6d63226db85dff1dd Mon Sep 17 00:00:00 2001 From: raphaelDkhn Date: Thu, 24 Aug 2023 09:48:48 +0300 Subject: [PATCH 19/30] refactor leaky_relu --- .../leaky_relu/leaky_relu_fp/core.cairo | 8 ++--- .../leaky_relu/leaky_relu_fp/fp16x16.cairo | 28 ++++++++--------- .../leaky_relu/leaky_relu_fp/fp8x23.cairo | 28 ++++++++--------- .../leaky_relu/leaky_relu_i32/core.cairo | 8 ++--- .../leaky_relu/leaky_relu_i32/fp16x16.cairo | 30 +++++++++---------- .../leaky_relu/leaky_relu_i32/fp8x23.cairo | 30 +++++++++---------- .../leaky_relu/leaky_relu_i8/core.cairo | 8 ++--- .../leaky_relu/leaky_relu_i8/fp16x16.cairo | 30 +++++++++---------- .../leaky_relu/leaky_relu_i8/fp8x23.cairo | 30 +++++++++---------- 9 files changed, 100 insertions(+), 100 deletions(-) diff --git a/src/operators/nn/functional/leaky_relu/leaky_relu_fp/core.cairo b/src/operators/nn/functional/leaky_relu/leaky_relu_fp/core.cairo index 5645c43b8..70b3b1dfb 100644 --- a/src/operators/nn/functional/leaky_relu/leaky_relu_fp/core.cairo +++ b/src/operators/nn/functional/leaky_relu/leaky_relu_fp/core.cairo @@ -7,12 +7,12 @@ fn leaky_relu_fp(z: @Tensor, alpha: @FixedType, ) -> Option:: match extra_params.fixed_point { Option::Some(fixed_point) => match fixed_point { - FixedImpl::FP8x23(()) => Option::Some(fp8x23::leaky_relu(z, alpha)), - FixedImpl::FP16x16(()) => Option::Some(fp16x16::leaky_relu(z, alpha)), + FixedImpl::FP8x23(()) => Option::Some(fp8x23::leaky_relu(*z, alpha)), + FixedImpl::FP16x16(()) => Option::Some(fp16x16::leaky_relu(*z, alpha)), }, - Option::None(_) => Option::Some((fp16x16::leaky_relu(z, alpha))), + Option::None(_) => Option::Some((fp16x16::leaky_relu(*z, alpha))), }, - Option::None(_) => Option::Some((fp16x16::leaky_relu(z, alpha))), + Option::None(_) => Option::Some((fp16x16::leaky_relu(*z, alpha))), } } diff --git a/src/operators/nn/functional/leaky_relu/leaky_relu_fp/fp16x16.cairo b/src/operators/nn/functional/leaky_relu/leaky_relu_fp/fp16x16.cairo index a0ec28b35..4fa4941a8 100644 --- a/src/operators/nn/functional/leaky_relu/leaky_relu_fp/fp16x16.cairo +++ b/src/operators/nn/functional/leaky_relu/leaky_relu_fp/fp16x16.cairo @@ -12,25 +12,25 @@ use orion::operators::tensor::core::{Tensor, TensorTrait}; /// Cf: NNTrait::leaky_relu docstring -fn leaky_relu(z: @Tensor, alpha: @FixedType) -> Tensor { +fn leaky_relu(mut z: Tensor, alpha: @FixedType) -> Tensor { assert(*alpha.mag < ONE, 'alpha must be less than 1_fp'); let mut data_result = ArrayTrait::::new(); - let mut data = *z.data; - loop { - if data.len() == 0 { - break (); - }; - - let current_index = *data.pop_front().unwrap(); - if (current_index >= FixedType { mag: 0, sign: false }) { - data_result.append(current_index); - } else { - 
data_result.append(current_index * *alpha); + loop { + match z.data.pop_front() { + Option::Some(item) => { + if (*item >= FixedType { mag: 0, sign: false }) { + data_result.append(*item); + } else { + data_result.append(*item * *alpha); + }; + }, + Option::None(_) => { + break; + } }; }; - return TensorTrait::new(*z.shape, data_result.span(), *z.extra); + return TensorTrait::new(z.shape, data_result.span(), z.extra); } - diff --git a/src/operators/nn/functional/leaky_relu/leaky_relu_fp/fp8x23.cairo b/src/operators/nn/functional/leaky_relu/leaky_relu_fp/fp8x23.cairo index 84b6cc34b..8bc99f415 100644 --- a/src/operators/nn/functional/leaky_relu/leaky_relu_fp/fp8x23.cairo +++ b/src/operators/nn/functional/leaky_relu/leaky_relu_fp/fp8x23.cairo @@ -12,25 +12,25 @@ use orion::operators::tensor::core::{Tensor, TensorTrait}; /// Cf: NNTrait::leaky_relu docstring -fn leaky_relu(z: @Tensor, alpha: @FixedType) -> Tensor { +fn leaky_relu(mut z: Tensor, alpha: @FixedType) -> Tensor { assert(*alpha.mag < ONE, 'alpha must be less than 1_fp'); let mut data_result = ArrayTrait::::new(); - let mut data = *z.data; - loop { - if data.len() == 0 { - break (); - }; - - let current_index = *data.pop_front().unwrap(); - if (current_index >= FixedType { mag: 0, sign: false }) { - data_result.append(current_index); - } else { - data_result.append(current_index * *alpha); + loop { + match z.data.pop_front() { + Option::Some(item) => { + if (*item >= FixedType { mag: 0, sign: false }) { + data_result.append(*item); + } else { + data_result.append(*item * *alpha); + }; + }, + Option::None(_) => { + break; + } }; }; - return TensorTrait::new(*z.shape, data_result.span(), *z.extra); + return TensorTrait::new(z.shape, data_result.span(), z.extra); } - diff --git a/src/operators/nn/functional/leaky_relu/leaky_relu_i32/core.cairo b/src/operators/nn/functional/leaky_relu/leaky_relu_i32/core.cairo index 3209a9b0d..8b0148183 100644 --- a/src/operators/nn/functional/leaky_relu/leaky_relu_i32/core.cairo +++ b/src/operators/nn/functional/leaky_relu/leaky_relu_i32/core.cairo @@ -9,12 +9,12 @@ fn leaky_relu_i32(z: @Tensor, alpha: @FixedType, ) -> Option:: match extra_params.fixed_point { Option::Some(fixed_point) => match fixed_point { - FixedImpl::FP8x23(()) => Option::Some(fp8x23::leaky_relu(z, alpha)), - FixedImpl::FP16x16(()) => Option::Some(fp16x16::leaky_relu(z, alpha)), + FixedImpl::FP8x23(()) => Option::Some(fp8x23::leaky_relu(*z, alpha)), + FixedImpl::FP16x16(()) => Option::Some(fp16x16::leaky_relu(*z, alpha)), }, - Option::None(_) => Option::Some((fp16x16::leaky_relu(z, alpha))), + Option::None(_) => Option::Some((fp16x16::leaky_relu(*z, alpha))), }, - Option::None(_) => Option::Some((fp16x16::leaky_relu(z, alpha))), + Option::None(_) => Option::Some((fp16x16::leaky_relu(*z, alpha))), } } diff --git a/src/operators/nn/functional/leaky_relu/leaky_relu_i32/fp16x16.cairo b/src/operators/nn/functional/leaky_relu/leaky_relu_i32/fp16x16.cairo index f7237e993..d467a0d66 100644 --- a/src/operators/nn/functional/leaky_relu/leaky_relu_i32/fp16x16.cairo +++ b/src/operators/nn/functional/leaky_relu/leaky_relu_i32/fp16x16.cairo @@ -12,27 +12,27 @@ use orion::operators::tensor::core::{Tensor, TensorTrait}; /// Cf: NNTrait::leaky_relu docstring -fn leaky_relu(z: @Tensor, alpha: @FixedType) -> Tensor { +fn leaky_relu(mut z: Tensor, alpha: @FixedType) -> Tensor { assert(*alpha.mag < ONE, 'alpha must be less than 1_fp'); let mut data_result = ArrayTrait::::new(); - let mut data = *z.data; - loop { - if data.len() == 0 { - break (); - }; 
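Unlike sinh/sqrt/tanh, the leaky_relu hunks keep a real branch inside the match arm, because the two cases genuinely differ: non-negative elements pass through, negative ones are scaled by `alpha`. Per element the mapping is the following sketch (assuming the same imports as the patched modules, with the fixed-point `Mul` and the `i32` comparison impls in scope; `leaky_map` is a hypothetical name):

```cairo
use traits::Into;
use orion::numbers::signed_integer::i32::i32;
use orion::numbers::fixed_point::core::{FixedType, FixedTrait};

// f(x) = x         for x >= 0
// f(x) = alpha * x for x <  0, computed in fixed point.
// alpha is assumed to satisfy alpha.mag < ONE, which the surrounding
// assert enforces before the loop runs.
fn leaky_map(item: i32, alpha: FixedType) -> FixedType {
    let fp_item = FixedTrait::new_unscaled(item.mag.into(), item.sign);
    if item >= (i32 { mag: 0, sign: false }) {
        fp_item
    } else {
        fp_item * alpha
    }
}
```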
- let current_index = *data.pop_front().unwrap(); - let fp_current_index = FixedTrait::new_unscaled( - current_index.mag.into(), current_index.sign - ); - if (current_index >= i32 { mag: 0, sign: false }) { - data_result.append(fp_current_index); - } else { - data_result.append(fp_current_index * *alpha); + loop { + match z.data.pop_front() { + Option::Some(item) => { + let fp_current_index = FixedTrait::new_unscaled((*item.mag).into(), (*item).sign); + if (*item >= i32 { mag: 0, sign: false }) { + data_result.append(fp_current_index); + } else { + data_result.append(fp_current_index * *alpha); + }; + }, + Option::None(_) => { + break; + } }; }; - return TensorTrait::new(*z.shape, data_result.span(), *z.extra); + return TensorTrait::new(z.shape, data_result.span(), z.extra); } diff --git a/src/operators/nn/functional/leaky_relu/leaky_relu_i32/fp8x23.cairo b/src/operators/nn/functional/leaky_relu/leaky_relu_i32/fp8x23.cairo index 65fd0dbbe..97ac7ad82 100644 --- a/src/operators/nn/functional/leaky_relu/leaky_relu_i32/fp8x23.cairo +++ b/src/operators/nn/functional/leaky_relu/leaky_relu_i32/fp8x23.cairo @@ -11,27 +11,27 @@ use orion::operators::tensor::core::{Tensor, TensorTrait}; /// Cf: NNTrait::leaky_relu docstring -fn leaky_relu(z: @Tensor, alpha: @FixedType) -> Tensor { +fn leaky_relu(mut z: Tensor, alpha: @FixedType) -> Tensor { assert(*alpha.mag < ONE, 'alpha must be less than 1_fp'); let mut data_result = ArrayTrait::::new(); - let mut data = *z.data; - loop { - if data.len() == 0 { - break (); - }; - let current_index = *data.pop_front().unwrap(); - let fp_current_index = FixedTrait::new_unscaled( - current_index.mag.into(), current_index.sign - ); - if (current_index >= i32 { mag: 0, sign: false }) { - data_result.append(fp_current_index); - } else { - data_result.append(fp_current_index * *alpha); + loop { + match z.data.pop_front() { + Option::Some(item) => { + let fp_current_index = FixedTrait::new_unscaled((*item.mag).into(), (*item).sign); + if (*item >= i32 { mag: 0, sign: false }) { + data_result.append(fp_current_index); + } else { + data_result.append(fp_current_index * *alpha); + }; + }, + Option::None(_) => { + break; + } }; }; - return TensorTrait::new(*z.shape, data_result.span(), *z.extra); + return TensorTrait::new(z.shape, data_result.span(), z.extra); } diff --git a/src/operators/nn/functional/leaky_relu/leaky_relu_i8/core.cairo b/src/operators/nn/functional/leaky_relu/leaky_relu_i8/core.cairo index 3a8e0ccfc..55a8b5ea9 100644 --- a/src/operators/nn/functional/leaky_relu/leaky_relu_i8/core.cairo +++ b/src/operators/nn/functional/leaky_relu/leaky_relu_i8/core.cairo @@ -9,12 +9,12 @@ fn leaky_relu_i8(z: @Tensor, alpha: @FixedType) -> Option:: match extra_params.fixed_point { Option::Some(fixed_point) => match fixed_point { - FixedImpl::FP8x23(()) => Option::Some(fp8x23::leaky_relu(z, alpha)), - FixedImpl::FP16x16(()) => Option::Some(fp16x16::leaky_relu(z, alpha)), + FixedImpl::FP8x23(()) => Option::Some(fp8x23::leaky_relu(*z, alpha)), + FixedImpl::FP16x16(()) => Option::Some(fp16x16::leaky_relu(*z, alpha)), }, - Option::None(_) => Option::Some((fp16x16::leaky_relu(z, alpha))), + Option::None(_) => Option::Some((fp16x16::leaky_relu(*z, alpha))), }, - Option::None(_) => Option::Some((fp16x16::leaky_relu(z, alpha))), + Option::None(_) => Option::Some((fp16x16::leaky_relu(*z, alpha))), } } diff --git a/src/operators/nn/functional/leaky_relu/leaky_relu_i8/fp16x16.cairo b/src/operators/nn/functional/leaky_relu/leaky_relu_i8/fp16x16.cairo index 5454ce845..dd81d8af8 100644 --- 
a/src/operators/nn/functional/leaky_relu/leaky_relu_i8/fp16x16.cairo +++ b/src/operators/nn/functional/leaky_relu/leaky_relu_i8/fp16x16.cairo @@ -12,27 +12,27 @@ use orion::operators::tensor::core::{Tensor, TensorTrait}; /// Cf: NNTrait::leaky_relu docstring -fn leaky_relu(z: @Tensor, alpha: @FixedType) -> Tensor { +fn leaky_relu(mut z: Tensor, alpha: @FixedType) -> Tensor { assert(*alpha.mag < ONE, 'alpha must be less than 1_fp'); let mut data_result = ArrayTrait::::new(); - let mut data = *z.data; - loop { - if data.len() == 0 { - break (); - }; - let current_index = *data.pop_front().unwrap(); - let fp_current_index = FixedTrait::new_unscaled( - current_index.mag.into(), current_index.sign - ); - if (current_index >= i8 { mag: 0, sign: false }) { - data_result.append(fp_current_index); - } else { - data_result.append(fp_current_index * *alpha); + loop { + match z.data.pop_front() { + Option::Some(item) => { + let fp_current_index = FixedTrait::new_unscaled((*item.mag).into(), (*item).sign); + if (*item >= i8 { mag: 0, sign: false }) { + data_result.append(fp_current_index); + } else { + data_result.append(fp_current_index * *alpha); + }; + }, + Option::None(_) => { + break; + } }; }; - return TensorTrait::new(*z.shape, data_result.span(), *z.extra); + return TensorTrait::new(z.shape, data_result.span(), z.extra); } diff --git a/src/operators/nn/functional/leaky_relu/leaky_relu_i8/fp8x23.cairo b/src/operators/nn/functional/leaky_relu/leaky_relu_i8/fp8x23.cairo index dd4aeeedf..f36b68542 100644 --- a/src/operators/nn/functional/leaky_relu/leaky_relu_i8/fp8x23.cairo +++ b/src/operators/nn/functional/leaky_relu/leaky_relu_i8/fp8x23.cairo @@ -11,27 +11,27 @@ use orion::operators::tensor::core::{Tensor, TensorTrait}; /// Cf: NNTrait::leaky_relu docstring -fn leaky_relu(z: @Tensor, alpha: @FixedType) -> Tensor { +fn leaky_relu(mut z: Tensor, alpha: @FixedType) -> Tensor { assert(*alpha.mag < ONE, 'alpha must be less than 1_fp'); let mut data_result = ArrayTrait::::new(); - let mut data = *z.data; - loop { - if data.len() == 0 { - break (); - }; - let current_index = *data.pop_front().unwrap(); - let fp_current_index = FixedTrait::new_unscaled( - current_index.mag.into(), current_index.sign - ); - if (current_index >= i8 { mag: 0, sign: false }) { - data_result.append(fp_current_index); - } else { - data_result.append(fp_current_index * *alpha); + loop { + match z.data.pop_front() { + Option::Some(item) => { + let fp_current_index = FixedTrait::new_unscaled((*item.mag).into(), (*item).sign); + if (*item >= i8 { mag: 0, sign: false }) { + data_result.append(fp_current_index); + } else { + data_result.append(fp_current_index * *alpha); + }; + }, + Option::None(_) => { + break; + } }; }; - return TensorTrait::new(*z.shape, data_result.span(), *z.extra); + return TensorTrait::new(z.shape, data_result.span(), z.extra); } From ef1c133edf6e5d9d695b9105fc4a1b664bc0ad02 Mon Sep 17 00:00:00 2001 From: raphaelDkhn Date: Thu, 24 Aug 2023 09:53:34 +0300 Subject: [PATCH 20/30] refactor relu --- .../nn/functional/relu/relu_fp/core.cairo | 8 +++--- .../nn/functional/relu/relu_fp/fp16x16.cairo | 26 ++++++++++--------- .../nn/functional/relu/relu_fp/fp8x23.cairo | 25 +++++++++--------- .../nn/functional/relu/relu_i32.cairo | 25 +++++++++--------- .../nn/functional/relu/relu_i8.cairo | 26 ++++++++++--------- .../nn/implementations/impl_nn_i32.cairo | 2 +- .../nn/implementations/impl_nn_i8.cairo | 2 +- 7 files changed, 60 insertions(+), 54 deletions(-) diff --git 
a/src/operators/nn/functional/relu/relu_fp/core.cairo b/src/operators/nn/functional/relu/relu_fp/core.cairo index f2038c5a3..7dc3b9577 100644 --- a/src/operators/nn/functional/relu/relu_fp/core.cairo +++ b/src/operators/nn/functional/relu/relu_fp/core.cairo @@ -7,12 +7,12 @@ fn relu_fp(z: @Tensor<FixedType>) -> Option::<Tensor<FixedType>> { match *z.extra { Option::Some(extra_params) => match extra_params.fixed_point { Option::Some(fixed_point) => match fixed_point { - FixedImpl::FP8x23(()) => Option::Some(fp8x23::relu(z)), - FixedImpl::FP16x16(()) => Option::Some(fp16x16::relu(z)), + FixedImpl::FP8x23(()) => Option::Some(fp8x23::relu(*z)), + FixedImpl::FP16x16(()) => Option::Some(fp16x16::relu(*z)), }, - Option::None(_) => Option::Some((fp16x16::relu(z))), + Option::None(_) => Option::Some((fp16x16::relu(*z))), }, - Option::None(_) => Option::Some((fp16x16::relu(z))), + Option::None(_) => Option::Some((fp16x16::relu(*z))), } } diff --git a/src/operators/nn/functional/relu/relu_fp/fp16x16.cairo b/src/operators/nn/functional/relu/relu_fp/fp16x16.cairo index 0471a027e..b07d10af7 100644 --- a/src/operators/nn/functional/relu/relu_fp/fp16x16.cairo +++ b/src/operators/nn/functional/relu/relu_fp/fp16x16.cairo @@ -8,22 +8,24 @@ use orion::numbers::fixed_point::core::FixedType; use orion::numbers::fixed_point::implementations::fp16x16::core::FP16x16PartialOrd; /// Cf: NNTrait::relu docstring -fn relu(z: @Tensor<FixedType>) -> Tensor<FixedType> { +fn relu(mut z: Tensor<FixedType>) -> Tensor<FixedType> { let mut data_result = ArrayTrait::<FixedType>::new(); - let mut data = *z.data; loop { - if data.len() == 0 { - break (); - }; - - let current_index = *data.pop_front().unwrap(); - if (current_index < FixedType { mag: 0, sign: false }) { - data_result.append(FixedType { mag: 0, sign: false }); - } else { - data_result.append(current_index); + match z.data.pop_front() { + Option::Some(item) => { + if (*item < FixedType { mag: 0, sign: false }) { + data_result.append(FixedType { mag: 0, sign: false }); + } else { + data_result.append(*item); + }; + }, + Option::None(_) => { + break; + } }; }; - return TensorTrait::<FixedType>::new(*z.shape, data_result.span(), *z.extra); + return TensorTrait::<FixedType>::new(z.shape, data_result.span(), z.extra); } + diff --git a/src/operators/nn/functional/relu/relu_fp/fp8x23.cairo b/src/operators/nn/functional/relu/relu_fp/fp8x23.cairo index fcfd1c975..83158c661 100644 --- a/src/operators/nn/functional/relu/relu_fp/fp8x23.cairo +++ b/src/operators/nn/functional/relu/relu_fp/fp8x23.cairo @@ -8,22 +8,23 @@ use orion::numbers::fixed_point::core::FixedType; use orion::numbers::fixed_point::implementations::fp8x23::core::FP8x23PartialOrd; /// Cf: NNTrait::relu docstring -fn relu(z: @Tensor<FixedType>) -> Tensor<FixedType> { +fn relu(mut z: Tensor<FixedType>) -> Tensor<FixedType> { let mut data_result = ArrayTrait::<FixedType>::new(); - let mut data = *z.data; loop { - if data.len() == 0 { - break (); - }; - - let current_index = *data.pop_front().unwrap(); - if (current_index < FixedType { mag: 0, sign: false }) { - data_result.append(FixedType { mag: 0, sign: false }); - } else { - data_result.append(current_index); + match z.data.pop_front() { + Option::Some(item) => { + if (*item < FixedType { mag: 0, sign: false }) { + data_result.append(FixedType { mag: 0, sign: false }); + } else { + data_result.append(*item); + }; + }, + Option::None(_) => { + break; + } }; }; - return TensorTrait::<FixedType>::new(*z.shape, data_result.span(), *z.extra); + return TensorTrait::<FixedType>::new(z.shape, data_result.span(), z.extra); } diff --git a/src/operators/nn/functional/relu/relu_i32.cairo b/src/operators/nn/functional/relu/relu_i32.cairo index
168691115..c0bfefc3f 100644 --- a/src/operators/nn/functional/relu/relu_i32.cairo +++ b/src/operators/nn/functional/relu/relu_i32.cairo @@ -8,22 +8,23 @@ use orion::operators::tensor::implementations::impl_tensor_i32::Tensor_i32; /// Cf: NNTrait::relu docstring -fn relu_i32(z: @Tensor<i32>) -> Tensor<i32> { +fn relu_i32(mut z: Tensor<i32>) -> Tensor<i32> { let mut data_result = ArrayTrait::<i32>::new(); - let mut data = *z.data; loop { - if data.len() == 0 { - break (); - }; - - let current_index = *data.pop_front().unwrap(); - if current_index < IntegerTrait::new(0, false) { - data_result.append(IntegerTrait::new(0, false)); - } else { - data_result.append(current_index); + match z.data.pop_front() { + Option::Some(item) => { + if (*item) < IntegerTrait::new(0, false) { + data_result.append(IntegerTrait::new(0, false)); + } else { + data_result.append(*item); + }; + }, + Option::None(_) => { + break; + } }; }; - return TensorTrait::<i32>::new(*z.shape, data_result.span(), *z.extra); + return TensorTrait::<i32>::new(z.shape, data_result.span(), z.extra); } diff --git a/src/operators/nn/functional/relu/relu_i8.cairo b/src/operators/nn/functional/relu/relu_i8.cairo index e13f65366..c4de1a447 100644 --- a/src/operators/nn/functional/relu/relu_i8.cairo +++ b/src/operators/nn/functional/relu/relu_i8.cairo @@ -8,22 +8,24 @@ use orion::operators::tensor::implementations::impl_tensor_i8::Tensor_i8; /// Cf: NNTrait::relu docstring -fn relu_i8(z: @Tensor<i8>) -> Tensor<i8> { +fn relu_i8(mut z: Tensor<i8>) -> Tensor<i8> { let mut data_result = ArrayTrait::<i8>::new(); - let mut data = *z.data; loop { - if data.len() == 0 { - break (); - }; - - let current_index = *data.pop_front().unwrap(); - if current_index < IntegerTrait::new(0, false) { - data_result.append(IntegerTrait::new(0, false)); - } else { - data_result.append(current_index); + match z.data.pop_front() { + Option::Some(item) => { + if (*item) < IntegerTrait::new(0, false) { + data_result.append(IntegerTrait::new(0, false)); + } else { + data_result.append(*item); + }; + }, + Option::None(_) => { + break; + } }; }; - return TensorTrait::<i8>::new(*z.shape, data_result.span(), *z.extra); + return TensorTrait::<i8>::new(z.shape, data_result.span(), z.extra); } + diff --git a/src/operators/nn/implementations/impl_nn_i32.cairo b/src/operators/nn/implementations/impl_nn_i32.cairo index ad470b3ac..e1e046592 100644 --- a/src/operators/nn/implementations/impl_nn_i32.cairo +++ b/src/operators/nn/implementations/impl_nn_i32.cairo @@ -15,7 +15,7 @@ use orion::numbers::fixed_point::core::{FixedType}; impl NN_i32 of NNTrait<i32> { fn relu(tensor: @Tensor<i32>) -> Tensor<i32> { - relu_i32(tensor) + relu_i32(*tensor) } fn sigmoid(tensor: @Tensor<i32>) -> Tensor<FixedType> { diff --git a/src/operators/nn/implementations/impl_nn_i8.cairo b/src/operators/nn/implementations/impl_nn_i8.cairo index 4cf78f43c..3c9deaa8d 100644 --- a/src/operators/nn/implementations/impl_nn_i8.cairo +++ b/src/operators/nn/implementations/impl_nn_i8.cairo @@ -15,7 +15,7 @@ use orion::numbers::fixed_point::core::{FixedType}; impl NN_i8 of NNTrait<i8> { fn relu(tensor: @Tensor<i8>) -> Tensor<i8> { - relu_i8(tensor) + relu_i8(*tensor) } fn sigmoid(tensor: @Tensor<i8>) -> Tensor<FixedType> { From 7551ecd2fe49969cf2814cad0b8c683e599f686e Mon Sep 17 00:00:00 2001 From: raphaelDkhn Date: Thu, 24 Aug 2023 10:22:17 +0300 Subject: [PATCH 21/30] refactor sigmoid --- .../functional/sigmoid/sigmoid_fp/core.cairo | 8 ++-- .../sigmoid/sigmoid_fp/fp16x16.cairo | 23 +++++++----- .../sigmoid/sigmoid_fp/fp8x23.cairo | 24 ++++++------ .../functional/sigmoid/sigmoid_i32/core.cairo | 8 ++--
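Note the companion change in the `impl_nn_*` hunks above: the public `NNTrait` methods still take a snapshot, and only the delegation desnaps it with `*tensor`, so caller code is untouched by this refactor. Desnapping is cheap because a `Tensor` is a small struct of `Span`s plus the `extra` params; a sketch of the call boundary, assuming `Tensor<i32>` is `Copy` as in Orion's core (`data_len` is a hypothetical helper):

```cairo
use array::SpanTrait;
use orion::numbers::signed_integer::i32::i32;
use orion::operators::tensor::core::Tensor;

// Desnapping copies only the Span headers (pointer + length) and the
// extra params; the underlying data buffer is shared, not cloned.
fn data_len(t: @Tensor<i32>) -> usize {
    let owned: Tensor<i32> = *t;
    owned.data.len()
}
```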
.../sigmoid/sigmoid_i32/fp16x16.cairo | 33 ++++++++++------- .../sigmoid/sigmoid_i32/fp8x23.cairo | 30 ++++++++------- .../functional/sigmoid/sigmoid_i8/core.cairo | 8 ++-- .../sigmoid/sigmoid_i8/fp16x16.cairo | 29 ++++++++------- .../sigmoid/sigmoid_i8/fp8x23.cairo | 30 ++++++++------- .../functional/sigmoid/sigmoid_u32/core.cairo | 8 ++-- .../sigmoid/sigmoid_u32/fp16x16.cairo | 37 ++++++++++--------- .../sigmoid/sigmoid_u32/fp8x23.cairo | 33 +++++++++-------- 12 files changed, 146 insertions(+), 125 deletions(-) diff --git a/src/operators/nn/functional/sigmoid/sigmoid_fp/core.cairo b/src/operators/nn/functional/sigmoid/sigmoid_fp/core.cairo index f8cf68bb5..1a354bdc4 100644 --- a/src/operators/nn/functional/sigmoid/sigmoid_fp/core.cairo +++ b/src/operators/nn/functional/sigmoid/sigmoid_fp/core.cairo @@ -8,12 +8,12 @@ fn sigmoid_fp(z: @Tensor) -> Option> { match *z.extra { Option::Some(extra_params) => match extra_params.fixed_point { Option::Some(fixed_point) => match fixed_point { - FixedImpl::FP8x23(()) => Option::Some(fp8x23::sigmoid_fp(z)), - FixedImpl::FP16x16(()) => Option::Some(fp16x16::sigmoid_fp(z)), + FixedImpl::FP8x23(()) => Option::Some(fp8x23::sigmoid_fp(*z)), + FixedImpl::FP16x16(()) => Option::Some(fp16x16::sigmoid_fp(*z)), }, - Option::None(_) => Option::Some(fp16x16::sigmoid_fp(z)), + Option::None(_) => Option::Some(fp16x16::sigmoid_fp(*z)), }, - Option::None(_) => Option::Some(fp16x16::sigmoid_fp(z)), + Option::None(_) => Option::Some(fp16x16::sigmoid_fp(*z)), } } diff --git a/src/operators/nn/functional/sigmoid/sigmoid_fp/fp16x16.cairo b/src/operators/nn/functional/sigmoid/sigmoid_fp/fp16x16.cairo index 066755ab0..a2d67638d 100644 --- a/src/operators/nn/functional/sigmoid/sigmoid_fp/fp16x16.cairo +++ b/src/operators/nn/functional/sigmoid/sigmoid_fp/fp16x16.cairo @@ -12,19 +12,22 @@ use orion::numbers::fixed_point::core::{FixedType, FixedTrait}; /// Cf: NNTrait::sigmoid docstring -fn sigmoid_fp(z: @Tensor) -> Tensor { +fn sigmoid_fp(mut z: Tensor) -> Tensor { let mut data_result = ArrayTrait::::new(); - let mut data = *z.data; - let fp_one = FixedTrait::new_unscaled(1, false); + loop { - if data.len() == 0 { - break (); + match z.data.pop_front() { + Option::Some(item) => { + let result = FixedTrait::ONE() + / (FixedTrait::ONE() + (*item * FixedType { mag: 65536, sign: true }).exp()); + data_result.append(result); + }, + Option::None(_) => { + break; + } }; - - let result = fp_one - / (fp_one + (*data.pop_front().unwrap() * FixedType { mag: 65536, sign: true }).exp()); - data_result.append(result); }; - return TensorTrait::::new(*z.shape, data_result.span(), *z.extra); + + return TensorTrait::::new(z.shape, data_result.span(), z.extra); } diff --git a/src/operators/nn/functional/sigmoid/sigmoid_fp/fp8x23.cairo b/src/operators/nn/functional/sigmoid/sigmoid_fp/fp8x23.cairo index cdd559740..2c73eeec7 100644 --- a/src/operators/nn/functional/sigmoid/sigmoid_fp/fp8x23.cairo +++ b/src/operators/nn/functional/sigmoid/sigmoid_fp/fp8x23.cairo @@ -12,20 +12,22 @@ use orion::numbers::fixed_point::core::{FixedType, FixedTrait}; /// Cf: NNTrait::sigmoid docstring -fn sigmoid_fp(z: @Tensor) -> Tensor { +fn sigmoid_fp(mut z: Tensor) -> Tensor { let mut data_result = ArrayTrait::::new(); - let mut data = *z.data; - let fp_one = FixedTrait::new_unscaled(1, false); + loop { - if data.len() == 0 { - break (); + match z.data.pop_front() { + Option::Some(item) => { + let result = FixedTrait::ONE() + / (FixedTrait::ONE() + (*item * FixedType { mag: 8388608, sign: true }).exp()); + 
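Besides the iteration change, these sigmoid hunks drop the locally built `fp_one` in favor of `FixedTrait::ONE()` and inline the negation constant: `FixedType { mag: 65536, sign: true }` is -1 in FP16x16 (one unit = 2^16) and `mag: 8388608` is -1 in FP8x23 (one unit = 2^23), so multiplying by it just flips the sign, and each element computes sigmoid(x) = 1 / (1 + exp(-x)). A quick sanity check under the same FP16x16 imports (the arithmetic operator impls are assumed in scope, as in the patched modules):

```cairo
use orion::numbers::fixed_point::core::{FixedType, FixedTrait};

// sigmoid(0) = 1 / (1 + exp(0)) = 0.5, i.e. mag 32768 in FP16x16.
fn sigmoid_at_zero() -> FixedType {
    let x = FixedTrait::new_unscaled(0, false);
    let minus_one = FixedType { mag: 65536, sign: true }; // -1 in FP16x16
    FixedTrait::ONE() / (FixedTrait::ONE() + (x * minus_one).exp())
}
```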
data_result.append(result); + }, + Option::None(_) => { + break; + } }; - - let result = fp_one - / (fp_one - + (*data.pop_front().unwrap() * FixedType { mag: 8388608, sign: true }).exp()); - data_result.append(result); }; - return TensorTrait::::new(*z.shape, data_result.span(), *z.extra); + + return TensorTrait::::new(z.shape, data_result.span(), z.extra); } diff --git a/src/operators/nn/functional/sigmoid/sigmoid_i32/core.cairo b/src/operators/nn/functional/sigmoid/sigmoid_i32/core.cairo index e80a8d439..8c5cb405c 100644 --- a/src/operators/nn/functional/sigmoid/sigmoid_i32/core.cairo +++ b/src/operators/nn/functional/sigmoid/sigmoid_i32/core.cairo @@ -9,12 +9,12 @@ fn sigmoid_i32(z: @Tensor) -> Option> { match *z.extra { Option::Some(extra_params) => match extra_params.fixed_point { Option::Some(fixed_point) => match fixed_point { - FixedImpl::FP8x23(()) => Option::Some(fp8x23::sigmoid_i32(z)), - FixedImpl::FP16x16(()) => Option::Some(fp16x16::sigmoid_i32(z)), + FixedImpl::FP8x23(()) => Option::Some(fp8x23::sigmoid_i32(*z)), + FixedImpl::FP16x16(()) => Option::Some(fp16x16::sigmoid_i32(*z)), }, - Option::None(_) => Option::Some(fp16x16::sigmoid_i32(z)), + Option::None(_) => Option::Some(fp16x16::sigmoid_i32(*z)), }, - Option::None(_) => Option::Some(fp16x16::sigmoid_i32(z)), + Option::None(_) => Option::Some(fp16x16::sigmoid_i32(*z)), } } diff --git a/src/operators/nn/functional/sigmoid/sigmoid_i32/fp16x16.cairo b/src/operators/nn/functional/sigmoid/sigmoid_i32/fp16x16.cairo index 6dfa17477..fdc720686 100644 --- a/src/operators/nn/functional/sigmoid/sigmoid_i32/fp16x16.cairo +++ b/src/operators/nn/functional/sigmoid/sigmoid_i32/fp16x16.cairo @@ -5,29 +5,34 @@ use array::SpanTrait; use option::OptionTrait; use orion::numbers::signed_integer::{integer_trait::IntegerTrait, i32::i32}; -use orion::numbers::fixed_point::implementations::fp16x16::core::{FP16x16Impl, FP16x16Add, FP16x16Div}; +use orion::numbers::fixed_point::implementations::fp16x16::core::{ + FP16x16Impl, FP16x16Add, FP16x16Div +}; use orion::operators::tensor::core::{Tensor, TensorTrait}; use orion::operators::tensor::implementations::impl_tensor_fp::Tensor_fp; use orion::numbers::fixed_point::core::{FixedType, FixedTrait}; /// Cf: NNTrait::sigmoid docstring -fn sigmoid_i32(z: @Tensor) -> Tensor { +fn sigmoid_i32(mut z: Tensor) -> Tensor { let mut data_result = ArrayTrait::::new(); - let mut data = *z.data; - let fp_one = FixedTrait::new_unscaled(1, false); + loop { - if data.len() == 0 { - break (); + match z.data.pop_front() { + Option::Some(item) => { + let current_item = *item * IntegerTrait::new(1, true); + let fp_current_index = FixedTrait::new_unscaled( + current_item.mag.into(), current_item.sign + ); + let result = FixedTrait::ONE() / (FixedTrait::ONE() + fp_current_index.exp()); + data_result.append(result); + }, + Option::None(_) => { + break; + } }; - - let current_index = *data.pop_front().unwrap() * IntegerTrait::new(1, true); - let fp_current_index = FixedTrait::new_unscaled( - current_index.mag.into(), current_index.sign - ); - let result = fp_one / (fp_one + fp_current_index.exp()); - data_result.append(result); }; - return TensorTrait::::new(*z.shape, data_result.span(), *z.extra); + + return TensorTrait::::new(z.shape, data_result.span(), z.extra); } diff --git a/src/operators/nn/functional/sigmoid/sigmoid_i32/fp8x23.cairo b/src/operators/nn/functional/sigmoid/sigmoid_i32/fp8x23.cairo index 0860df578..01aa8d1f0 100644 --- a/src/operators/nn/functional/sigmoid/sigmoid_i32/fp8x23.cairo +++ 
b/src/operators/nn/functional/sigmoid/sigmoid_i32/fp8x23.cairo @@ -9,24 +9,26 @@ use orion::operators::tensor::core::{Tensor, TensorTrait}; use orion::operators::tensor::implementations::impl_tensor_fp::Tensor_fp; use orion::numbers::fixed_point::core::{FixedType, FixedTrait}; - /// Cf: NNTrait::sigmoid docstring -fn sigmoid_i32(z: @Tensor) -> Tensor { +fn sigmoid_i32(mut z: Tensor) -> Tensor { let mut data_result = ArrayTrait::::new(); - let mut data = *z.data; - let fp_one = FixedTrait::new_unscaled(1, false); + loop { - if data.len() == 0 { - break (); + match z.data.pop_front() { + Option::Some(item) => { + let current_item = *item * IntegerTrait::new(1, true); + let fp_current_index = FixedTrait::new_unscaled( + current_item.mag.into(), current_item.sign + ); + let result = FixedTrait::ONE() / (FixedTrait::ONE() + fp_current_index.exp()); + data_result.append(result); + }, + Option::None(_) => { + break; + } }; - - let current_index = *data.pop_front().unwrap() * IntegerTrait::new(1, true); - let fp_current_index = FixedTrait::new_unscaled( - current_index.mag.into(), current_index.sign - ); - let result = fp_one / (fp_one + fp_current_index.exp()); - data_result.append(result); }; - return TensorTrait::::new(*z.shape, data_result.span(), *z.extra); + + return TensorTrait::::new(z.shape, data_result.span(), z.extra); } diff --git a/src/operators/nn/functional/sigmoid/sigmoid_i8/core.cairo b/src/operators/nn/functional/sigmoid/sigmoid_i8/core.cairo index 5b371cc73..6120b1095 100644 --- a/src/operators/nn/functional/sigmoid/sigmoid_i8/core.cairo +++ b/src/operators/nn/functional/sigmoid/sigmoid_i8/core.cairo @@ -9,12 +9,12 @@ fn sigmoid_i8(z: @Tensor) -> Option> { match *z.extra { Option::Some(extra_params) => match extra_params.fixed_point { Option::Some(fixed_point) => match fixed_point { - FixedImpl::FP8x23(()) => Option::Some(fp8x23::sigmoid_i8(z)), - FixedImpl::FP16x16(()) => Option::Some(fp16x16::sigmoid_i8(z)), + FixedImpl::FP8x23(()) => Option::Some(fp8x23::sigmoid_i8(*z)), + FixedImpl::FP16x16(()) => Option::Some(fp16x16::sigmoid_i8(*z)), }, - Option::None(_) => Option::Some(fp16x16::sigmoid_i8(z)), + Option::None(_) => Option::Some(fp16x16::sigmoid_i8(*z)), }, - Option::None(_) => Option::Some(fp16x16::sigmoid_i8(z)), + Option::None(_) => Option::Some(fp16x16::sigmoid_i8(*z)), } } diff --git a/src/operators/nn/functional/sigmoid/sigmoid_i8/fp16x16.cairo b/src/operators/nn/functional/sigmoid/sigmoid_i8/fp16x16.cairo index f9223e8f9..b3a5dbc06 100644 --- a/src/operators/nn/functional/sigmoid/sigmoid_i8/fp16x16.cairo +++ b/src/operators/nn/functional/sigmoid/sigmoid_i8/fp16x16.cairo @@ -12,22 +12,25 @@ use orion::numbers::fixed_point::core::{FixedType, FixedTrait}; /// Cf: NNTrait::sigmoid docstring -fn sigmoid_i8(z: @Tensor) -> Tensor { +fn sigmoid_i8(mut z: Tensor) -> Tensor { let mut data_result = ArrayTrait::::new(); - let mut data = *z.data; - let fp_one = FixedTrait::new_unscaled(1, false); + loop { - if data.len() == 0 { - break (); + match z.data.pop_front() { + Option::Some(item) => { + let current_item = *item * IntegerTrait::new(1, true); + let fp_current_index = FixedTrait::new_unscaled( + current_item.mag.into(), current_item.sign + ); + let result = FixedTrait::ONE() / (FixedTrait::ONE() + fp_current_index.exp()); + data_result.append(result); + }, + Option::None(_) => { + break; + } }; - - let current_index = *data.pop_front().unwrap() * IntegerTrait::new(1, true); - let fp_current_index = FixedTrait::new_unscaled( - current_index.mag.into(), current_index.sign - 
); - let result = fp_one / (fp_one + fp_current_index.exp()); - data_result.append(result); }; - return TensorTrait::::new(*z.shape, data_result.span(), *z.extra); + + return TensorTrait::::new(z.shape, data_result.span(), z.extra); } diff --git a/src/operators/nn/functional/sigmoid/sigmoid_i8/fp8x23.cairo b/src/operators/nn/functional/sigmoid/sigmoid_i8/fp8x23.cairo index 1e19f6c0f..5e0f1e3d6 100644 --- a/src/operators/nn/functional/sigmoid/sigmoid_i8/fp8x23.cairo +++ b/src/operators/nn/functional/sigmoid/sigmoid_i8/fp8x23.cairo @@ -11,22 +11,24 @@ use orion::numbers::fixed_point::core::{FixedType, FixedTrait}; /// Cf: NNTrait::sigmoid docstring -fn sigmoid_i8(z: @Tensor) -> Tensor { +fn sigmoid_i8(mut z: Tensor) -> Tensor { let mut data_result = ArrayTrait::::new(); - let mut data = *z.data; - let fp_one = FixedTrait::new_unscaled(1, false); + loop { - if data.len() == 0 { - break (); + match z.data.pop_front() { + Option::Some(item) => { + let current_item = *item * IntegerTrait::new(1, true); + let fp_current_index = FixedTrait::new_unscaled( + current_item.mag.into(), current_item.sign + ); + let result = FixedTrait::ONE() / (FixedTrait::ONE() + fp_current_index.exp()); + data_result.append(result); + }, + Option::None(_) => { + break; + } }; - - let current_index = *data.pop_front().unwrap() * IntegerTrait::new(1, true); - let fp_current_index = FixedTrait::new_unscaled( - current_index.mag.into(), current_index.sign - ); - let result = fp_one / (fp_one + fp_current_index.exp()); - data_result.append(result); }; - return TensorTrait::::new(*z.shape, data_result.span(), *z.extra); -} + return TensorTrait::::new(z.shape, data_result.span(), z.extra); +} diff --git a/src/operators/nn/functional/sigmoid/sigmoid_u32/core.cairo b/src/operators/nn/functional/sigmoid/sigmoid_u32/core.cairo index 5403fbe2d..f02791963 100644 --- a/src/operators/nn/functional/sigmoid/sigmoid_u32/core.cairo +++ b/src/operators/nn/functional/sigmoid/sigmoid_u32/core.cairo @@ -8,12 +8,12 @@ fn sigmoid_u32(z: @Tensor) -> Option> { match *z.extra { Option::Some(extra_params) => match extra_params.fixed_point { Option::Some(fixed_point) => match fixed_point { - FixedImpl::FP8x23(()) => Option::Some(fp8x23::sigmoid_u32(z)), - FixedImpl::FP16x16(()) => Option::Some(fp16x16::sigmoid_u32(z)), + FixedImpl::FP8x23(()) => Option::Some(fp8x23::sigmoid_u32(*z)), + FixedImpl::FP16x16(()) => Option::Some(fp16x16::sigmoid_u32(*z)), }, - Option::None(_) => Option::Some(fp16x16::sigmoid_u32(z)), + Option::None(_) => Option::Some(fp16x16::sigmoid_u32(*z)), }, - Option::None(_) => Option::Some(fp16x16::sigmoid_u32(z)), + Option::None(_) => Option::Some(fp16x16::sigmoid_u32(*z)), } } diff --git a/src/operators/nn/functional/sigmoid/sigmoid_u32/fp16x16.cairo b/src/operators/nn/functional/sigmoid/sigmoid_u32/fp16x16.cairo index 7853004b5..256d37458 100644 --- a/src/operators/nn/functional/sigmoid/sigmoid_u32/fp16x16.cairo +++ b/src/operators/nn/functional/sigmoid/sigmoid_u32/fp16x16.cairo @@ -3,32 +3,35 @@ use array::ArrayTrait; use array::SpanTrait; use option::OptionTrait; -use orion::numbers::fixed_point::implementations::fp16x16::core::{FP16x16Impl, FP16x16Add, FP16x16Div}; +use orion::numbers::fixed_point::implementations::fp16x16::core::{ + FP16x16Impl, FP16x16Add, FP16x16Div +}; use orion::operators::tensor::core::{Tensor, TensorTrait}; use orion::operators::tensor::implementations::impl_tensor_fp::Tensor_fp; use orion::numbers::fixed_point::core::{FixedType, FixedTrait}; /// Cf: NNTrait::sigmoid docstring -fn sigmoid_u32(z: 
@Tensor) -> Tensor { +fn sigmoid_u32(mut z: Tensor) -> Tensor { let mut data_result = ArrayTrait::::new(); - let mut data = *z.data; - let fp_one = FixedTrait::new_unscaled(1, false); - loop { - if data.len() == 0 { - break (); - }; - - let current_index = *data.pop_front().unwrap(); - let neg_fp_current_index = if current_index == 0 { - FixedTrait::new(0, false) - } else { - FixedTrait::new_unscaled(current_index.into(), true) + loop { + match z.data.pop_front() { + Option::Some(item) => { + let neg_fp_current_item = if *item == 0 { + FixedTrait::new(0, false) + } else { + FixedTrait::new_unscaled((*item).into(), true) + }; + let result = FixedTrait::ONE() / (FixedTrait::ONE() + neg_fp_current_item.exp()); + data_result.append(result); + }, + Option::None(_) => { + break; + } }; - let result = fp_one / (fp_one + neg_fp_current_index.exp()); - data_result.append(result); }; - return TensorTrait::::new(*z.shape, data_result.span(), *z.extra); + + return TensorTrait::::new(z.shape, data_result.span(), z.extra); } diff --git a/src/operators/nn/functional/sigmoid/sigmoid_u32/fp8x23.cairo b/src/operators/nn/functional/sigmoid/sigmoid_u32/fp8x23.cairo index 72ce07898..fae2e3d6e 100644 --- a/src/operators/nn/functional/sigmoid/sigmoid_u32/fp8x23.cairo +++ b/src/operators/nn/functional/sigmoid/sigmoid_u32/fp8x23.cairo @@ -10,25 +10,26 @@ use orion::numbers::fixed_point::core::{FixedType, FixedTrait}; /// Cf: NNTrait::sigmoid docstring -fn sigmoid_u32(z: @Tensor) -> Tensor { +fn sigmoid_u32(mut z: Tensor) -> Tensor { let mut data_result = ArrayTrait::::new(); - let mut data = *z.data; - let fp_one = FixedTrait::new_unscaled(1, false); - loop { - if data.len() == 0 { - break (); - }; - - let current_index = *data.pop_front().unwrap(); - let neg_fp_current_index = if current_index == 0 { - FixedTrait::new(0, false) - } else { - FixedTrait::new_unscaled(current_index.into(), true) + loop { + match z.data.pop_front() { + Option::Some(item) => { + let neg_fp_current_item = if *item == 0 { + FixedTrait::new(0, false) + } else { + FixedTrait::new_unscaled((*item).into(), true) + }; + let result = FixedTrait::ONE() / (FixedTrait::ONE() + neg_fp_current_item.exp()); + data_result.append(result); + }, + Option::None(_) => { + break; + } }; - let result = fp_one / (fp_one + neg_fp_current_index.exp()); - data_result.append(result); }; - return TensorTrait::::new(*z.shape, data_result.span(), *z.extra); + + return TensorTrait::::new(z.shape, data_result.span(), z.extra); } From 01b4a74b0bf69edde58292dbe292334808f37bb5 Mon Sep 17 00:00:00 2001 From: raphaelDkhn Date: Thu, 24 Aug 2023 10:27:51 +0300 Subject: [PATCH 22/30] refactor softplus --- .../softplus/softplus_fp/core.cairo | 8 ++--- .../softplus/softplus_fp/fp16x16.cairo | 26 +++++++++------- .../softplus/softplus_fp/fp8x23.cairo | 22 +++++++------- .../softplus/softplus_i32/core.cairo | 8 ++--- .../softplus/softplus_i32/fp16x16.cairo | 30 ++++++++++--------- .../softplus/softplus_i32/fp8x23.cairo | 26 ++++++++-------- .../softplus/softplus_i8/core.cairo | 8 ++--- .../softplus/softplus_i8/fp16x16.cairo | 26 ++++++++-------- .../softplus/softplus_i8/fp8x23.cairo | 26 ++++++++-------- .../softplus/softplus_u32/core.cairo | 8 ++--- .../softplus/softplus_u32/fp16x16.cairo | 28 +++++++++-------- .../softplus/softplus_u32/fp8x23.cairo | 25 ++++++++-------- 12 files changed, 127 insertions(+), 114 deletions(-) diff --git a/src/operators/nn/functional/softplus/softplus_fp/core.cairo b/src/operators/nn/functional/softplus/softplus_fp/core.cairo index 
753f02ed2..5bcf692d4 100644 --- a/src/operators/nn/functional/softplus/softplus_fp/core.cairo +++ b/src/operators/nn/functional/softplus/softplus_fp/core.cairo @@ -10,12 +10,12 @@ fn softplus_fp(z: @Tensor) -> Option::> { match *z.extra { Option::Some(extra_params) => match extra_params.fixed_point { Option::Some(fixed_point) => match fixed_point { - FixedImpl::FP8x23(()) => Option::Some(fp8x23::softplus(z)), - FixedImpl::FP16x16(()) => Option::Some(fp16x16::softplus(z)), + FixedImpl::FP8x23(()) => Option::Some(fp8x23::softplus(*z)), + FixedImpl::FP16x16(()) => Option::Some(fp16x16::softplus(*z)), }, - Option::None(_) => Option::Some(fp16x16::softplus(z)), + Option::None(_) => Option::Some(fp16x16::softplus((*z))) }, - Option::None(_) => Option::Some(fp16x16::softplus(z)), + Option::None(_) => Option::Some(fp16x16::softplus(*z)), } } diff --git a/src/operators/nn/functional/softplus/softplus_fp/fp16x16.cairo b/src/operators/nn/functional/softplus/softplus_fp/fp16x16.cairo index 230b9e4bf..786d95e4c 100644 --- a/src/operators/nn/functional/softplus/softplus_fp/fp16x16.cairo +++ b/src/operators/nn/functional/softplus/softplus_fp/fp16x16.cairo @@ -6,23 +6,27 @@ use option::OptionTrait; use orion::operators::tensor::core::{Tensor, TensorTrait}; use orion::operators::tensor::implementations::impl_tensor_fp::Tensor_fp; use orion::numbers::fixed_point::core::{FixedType, FixedTrait}; -use orion::numbers::fixed_point::implementations::fp16x16::core::{FP16x16Impl, FP16x16Add, FP16x16Div}; +use orion::numbers::fixed_point::implementations::fp16x16::core::{ + FP16x16Impl, FP16x16Add, FP16x16Div +}; /// Cf: NNTrait::softplus docstring -fn softplus(z: @Tensor) -> Tensor { +fn softplus(mut z: Tensor) -> Tensor { let mut data_result = ArrayTrait::::new(); - let mut data = *z.data; - let fp_one = FixedTrait::new_unscaled(1, false); + loop { - if data.len() == 0 { - break (); + match z.data.pop_front() { + Option::Some(item) => { + let result = (FixedTrait::ONE() + (*item).exp()).ln(); + data_result.append(result); + }, + Option::None(_) => { + break; + } }; - - let current_index = *data.pop_front().unwrap(); - let result = (fp_one + current_index.exp()).ln(); - data_result.append(result); }; - return TensorTrait::new(*z.shape, data_result.span(), *z.extra); + + return TensorTrait::new(z.shape, data_result.span(), z.extra); } diff --git a/src/operators/nn/functional/softplus/softplus_fp/fp8x23.cairo b/src/operators/nn/functional/softplus/softplus_fp/fp8x23.cairo index f45b916ec..230deeecc 100644 --- a/src/operators/nn/functional/softplus/softplus_fp/fp8x23.cairo +++ b/src/operators/nn/functional/softplus/softplus_fp/fp8x23.cairo @@ -10,19 +10,21 @@ use orion::numbers::fixed_point::implementations::fp8x23::core::{FP8x23Impl, FP8 /// Cf: NNTrait::softplus docstring -fn softplus(z: @Tensor) -> Tensor { +fn softplus(mut z: Tensor) -> Tensor { let mut data_result = ArrayTrait::::new(); - let mut data = *z.data; - let fp_one = FixedTrait::new_unscaled(1, false); + loop { - if data.len() == 0 { - break (); + match z.data.pop_front() { + Option::Some(item) => { + let result = (FixedTrait::ONE() + (*item).exp()).ln(); + data_result.append(result); + }, + Option::None(_) => { + break; + } }; - - let current_index = *data.pop_front().unwrap(); - let result = (fp_one + current_index.exp()).ln(); - data_result.append(result); }; - return TensorTrait::new(*z.shape, data_result.span(), *z.extra); + + return TensorTrait::new(z.shape, data_result.span(), z.extra); } diff --git 
a/src/operators/nn/functional/softplus/softplus_i32/core.cairo b/src/operators/nn/functional/softplus/softplus_i32/core.cairo index 98dd95553..cd852b8f7 100644 --- a/src/operators/nn/functional/softplus/softplus_i32/core.cairo +++ b/src/operators/nn/functional/softplus/softplus_i32/core.cairo @@ -11,12 +11,12 @@ fn softplus_i32(z: @Tensor) -> Option::> { match *z.extra { Option::Some(extra_params) => match extra_params.fixed_point { Option::Some(fixed_point) => match fixed_point { - FixedImpl::FP8x23(()) => Option::Some(fp8x23::softplus(z)), - FixedImpl::FP16x16(()) => Option::Some(fp16x16::softplus(z)), + FixedImpl::FP8x23(()) => Option::Some(fp8x23::softplus(*z)), + FixedImpl::FP16x16(()) => Option::Some(fp16x16::softplus(*z)), }, - Option::None(_) => Option::Some(fp16x16::softplus(z)), + Option::None(_) => Option::Some(fp16x16::softplus(*z)), }, - Option::None(_) => Option::Some(fp16x16::softplus(z)), + Option::None(_) => Option::Some(fp16x16::softplus(*z)), } } diff --git a/src/operators/nn/functional/softplus/softplus_i32/fp16x16.cairo b/src/operators/nn/functional/softplus/softplus_i32/fp16x16.cairo index b4317b604..827d090b8 100644 --- a/src/operators/nn/functional/softplus/softplus_i32/fp16x16.cairo +++ b/src/operators/nn/functional/softplus/softplus_i32/fp16x16.cairo @@ -7,26 +7,28 @@ use orion::numbers::signed_integer::i32::i32; use orion::operators::tensor::core::{Tensor, TensorTrait}; use orion::operators::tensor::implementations::impl_tensor_fp::Tensor_fp; use orion::numbers::fixed_point::core::{FixedType, FixedTrait}; -use orion::numbers::fixed_point::implementations::fp16x16::core::{FP16x16Impl, FP16x16Add, FP16x16Div}; +use orion::numbers::fixed_point::implementations::fp16x16::core::{ + FP16x16Impl, FP16x16Add, FP16x16Div +}; /// Cf: NNTrait::softplus docstring -fn softplus(z: @Tensor) -> Tensor { +fn softplus(mut z: Tensor) -> Tensor { let mut data_result = ArrayTrait::::new(); - let mut data = *z.data; - let fp_one = FixedTrait::new_unscaled(1, false); + loop { - if data.len() == 0 { - break (); + match z.data.pop_front() { + Option::Some(item) => { + let fp_current_item = FixedTrait::new_unscaled(*item.mag, *item.sign); + let result = (FixedTrait::ONE() + fp_current_item.exp()).ln(); + data_result.append(result); + }, + Option::None(_) => { + break; + } }; - - let current_index = *data.pop_front().unwrap(); - let fp_current_index: FixedType = FixedTrait::new_unscaled( - current_index.mag.into(), current_index.sign - ); - let result = (fp_one + fp_current_index.exp()).ln(); - data_result.append(result); }; - return TensorTrait::new(*z.shape, data_result.span(), *z.extra); + + return TensorTrait::new(z.shape, data_result.span(), z.extra); } diff --git a/src/operators/nn/functional/softplus/softplus_i32/fp8x23.cairo b/src/operators/nn/functional/softplus/softplus_i32/fp8x23.cairo index 0aa49bcca..ecdbbbebc 100644 --- a/src/operators/nn/functional/softplus/softplus_i32/fp8x23.cairo +++ b/src/operators/nn/functional/softplus/softplus_i32/fp8x23.cairo @@ -11,22 +11,22 @@ use orion::numbers::fixed_point::implementations::fp8x23::core::{FP8x23Impl, FP8 /// Cf: NNTrait::softplus docstring -fn softplus(z: @Tensor) -> Tensor { +fn softplus(mut z: Tensor) -> Tensor { let mut data_result = ArrayTrait::::new(); - let mut data = *z.data; - let fp_one = FixedTrait::new_unscaled(1, false); + loop { - if data.len() == 0 { - break (); + match z.data.pop_front() { + Option::Some(item) => { + let fp_current_item = FixedTrait::new_unscaled(*item.mag, *item.sign); + let result = 
(FixedTrait::ONE() + fp_current_item.exp()).ln(); + data_result.append(result); + }, + Option::None(_) => { + break; + } }; - - let current_index = *data.pop_front().unwrap(); - let fp_current_index: FixedType = FixedTrait::new_unscaled( - current_index.mag.into(), current_index.sign - ); - let result = (fp_one + fp_current_index.exp()).ln(); - data_result.append(result); }; - return TensorTrait::new(*z.shape, data_result.span(), *z.extra); + + return TensorTrait::new(z.shape, data_result.span(), z.extra); } diff --git a/src/operators/nn/functional/softplus/softplus_i8/core.cairo b/src/operators/nn/functional/softplus/softplus_i8/core.cairo index 15ad9c1cb..428df1100 100644 --- a/src/operators/nn/functional/softplus/softplus_i8/core.cairo +++ b/src/operators/nn/functional/softplus/softplus_i8/core.cairo @@ -11,12 +11,12 @@ fn softplus_i8(z: @Tensor) -> Option::> { match *z.extra { Option::Some(extra_params) => match extra_params.fixed_point { Option::Some(fixed_point) => match fixed_point { - FixedImpl::FP8x23(()) => Option::Some(fp8x23::softplus(z)), - FixedImpl::FP16x16(()) => Option::Some(fp16x16::softplus(z)), + FixedImpl::FP8x23(()) => Option::Some(fp8x23::softplus(*z)), + FixedImpl::FP16x16(()) => Option::Some(fp16x16::softplus(*z)), }, - Option::None(_) => Option::Some(fp16x16::softplus(z)), + Option::None(_) => Option::Some(fp16x16::softplus(*z)), }, - Option::None(_) => Option::Some(fp16x16::softplus(z)), + Option::None(_) => Option::Some(fp16x16::softplus(*z)), } } diff --git a/src/operators/nn/functional/softplus/softplus_i8/fp16x16.cairo b/src/operators/nn/functional/softplus/softplus_i8/fp16x16.cairo index a416e9d21..d91b2ac22 100644 --- a/src/operators/nn/functional/softplus/softplus_i8/fp16x16.cairo +++ b/src/operators/nn/functional/softplus/softplus_i8/fp16x16.cairo @@ -11,22 +11,22 @@ use orion::numbers::fixed_point::implementations::fp16x16::core::{FP16x16Impl, F /// Cf: NNTrait::softplus docstring -fn softplus(z: @Tensor) -> Tensor { +fn softplus(mut z: Tensor) -> Tensor { let mut data_result = ArrayTrait::::new(); - let mut data = *z.data; - let fp_one = FixedTrait::new_unscaled(1, false); + loop { - if data.len() == 0 { - break (); + match z.data.pop_front() { + Option::Some(item) => { + let fp_current_item = FixedTrait::new_unscaled((*item.mag).into(), *item.sign); + let result = (FixedTrait::ONE() + fp_current_item.exp()).ln(); + data_result.append(result); + }, + Option::None(_) => { + break; + } }; - - let current_index = *data.pop_front().unwrap(); - let fp_current_index: FixedType = FixedTrait::new_unscaled( - current_index.mag.into(), current_index.sign - ); - let result = (fp_one + fp_current_index.exp()).ln(); - data_result.append(result); }; - return TensorTrait::new(*z.shape, data_result.span(), *z.extra); + + return TensorTrait::new(z.shape, data_result.span(), z.extra); } diff --git a/src/operators/nn/functional/softplus/softplus_i8/fp8x23.cairo b/src/operators/nn/functional/softplus/softplus_i8/fp8x23.cairo index 1aad92934..006fb90d9 100644 --- a/src/operators/nn/functional/softplus/softplus_i8/fp8x23.cairo +++ b/src/operators/nn/functional/softplus/softplus_i8/fp8x23.cairo @@ -11,22 +11,22 @@ use orion::numbers::fixed_point::implementations::fp8x23::core::{FP8x23Impl, FP8 /// Cf: NNTrait::softplus docstring -fn softplus(z: @Tensor) -> Tensor { +fn softplus(mut z: Tensor) -> Tensor { let mut data_result = ArrayTrait::::new(); - let mut data = *z.data; - let fp_one = FixedTrait::new_unscaled(1, false); + loop { - if data.len() == 0 { - break (); + match 
z.data.pop_front() { + Option::Some(item) => { + let fp_current_item = FixedTrait::new_unscaled((*item.mag).into(), *item.sign); + let result = (FixedTrait::ONE() + fp_current_item.exp()).ln(); + data_result.append(result); + }, + Option::None(_) => { + break; + } }; - - let current_index = *data.pop_front().unwrap(); - let fp_current_index: FixedType = FixedTrait::new_unscaled( - current_index.mag.into(), current_index.sign - ); - let result = (fp_one + fp_current_index.exp()).ln(); - data_result.append(result); }; - return TensorTrait::new(*z.shape, data_result.span(), *z.extra); + + return TensorTrait::new(z.shape, data_result.span(), z.extra); } diff --git a/src/operators/nn/functional/softplus/softplus_u32/core.cairo b/src/operators/nn/functional/softplus/softplus_u32/core.cairo index 525e6e1ff..c79d5bb3b 100644 --- a/src/operators/nn/functional/softplus/softplus_u32/core.cairo +++ b/src/operators/nn/functional/softplus/softplus_u32/core.cairo @@ -10,12 +10,12 @@ fn softplus_u32(z: @Tensor) -> Option::> { match *z.extra { Option::Some(extra_params) => match extra_params.fixed_point { Option::Some(fixed_point) => match fixed_point { - FixedImpl::FP8x23(()) => Option::Some(fp8x23::softplus(z)), - FixedImpl::FP16x16(()) => Option::Some(fp16x16::softplus(z)), + FixedImpl::FP8x23(()) => Option::Some(fp8x23::softplus(*z)), + FixedImpl::FP16x16(()) => Option::Some(fp16x16::softplus(*z)), }, - Option::None(_) => Option::Some(fp16x16::softplus(z)), + Option::None(_) => Option::Some(fp16x16::softplus(*z)), }, - Option::None(_) => Option::Some(fp16x16::softplus(z)), + Option::None(_) => Option::Some(fp16x16::softplus(*z)), } } diff --git a/src/operators/nn/functional/softplus/softplus_u32/fp16x16.cairo b/src/operators/nn/functional/softplus/softplus_u32/fp16x16.cairo index 23b703af8..68bb7e3a7 100644 --- a/src/operators/nn/functional/softplus/softplus_u32/fp16x16.cairo +++ b/src/operators/nn/functional/softplus/softplus_u32/fp16x16.cairo @@ -7,24 +7,28 @@ use option::OptionTrait; use orion::operators::tensor::core::{Tensor, TensorTrait}; use orion::operators::tensor::implementations::impl_tensor_fp::Tensor_fp; use orion::numbers::fixed_point::core::{FixedType, FixedTrait}; -use orion::numbers::fixed_point::implementations::fp16x16::core::{FP16x16Impl, FP16x16Add, FP16x16Div}; +use orion::numbers::fixed_point::implementations::fp16x16::core::{ + FP16x16Impl, FP16x16Add, FP16x16Div +}; /// Cf: NNTrait::softplus docstring -fn softplus(z: @Tensor) -> Tensor { +fn softplus(mut z: Tensor) -> Tensor { let mut data_result = ArrayTrait::::new(); - let mut data = *z.data; - let fp_one = FixedTrait::new_unscaled(1, false); + loop { - if data.len() == 0 { - break (); + match z.data.pop_front() { + Option::Some(item) => { + let fp_current_item = FixedTrait::new_unscaled((*item).into(), false); + let result = (FixedTrait::ONE() + fp_current_item.exp()).ln(); + data_result.append(result); + }, + Option::None(_) => { + break; + } }; - - let current_index = *data.pop_front().unwrap(); - let fp_current_index = FixedTrait::new_unscaled(current_index.into(), false); - let result = (fp_one + fp_current_index.exp()).ln(); - data_result.append(result); }; - return TensorTrait::new(*z.shape, data_result.span(), *z.extra); + + return TensorTrait::new(z.shape, data_result.span(), z.extra); } diff --git a/src/operators/nn/functional/softplus/softplus_u32/fp8x23.cairo b/src/operators/nn/functional/softplus/softplus_u32/fp8x23.cairo index 1b0923e10..526e89ad7 100644 --- 
a/src/operators/nn/functional/softplus/softplus_u32/fp8x23.cairo +++ b/src/operators/nn/functional/softplus/softplus_u32/fp8x23.cairo @@ -11,20 +11,21 @@ use orion::numbers::fixed_point::implementations::fp8x23::core::{FP8x23Impl, FP8 /// Cf: NNTrait::softplus docstring -fn softplus(z: @Tensor) -> Tensor { +fn softplus(mut z: Tensor) -> Tensor { let mut data_result = ArrayTrait::::new(); - let mut data = *z.data; - let fp_one = FixedTrait::new_unscaled(1, false); + loop { - if data.len() == 0 { - break (); + match z.data.pop_front() { + Option::Some(item) => { + let fp_current_item = FixedTrait::new_unscaled((*item).into(), false); + let result = (FixedTrait::ONE() + fp_current_item.exp()).ln(); + data_result.append(result); + }, + Option::None(_) => { + break; + } }; - - let current_index = *data.pop_front().unwrap(); - let fp_current_index = FixedTrait::new_unscaled(current_index.into(), false); - let result = (fp_one + fp_current_index.exp()).ln(); - data_result.append(result); }; - return TensorTrait::new(*z.shape, data_result.span(), *z.extra); -} + return TensorTrait::new(z.shape, data_result.span(), z.extra); +} From 126db5d848d803ef447c66d76ce3276fa246830b Mon Sep 17 00:00:00 2001 From: raphaelDkhn Date: Thu, 24 Aug 2023 10:36:03 +0300 Subject: [PATCH 23/30] refactor softsign --- .../softsign/softsign_fp/core.cairo | 8 +++--- .../softsign/softsign_fp/fp16x16.cairo | 23 ++++++++-------- .../softsign/softsign_fp/fp8x23.cairo | 22 +++++++++------- .../softsign/softsign_i32/core.cairo | 8 +++--- .../softsign/softsign_i32/fp16x16.cairo | 26 +++++++++---------- .../softsign/softsign_i32/fp8x23.cairo | 26 +++++++++---------- .../softsign/softsign_i8/core.cairo | 8 +++--- .../softsign/softsign_i8/fp16x16.cairo | 26 +++++++++---------- .../softsign/softsign_i8/fp8x23.cairo | 26 +++++++++---------- .../softsign/softsign_u32/core.cairo | 8 +++--- .../softsign/softsign_u32/fp16x16.cairo | 24 +++++++++-------- .../softsign/softsign_u32/fp8x23.cairo | 26 +++++++++---------- 12 files changed, 118 insertions(+), 113 deletions(-) diff --git a/src/operators/nn/functional/softsign/softsign_fp/core.cairo b/src/operators/nn/functional/softsign/softsign_fp/core.cairo index 94548890e..5a4b33428 100644 --- a/src/operators/nn/functional/softsign/softsign_fp/core.cairo +++ b/src/operators/nn/functional/softsign/softsign_fp/core.cairo @@ -10,12 +10,12 @@ fn softsign_fp(z: @Tensor) -> Option> { match *z.extra { Option::Some(extra_params) => match extra_params.fixed_point { Option::Some(fixed_point) => match fixed_point { - FixedImpl::FP8x23(()) => Option::Some(fp8x23::softsign(z)), - FixedImpl::FP16x16(()) => Option::Some(fp16x16::softsign(z)), + FixedImpl::FP8x23(()) => Option::Some(fp8x23::softsign(*z)), + FixedImpl::FP16x16(()) => Option::Some(fp16x16::softsign(*z)), }, - Option::None(_) => Option::Some(fp16x16::softsign(z)), + Option::None(_) => Option::Some(fp16x16::softsign(*z)), }, - Option::None(_) => Option::Some(fp16x16::softsign(z)), + Option::None(_) => Option::Some(fp16x16::softsign(*z)), } } diff --git a/src/operators/nn/functional/softsign/softsign_fp/fp16x16.cairo b/src/operators/nn/functional/softsign/softsign_fp/fp16x16.cairo index 886cfe02e..d19e16ebf 100644 --- a/src/operators/nn/functional/softsign/softsign_fp/fp16x16.cairo +++ b/src/operators/nn/functional/softsign/softsign_fp/fp16x16.cairo @@ -12,19 +12,20 @@ use orion::numbers::fixed_point::implementations::fp16x16::core::{ /// Cf: NNTrait::softsign docstring -fn softsign(z: @Tensor) -> Tensor { +fn softsign(mut z: Tensor) -> Tensor { 
let mut data_result = ArrayTrait::new(); - let mut data = *z.data; - let fp_one = FixedTrait::new_unscaled(1, false); + loop { - if data.len() == 0 { - break (); + match z.data.pop_front() { + Option::Some(item) => { + let result = *item / (FixedTrait::ONE() + (*item).abs()); + data_result.append(result); + }, + Option::None(_) => { + break; + } }; - - let current_index = *data.pop_front().unwrap(); - let result = current_index / (fp_one + current_index.abs()); - data_result.append(result); }; - return TensorTrait::new(*z.shape, data_result.span(), *z.extra); -} + return TensorTrait::new(z.shape, data_result.span(), z.extra); +} diff --git a/src/operators/nn/functional/softsign/softsign_fp/fp8x23.cairo b/src/operators/nn/functional/softsign/softsign_fp/fp8x23.cairo index ad2caab09..03845f022 100644 --- a/src/operators/nn/functional/softsign/softsign_fp/fp8x23.cairo +++ b/src/operators/nn/functional/softsign/softsign_fp/fp8x23.cairo @@ -10,19 +10,21 @@ use orion::numbers::fixed_point::implementations::fp8x23::core::{FP8x23Impl, FP8 /// Cf: NNTrait::softsign docstring -fn softsign(z: @Tensor) -> Tensor { +fn softsign(mut z: Tensor) -> Tensor { let mut data_result = ArrayTrait::new(); - let mut data = *z.data; - let fp_one = FixedTrait::new_unscaled(1, false); + loop { - if data.len() == 0 { - break (); + match z.data.pop_front() { + Option::Some(item) => { + let result = *item / (FixedTrait::ONE() + (*item).abs()); + data_result.append(result); + }, + Option::None(_) => { + break; + } }; - - let current_index = *data.pop_front().unwrap(); - let result = current_index / (fp_one + current_index.abs()); - data_result.append(result); }; - return TensorTrait::new(*z.shape, data_result.span(), *z.extra); + + return TensorTrait::new(z.shape, data_result.span(), z.extra); } diff --git a/src/operators/nn/functional/softsign/softsign_i32/core.cairo b/src/operators/nn/functional/softsign/softsign_i32/core.cairo index 9c80cb25b..458a29977 100644 --- a/src/operators/nn/functional/softsign/softsign_i32/core.cairo +++ b/src/operators/nn/functional/softsign/softsign_i32/core.cairo @@ -11,12 +11,12 @@ fn softsign_i32(z: @Tensor) -> Option> { match *z.extra { Option::Some(extra_params) => match extra_params.fixed_point { Option::Some(fixed_point) => match fixed_point { - FixedImpl::FP8x23(()) => Option::Some(fp8x23::softsign(z)), - FixedImpl::FP16x16(()) => Option::Some(fp16x16::softsign(z)), + FixedImpl::FP8x23(()) => Option::Some(fp8x23::softsign(*z)), + FixedImpl::FP16x16(()) => Option::Some(fp16x16::softsign(*z)), }, - Option::None(_) => Option::Some(fp16x16::softsign(z)), + Option::None(_) => Option::Some(fp16x16::softsign(*z)), }, - Option::None(_) => Option::Some(fp16x16::softsign(z)), + Option::None(_) => Option::Some(fp16x16::softsign(*z)), } } diff --git a/src/operators/nn/functional/softsign/softsign_i32/fp16x16.cairo b/src/operators/nn/functional/softsign/softsign_i32/fp16x16.cairo index e34000e57..afa00ba2e 100644 --- a/src/operators/nn/functional/softsign/softsign_i32/fp16x16.cairo +++ b/src/operators/nn/functional/softsign/softsign_i32/fp16x16.cairo @@ -13,22 +13,22 @@ use orion::numbers::fixed_point::implementations::fp16x16::core::{ /// Cf: NNTrait::softsign docstring -fn softsign(z: @Tensor) -> Tensor { +fn softsign(mut z: Tensor) -> Tensor { let mut data_result = ArrayTrait::new(); - let mut data = *z.data; - let fp_one = FixedTrait::new_unscaled(1, false); + loop { - if data.len() == 0 { - break (); + match z.data.pop_front() { + Option::Some(item) => { + let fp_current_item = 
FixedTrait::new_unscaled(*item.mag, *item.sign); + let result = fp_current_item / (FixedTrait::ONE() + fp_current_item.abs()); + data_result.append(result); + }, + Option::None(_) => { + break; + } }; - - let current_index = *data.pop_front().unwrap(); - let fp_current_index = FixedTrait::new_unscaled( - current_index.mag.into(), current_index.sign - ); - let result = fp_current_index / (fp_one + fp_current_index.abs()); - data_result.append(result); }; - return TensorTrait::new(*z.shape, data_result.span(), *z.extra); + + return TensorTrait::new(z.shape, data_result.span(), z.extra); } diff --git a/src/operators/nn/functional/softsign/softsign_i32/fp8x23.cairo b/src/operators/nn/functional/softsign/softsign_i32/fp8x23.cairo index ec06fc48a..409c664db 100644 --- a/src/operators/nn/functional/softsign/softsign_i32/fp8x23.cairo +++ b/src/operators/nn/functional/softsign/softsign_i32/fp8x23.cairo @@ -11,22 +11,22 @@ use orion::numbers::fixed_point::implementations::fp8x23::core::{FP8x23Impl, FP8 /// Cf: NNTrait::softsign docstring -fn softsign(z: @Tensor) -> Tensor { +fn softsign(mut z: Tensor) -> Tensor { let mut data_result = ArrayTrait::new(); - let mut data = *z.data; - let fp_one = FixedTrait::new_unscaled(1, false); + loop { - if data.len() == 0 { - break (); + match z.data.pop_front() { + Option::Some(item) => { + let fp_current_item = FixedTrait::new_unscaled(*item.mag, *item.sign); + let result = fp_current_item / (FixedTrait::ONE() + fp_current_item.abs()); + data_result.append(result); + }, + Option::None(_) => { + break; + } }; - - let current_index = *data.pop_front().unwrap(); - let fp_current_index = FixedTrait::new_unscaled( - current_index.mag.into(), current_index.sign - ); - let result = fp_current_index / (fp_one + fp_current_index.abs()); - data_result.append(result); }; - return TensorTrait::new(*z.shape, data_result.span(), *z.extra); + + return TensorTrait::new(z.shape, data_result.span(), z.extra); } diff --git a/src/operators/nn/functional/softsign/softsign_i8/core.cairo b/src/operators/nn/functional/softsign/softsign_i8/core.cairo index d4a3f1a51..09d7ef6e6 100644 --- a/src/operators/nn/functional/softsign/softsign_i8/core.cairo +++ b/src/operators/nn/functional/softsign/softsign_i8/core.cairo @@ -11,12 +11,12 @@ fn softsign_i8(z: @Tensor) -> Option> { match *z.extra { Option::Some(extra_params) => match extra_params.fixed_point { Option::Some(fixed_point) => match fixed_point { - FixedImpl::FP8x23(()) => Option::Some(fp8x23::softsign(z)), - FixedImpl::FP16x16(()) => Option::Some(fp16x16::softsign(z)), + FixedImpl::FP8x23(()) => Option::Some(fp8x23::softsign(*z)), + FixedImpl::FP16x16(()) => Option::Some(fp16x16::softsign(*z)), }, - Option::None(_) => Option::Some(fp16x16::softsign(z)), + Option::None(_) => Option::Some(fp16x16::softsign(*z)), }, - Option::None(_) => Option::Some(fp16x16::softsign(z)), + Option::None(_) => Option::Some(fp16x16::softsign(*z)), } } diff --git a/src/operators/nn/functional/softsign/softsign_i8/fp16x16.cairo b/src/operators/nn/functional/softsign/softsign_i8/fp16x16.cairo index 09580df92..de90a9903 100644 --- a/src/operators/nn/functional/softsign/softsign_i8/fp16x16.cairo +++ b/src/operators/nn/functional/softsign/softsign_i8/fp16x16.cairo @@ -13,22 +13,22 @@ use orion::numbers::fixed_point::implementations::fp16x16::core::{ /// Cf: NNTrait::softsign docstring -fn softsign(z: @Tensor) -> Tensor { +fn softsign(mut z: Tensor) -> Tensor { let mut data_result = ArrayTrait::new(); - let mut data = *z.data; - let fp_one = 
FixedTrait::new_unscaled(1, false); + loop { - if data.len() == 0 { - break (); + match z.data.pop_front() { + Option::Some(item) => { + let fp_current_item = FixedTrait::new_unscaled((*item.mag).into(), *item.sign); + let result = fp_current_item / (FixedTrait::ONE() + fp_current_item.abs()); + data_result.append(result); + }, + Option::None(_) => { + break; + } }; - - let current_index = *data.pop_front().unwrap(); - let fp_current_index = FixedTrait::new_unscaled( - current_index.mag.into(), current_index.sign - ); - let result = fp_current_index / (fp_one + fp_current_index.abs()); - data_result.append(result); }; - return TensorTrait::new(*z.shape, data_result.span(), *z.extra); + + return TensorTrait::new(z.shape, data_result.span(), z.extra); } diff --git a/src/operators/nn/functional/softsign/softsign_i8/fp8x23.cairo b/src/operators/nn/functional/softsign/softsign_i8/fp8x23.cairo index 36d15c2fa..64642f6fa 100644 --- a/src/operators/nn/functional/softsign/softsign_i8/fp8x23.cairo +++ b/src/operators/nn/functional/softsign/softsign_i8/fp8x23.cairo @@ -11,22 +11,22 @@ use orion::numbers::fixed_point::implementations::fp8x23::core::{FP8x23Impl, FP8 /// Cf: NNTrait::softsign docstring -fn softsign(z: @Tensor) -> Tensor { +fn softsign(mut z: Tensor) -> Tensor { let mut data_result = ArrayTrait::new(); - let mut data = *z.data; - let fp_one = FixedTrait::new_unscaled(1, false); + loop { - if data.len() == 0 { - break (); + match z.data.pop_front() { + Option::Some(item) => { + let fp_current_item = FixedTrait::new_unscaled((*item.mag).into(), *item.sign); + let result = fp_current_item / (FixedTrait::ONE() + fp_current_item.abs()); + data_result.append(result); + }, + Option::None(_) => { + break; + } }; - - let current_index = *data.pop_front().unwrap(); - let fp_current_index = FixedTrait::new_unscaled( - current_index.mag.into(), current_index.sign - ); - let result = fp_current_index / (fp_one + fp_current_index.abs()); - data_result.append(result); }; - return TensorTrait::new(*z.shape, data_result.span(), *z.extra); + + return TensorTrait::new(z.shape, data_result.span(), z.extra); } diff --git a/src/operators/nn/functional/softsign/softsign_u32/core.cairo b/src/operators/nn/functional/softsign/softsign_u32/core.cairo index 63a45407c..b5531ed5b 100644 --- a/src/operators/nn/functional/softsign/softsign_u32/core.cairo +++ b/src/operators/nn/functional/softsign/softsign_u32/core.cairo @@ -10,12 +10,12 @@ fn softsign_u32(z: @Tensor) -> Option::> { match *z.extra { Option::Some(extra_params) => match extra_params.fixed_point { Option::Some(fixed_point) => match fixed_point { - FixedImpl::FP8x23(()) => Option::Some(fp8x23::softsign(z)), - FixedImpl::FP16x16(()) => Option::Some(fp16x16::softsign(z)), + FixedImpl::FP8x23(()) => Option::Some(fp8x23::softsign(*z)), + FixedImpl::FP16x16(()) => Option::Some(fp16x16::softsign(*z)), }, - Option::None(_) => Option::Some(fp16x16::softsign(z)), + Option::None(_) => Option::Some(fp16x16::softsign(*z)), }, - Option::None(_) => Option::Some(fp16x16::softsign(z)), + Option::None(_) => Option::Some(fp16x16::softsign(*z)), } } diff --git a/src/operators/nn/functional/softsign/softsign_u32/fp16x16.cairo b/src/operators/nn/functional/softsign/softsign_u32/fp16x16.cairo index abdc6830e..bdb36d514 100644 --- a/src/operators/nn/functional/softsign/softsign_u32/fp16x16.cairo +++ b/src/operators/nn/functional/softsign/softsign_u32/fp16x16.cairo @@ -12,19 +12,21 @@ use orion::numbers::fixed_point::implementations::fp16x16::core::{ /// Cf: NNTrait::softsign 
docstring -fn softsign(z: @Tensor) -> Tensor { +fn softsign(mut z: Tensor) -> Tensor { let mut data_result = ArrayTrait::new(); - let mut data = *z.data; - let fp_one = FixedTrait::new_unscaled(1, false); + loop { - if data.len() == 0 { - break (); + match z.data.pop_front() { + Option::Some(item) => { + let fp_current_item = FixedTrait::new_unscaled(*item, false); + let result = fp_current_item / (FixedTrait::ONE() + fp_current_item.abs()); + data_result.append(result); + }, + Option::None(_) => { + break; + } }; - - let current_index = *data.pop_front().unwrap(); - let fp_current_index = FixedTrait::new_unscaled(current_index.into(), false); - let result = fp_current_index / (fp_one + fp_current_index.abs()); - data_result.append(result); }; - return TensorTrait::new(*z.shape, data_result.span(), *z.extra); + + return TensorTrait::new(z.shape, data_result.span(), z.extra); } diff --git a/src/operators/nn/functional/softsign/softsign_u32/fp8x23.cairo b/src/operators/nn/functional/softsign/softsign_u32/fp8x23.cairo index a20f28b4d..e4fdb7033 100644 --- a/src/operators/nn/functional/softsign/softsign_u32/fp8x23.cairo +++ b/src/operators/nn/functional/softsign/softsign_u32/fp8x23.cairo @@ -10,21 +10,21 @@ use orion::numbers::fixed_point::implementations::fp8x23::core::{FP8x23Impl, FP8 /// Cf: NNTrait::softsign docstring -fn softsign(z: @Tensor) -> Tensor { +fn softsign(mut z: Tensor) -> Tensor { let mut data_result = ArrayTrait::new(); - let mut data = *z.data; - let fp_one = FixedTrait::new_unscaled(1, false); + loop { - if data.len() == 0 { - break (); + match z.data.pop_front() { + Option::Some(item) => { + let fp_current_item = FixedTrait::new_unscaled(*item, false); + let result = fp_current_item / (FixedTrait::ONE() + fp_current_item.abs()); + data_result.append(result); + }, + Option::None(_) => { + break; + } }; - - let current_index = *data.pop_front().unwrap(); - let fp_current_index = FixedTrait::new_unscaled( - current_index.into(), false - ); - let result = fp_current_index / (fp_one + fp_current_index.abs()); - data_result.append(result); }; - return TensorTrait::new(*z.shape, data_result.span(), *z.extra); + + return TensorTrait::new(z.shape, data_result.span(), z.extra); } From 18ca93620cd23f900374d9fc4bda69f3cd808382 Mon Sep 17 00:00:00 2001 From: raphaelDkhn Date: Thu, 24 Aug 2023 10:47:40 +0300 Subject: [PATCH 24/30] refactor performance trait --- .../dequantize_linear_fp/fp16x16.cairo | 20 ++++++++++-------- .../dequantize_linear_fp/fp8x23.cairo | 20 ++++++++++-------- .../dequantize_linear_i32.cairo | 21 +++++++++++-------- .../quantize_linear_fp/fp_i8/fp16x16.cairo | 20 ++++++++++-------- .../quantize_linear_fp/fp_i8/fp8x23.cairo | 21 +++++++++++-------- .../quantize_linear/quantize_linear_i32.cairo | 20 ++++++++++-------- 6 files changed, 68 insertions(+), 54 deletions(-) diff --git a/src/performance/functional/dequantize_linear/dequantize_linear_fp/fp16x16.cairo b/src/performance/functional/dequantize_linear/dequantize_linear_fp/fp16x16.cairo index ac5044e2e..292e7114f 100644 --- a/src/performance/functional/dequantize_linear/dequantize_linear_fp/fp16x16.cairo +++ b/src/performance/functional/dequantize_linear/dequantize_linear_fp/fp16x16.cairo @@ -19,7 +19,7 @@ fn dequantize_linear( x: @Tensor, x_scale: @Tensor, x_zero_point: @Tensor ) -> Tensor:: { if (*x_scale.data).len() == 1 && (*x_zero_point.data).len() == 1 { - dequantize_element_wise(x, *x_scale.data[0], *x_zero_point.data[0]) + dequantize_element_wise(*x, *x_scale.data[0], *x_zero_point.data[0]) } else { 
check_compatibility(*x.shape, *x_scale.shape); check_compatibility(*x.shape, *x_zero_point.shape); @@ -37,21 +37,23 @@ fn dequantize_per_axis( } fn dequantize_element_wise( - x: @Tensor::, x_scale: FixedType, x_zero_point: FixedType + mut x: Tensor::, x_scale: FixedType, x_zero_point: FixedType ) -> Tensor:: { let mut result_data = ArrayTrait::::new(); - let mut data = *x.data; loop { - let dequantized = dequantize(*data.pop_front().unwrap(), x_scale, x_zero_point); - result_data.append(dequantized); - - if data.len() == 0 { - break (); + match x.data.pop_front() { + Option::Some(item) => { + let dequantized = dequantize(*item, x_scale, x_zero_point); + result_data.append(dequantized); + }, + Option::None(_) => { + break; + } }; }; - return TensorTrait::new(*x.shape, result_data.span(), *x.extra); + return TensorTrait::new(x.shape, result_data.span(), x.extra); } fn dequantize(x: i8, x_scale: FixedType, x_zero_point: FixedType) -> FixedType { diff --git a/src/performance/functional/dequantize_linear/dequantize_linear_fp/fp8x23.cairo b/src/performance/functional/dequantize_linear/dequantize_linear_fp/fp8x23.cairo index 085848451..1ffa2a3e8 100644 --- a/src/performance/functional/dequantize_linear/dequantize_linear_fp/fp8x23.cairo +++ b/src/performance/functional/dequantize_linear/dequantize_linear_fp/fp8x23.cairo @@ -19,7 +19,7 @@ fn dequantize_linear( x: @Tensor, x_scale: @Tensor, x_zero_point: @Tensor ) -> Tensor:: { if (*x_scale.data).len() == 1 && (*x_zero_point.data).len() == 1 { - dequantize_element_wise(x, *x_scale.data[0], *x_zero_point.data[0]) + dequantize_element_wise(*x, *x_scale.data[0], *x_zero_point.data[0]) } else { check_compatibility(*x.shape, *x_scale.shape); check_compatibility(*x.shape, *x_zero_point.shape); @@ -37,21 +37,23 @@ fn dequantize_per_axis( } fn dequantize_element_wise( - x: @Tensor::, x_scale: FixedType, x_zero_point: FixedType + mut x: Tensor::, x_scale: FixedType, x_zero_point: FixedType ) -> Tensor:: { let mut result_data = ArrayTrait::::new(); - let mut data = *x.data; loop { - let dequantized = dequantize(*data.pop_front().unwrap(), x_scale, x_zero_point); - result_data.append(dequantized); - - if data.len() == 0 { - break (); + match x.data.pop_front() { + Option::Some(item) => { + let dequantized = dequantize(*item, x_scale, x_zero_point); + result_data.append(dequantized); + }, + Option::None(_) => { + break; + } }; }; - return TensorTrait::new(*x.shape, result_data.span(), *x.extra); + return TensorTrait::new(x.shape, result_data.span(), x.extra); } fn dequantize(x: i8, x_scale: FixedType, x_zero_point: FixedType) -> FixedType { diff --git a/src/performance/functional/dequantize_linear/dequantize_linear_i32.cairo b/src/performance/functional/dequantize_linear/dequantize_linear_i32.cairo index beda89dee..8d289c2c7 100644 --- a/src/performance/functional/dequantize_linear/dequantize_linear_i32.cairo +++ b/src/performance/functional/dequantize_linear/dequantize_linear_i32.cairo @@ -17,7 +17,7 @@ fn dequantize_linear( x: @Tensor, x_scale: @Tensor, x_zero_point: @Tensor ) -> Tensor:: { if (*x_scale.data).len() == 1 && (*x_zero_point.data).len() == 1 { - dequantize_element_wise(x, *x_scale.data[0], *x_zero_point.data[0]) + dequantize_element_wise(*x, *x_scale.data[0], *x_zero_point.data[0]) } else { check_compatibility(*x.shape, *x_scale.shape); check_compatibility(*x.shape, *x_zero_point.shape); @@ -34,20 +34,23 @@ fn dequantize_per_axis( (*x - *x_zero_point) * *x_scale } -fn dequantize_element_wise(x: @Tensor::, x_scale: i32, x_zero_point: i32) -> 
Tensor:: { +fn dequantize_element_wise(mut x: Tensor::, x_scale: i32, x_zero_point: i32) -> Tensor:: { let mut result_data = ArrayTrait::::new(); - let mut data = *x.data; loop { - let dequantized = dequantize(*data.pop_front().unwrap(), x_scale, x_zero_point); - result_data.append(dequantized); - - if data.len() == 0 { - break (); + match x.data.pop_front() { + Option::Some(item) => { + let dequantized = dequantize(*item, x_scale, x_zero_point); + result_data.append(dequantized); + }, + Option::None(_) => { + break; + } }; }; - return TensorTrait::new(*x.shape, result_data.span(), *x.extra); + + return TensorTrait::new(x.shape, result_data.span(), x.extra); } fn dequantize(x: i8, x_scale: i32, x_zero_point: i32) -> i32 { diff --git a/src/performance/functional/quantize_linear/quantize_linear_fp/fp_i8/fp16x16.cairo b/src/performance/functional/quantize_linear/quantize_linear_fp/fp_i8/fp16x16.cairo index 1d409a3bc..85cf16238 100644 --- a/src/performance/functional/quantize_linear/quantize_linear_fp/fp_i8/fp16x16.cairo +++ b/src/performance/functional/quantize_linear/quantize_linear_fp/fp_i8/fp16x16.cairo @@ -22,7 +22,7 @@ fn quantize_linear( x: @Tensor, y_scale: @Tensor, y_zero_point: @Tensor ) -> Tensor:: { if (*y_scale.data).len() == 1 && (*y_zero_point.data).len() == 1 { - quantize_element_wise(x, *y_scale.data[0], *y_zero_point.data[0]) + quantize_element_wise(*x, *y_scale.data[0], *y_zero_point.data[0]) } else { check_compatibility(*x.shape, *y_scale.shape); check_compatibility(*x.shape, *y_zero_point.shape); @@ -38,21 +38,23 @@ fn quantize_per_axis( } fn quantize_element_wise( - x: @Tensor::, y_scale: FixedType, y_zero_point: FixedType + mut x: Tensor::, y_scale: FixedType, y_zero_point: FixedType ) -> Tensor:: { let mut result_data = ArrayTrait::::new(); - let mut data = *x.data; loop { - let quantized = quantize(*data.pop_front().unwrap(), y_scale, y_zero_point); - result_data.append(quantized.try_into().unwrap()); - - if data.len() == 0 { - break (); + match x.data.pop_front() { + Option::Some(item) => { + let quantized = quantize(*item, y_scale, y_zero_point); + result_data.append(quantized.try_into().unwrap()); + }, + Option::None(_) => { + break; + } }; }; - return TensorTrait::new(*x.shape, result_data.span(), *x.extra); + return TensorTrait::new(x.shape, result_data.span(), x.extra); } fn quantize(x: FixedType, y_scale: FixedType, y_zero_point: FixedType) -> FixedType { diff --git a/src/performance/functional/quantize_linear/quantize_linear_fp/fp_i8/fp8x23.cairo b/src/performance/functional/quantize_linear/quantize_linear_fp/fp_i8/fp8x23.cairo index dbadd4f01..8995acd6a 100644 --- a/src/performance/functional/quantize_linear/quantize_linear_fp/fp_i8/fp8x23.cairo +++ b/src/performance/functional/quantize_linear/quantize_linear_fp/fp_i8/fp8x23.cairo @@ -22,7 +22,7 @@ fn quantize_linear( x: @Tensor, y_scale: @Tensor, y_zero_point: @Tensor ) -> Tensor:: { if (*y_scale.data).len() == 1 && (*y_zero_point.data).len() == 1 { - quantize_element_wise(x, *y_scale.data[0], *y_zero_point.data[0]) + quantize_element_wise(*x, *y_scale.data[0], *y_zero_point.data[0]) } else { check_compatibility(*x.shape, *y_scale.shape); check_compatibility(*x.shape, *y_zero_point.shape); @@ -38,23 +38,26 @@ fn quantize_per_axis( } fn quantize_element_wise( - x: @Tensor::, y_scale: FixedType, y_zero_point: FixedType + mut x: Tensor::, y_scale: FixedType, y_zero_point: FixedType ) -> Tensor:: { let mut result_data = ArrayTrait::::new(); - let mut data = *x.data; loop { - let quantized = 
quantize(*data.pop_front().unwrap(), y_scale, y_zero_point); - result_data.append(quantized.try_into().unwrap()); - - if data.len() == 0 { - break (); + match x.data.pop_front() { + Option::Some(item) => { + let quantized = quantize(*item, y_scale, y_zero_point); + result_data.append(quantized.try_into().unwrap()); + }, + Option::None(_) => { + break; + } }; }; - return TensorTrait::new(*x.shape, result_data.span(), *x.extra); + return TensorTrait::new(x.shape, result_data.span(), x.extra); } + fn quantize(x: FixedType, y_scale: FixedType, y_zero_point: FixedType) -> FixedType { saturate( FixedTrait::new_unscaled(128, true), diff --git a/src/performance/functional/quantize_linear/quantize_linear_i32.cairo b/src/performance/functional/quantize_linear/quantize_linear_i32.cairo index 8b4ef778d..f98be60a2 100644 --- a/src/performance/functional/quantize_linear/quantize_linear_i32.cairo +++ b/src/performance/functional/quantize_linear/quantize_linear_i32.cairo @@ -17,7 +17,7 @@ fn quantize_linear( x: @Tensor, y_scale: @Tensor, y_zero_point: @Tensor ) -> Tensor:: { if (*y_scale.data).len() == 1 && (*y_zero_point.data).len() == 1 { - quantize_element_wise(x, *y_scale.data[0], *y_zero_point.data[0]) + quantize_element_wise(*x, *y_scale.data[0], *y_zero_point.data[0]) } else { check_compatibility(*x.shape, *y_scale.shape); check_compatibility(*x.shape, *y_zero_point.shape); @@ -32,7 +32,7 @@ fn quantize_per_axis( saturated_add_i8(@(*x / *y_scale), y_zero_point) } -fn quantize_element_wise(x: @Tensor::, y_scale: i32, y_zero_point: i32) -> Tensor:: { +fn quantize_element_wise(mut x: Tensor::, y_scale: i32, y_zero_point: i32) -> Tensor:: { let mut result_data = ArrayTrait::::new(); - let mut data = *x.data; loop { - let quantized = quantize(*data.pop_front().unwrap(), y_scale, y_zero_point); - result_data.append(quantized); - - if data.len() == 0 { - break (); + match x.data.pop_front() { + Option::Some(item) => { + let quantized = quantize(*item, y_scale, y_zero_point); + result_data.append(quantized); + }, + Option::None(_) => { + break; + } }; }; - return TensorTrait::new(*x.shape, result_data.span(), *x.extra); + return TensorTrait::new(x.shape, result_data.span(), x.extra); } fn quantize(x: i32, y_scale: i32, y_zero_point: i32) -> i8 { From 292dbd407e5b425bdf3c8fac7e260c43a4d516e3 Mon Sep 17 00:00:00 2001 From: raphaelDkhn Date: Thu, 24 Aug 2023 10:48:43 +0300 Subject: [PATCH 25/30] Update CHANGELOG.md --- docs/CHANGELOG.md | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/docs/CHANGELOG.md b/docs/CHANGELOG.md index d88abd201..ad67051cf 100644 --- a/docs/CHANGELOG.md +++ b/docs/CHANGELOG.md @@ -4,6 +4,11 @@ All notable changes to this project will be documented in this file. The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html). +## [Unreleased] - 2023-08-24 + +## Changed +Refactored loops to use `match` to break out of loops.
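A minimal sketch of the pattern this entry describes, written against a plain `Span<u32>` rather than any particular operator (the `sum_before`/`sum_after` names are illustrative only): instead of guarding the loop with a length check and popping with `unwrap()` inside the body, each iteration matches on `pop_front()`, which either yields the next item or signals that the span is exhausted.

```cairo
use array::SpanTrait;
use option::OptionTrait;

// Old style: explicit length guard, then an unwrap-based pop.
fn sum_before(mut vals: Span<u32>) -> u32 {
    let mut total: u32 = 0;
    loop {
        if vals.len() == 0 {
            break ();
        };
        total += *vals.pop_front().unwrap();
    };
    return total;
}

// New style: a single `match` both fetches the next item and breaks on exhaustion.
fn sum_after(mut vals: Span<u32>) -> u32 {
    let mut total: u32 = 0;
    loop {
        match vals.pop_front() {
            Option::Some(item) => {
                total += *item;
            },
            Option::None(_) => {
                break;
            }
        };
    };
    return total;
}
```

Both functions return the same sum; the `match` form simply reads the span once per iteration instead of paying for both the `len()` check and the `unwrap()`.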
+ ## [Unreleased] - 2023-08-16 ## Changed From bcaf7fdc406268b0d9df95b57ba44169093233d7 Mon Sep 17 00:00:00 2001 From: raphaelDkhn Date: Thu, 24 Aug 2023 10:58:06 +0300 Subject: [PATCH 26/30] refactor matmul --- .../tensor/linalg/matmul/helpers.cairo | 27 +++++++++++++------ .../linalg/matmul/matmul_fp/fp16x16.cairo | 15 ++++++----- .../linalg/matmul/matmul_fp/fp8x23.cairo | 19 ++++++++----- .../tensor/linalg/matmul/matmul_i32.cairo | 15 ++++++----- .../tensor/linalg/matmul/matmul_i8.cairo | 15 ++++++----- .../tensor/linalg/matmul/matmul_u32.cairo | 15 ++++++----- 6 files changed, 67 insertions(+), 39 deletions(-) diff --git a/src/operators/tensor/linalg/matmul/helpers.cairo b/src/operators/tensor/linalg/matmul/helpers.cairo index f82a559ef..60cf1167b 100644 --- a/src/operators/tensor/linalg/matmul/helpers.cairo +++ b/src/operators/tensor/linalg/matmul/helpers.cairo @@ -27,23 +27,34 @@ fn prepare_shape_for_matmul(mut shape: Span, is_first_tensor: bool) -> Sp // Prepend 1 to shape if it's 1-dimensional let mut shape_adjusted = ArrayTrait::new(); shape_adjusted.append(1); + loop { - if shape.len() == 0 { - break (); - } - shape_adjusted.append(*shape.pop_front().unwrap()); + match shape.pop_front() { + Option::Some(item) => { + shape_adjusted.append(*item); + }, + Option::None(_) => { + break; + } + }; }; return shape_adjusted.span(); } else if ndim == 1 && !is_first_tensor { // Append 1 to shape if it's 1-dimensional let mut shape_adjusted = ArrayTrait::new(); + loop { - if shape.len() == 0 { - break (); - } - shape_adjusted.append(*shape.pop_front().unwrap()); + match shape.pop_front() { + Option::Some(item) => { + shape_adjusted.append(*item); + }, + Option::None(_) => { + break; + } + }; }; + shape_adjusted.append(1); return shape_adjusted.span(); diff --git a/src/operators/tensor/linalg/matmul/matmul_fp/fp16x16.cairo b/src/operators/tensor/linalg/matmul/matmul_fp/fp16x16.cairo index e28315d2e..84168b030 100644 --- a/src/operators/tensor/linalg/matmul/matmul_fp/fp16x16.cairo +++ b/src/operators/tensor/linalg/matmul/matmul_fp/fp16x16.cairo @@ -60,12 +60,15 @@ fn dot_product(mut vec1: Span, mut vec2: Span) -> FixedTyp let mut result: FixedType = FixedTrait::new_unscaled(0, false); loop { - if vec1.len() == 0 { - break (); - } - - let element_product = *vec1.pop_front().unwrap() * *vec2.pop_front().unwrap(); - result += element_product; + match vec1.pop_front() { + Option::Some(vec1_item) => { + let element_product = *vec1_item * *vec2.pop_front().unwrap(); + result += element_product; + }, + Option::None(_) => { + break; + } + }; }; return result; diff --git a/src/operators/tensor/linalg/matmul/matmul_fp/fp8x23.cairo b/src/operators/tensor/linalg/matmul/matmul_fp/fp8x23.cairo index 4625cc52a..6588aebad 100644 --- a/src/operators/tensor/linalg/matmul/matmul_fp/fp8x23.cairo +++ b/src/operators/tensor/linalg/matmul/matmul_fp/fp8x23.cairo @@ -4,7 +4,9 @@ use option::OptionTrait; use orion::operators::tensor::implementations::impl_tensor_fp::{Tensor_fp}; -use orion::numbers::fixed_point::implementations::fp8x23::core::{FP8x23Impl, FP8x23Mul, FP8x23AddEq}; +use orion::numbers::fixed_point::implementations::fp8x23::core::{ + FP8x23Impl, FP8x23Mul, FP8x23AddEq +}; use orion::numbers::fixed_point::core::{FixedTrait, FixedType}; use orion::operators::tensor::core::{Tensor, ExtraParams, TensorTrait}; use orion::operators::tensor::linalg::matmul::helpers::{ @@ -58,12 +60,15 @@ fn dot_product(mut vec1: Span, mut vec2: Span) -> FixedTyp let mut result: FixedType = FixedTrait::new_unscaled(0, false); 
loop { - if vec1.len() == 0 { - break (); - } - - let element_product = *vec1.pop_front().unwrap() * *vec2.pop_front().unwrap(); - result += element_product; + match vec1.pop_front() { + Option::Some(vec1_item) => { + let element_product = *vec1_item * *vec2.pop_front().unwrap(); + result += element_product; + }, + Option::None(_) => { + break; + } + }; }; return result; diff --git a/src/operators/tensor/linalg/matmul/matmul_i32.cairo b/src/operators/tensor/linalg/matmul/matmul_i32.cairo index d0f570f90..f8a222f23 100644 --- a/src/operators/tensor/linalg/matmul/matmul_i32.cairo +++ b/src/operators/tensor/linalg/matmul/matmul_i32.cairo @@ -57,12 +57,15 @@ fn dot_product(mut vec1: Span, mut vec2: Span) -> i32 { let mut result: i32 = IntegerTrait::new(0, false); loop { - if vec1.len() == 0 { - break (); - } - - let element_product = *vec1.pop_front().unwrap() * *vec2.pop_front().unwrap(); - result += element_product; + match vec1.pop_front() { + Option::Some(vec1_item) => { + let element_product = *vec1_item * *vec2.pop_front().unwrap(); + result += element_product; + }, + Option::None(_) => { + break; + } + }; }; return result; diff --git a/src/operators/tensor/linalg/matmul/matmul_i8.cairo b/src/operators/tensor/linalg/matmul/matmul_i8.cairo index 919031b41..7a5ff68b1 100644 --- a/src/operators/tensor/linalg/matmul/matmul_i8.cairo +++ b/src/operators/tensor/linalg/matmul/matmul_i8.cairo @@ -57,12 +57,15 @@ fn dot_product(mut vec1: Span, mut vec2: Span) -> i8 { let mut result: i8 = IntegerTrait::new(0, false); loop { - if vec1.len() == 0 { - break (); - } - - let element_product = *vec1.pop_front().unwrap() * *vec2.pop_front().unwrap(); - result += element_product; + match vec1.pop_front() { + Option::Some(vec1_item) => { + let element_product = *vec1_item * *vec2.pop_front().unwrap(); + result += element_product; + }, + Option::None(_) => { + break; + } + }; }; return result; diff --git a/src/operators/tensor/linalg/matmul/matmul_u32.cairo b/src/operators/tensor/linalg/matmul/matmul_u32.cairo index 297bbce62..2b230e91b 100644 --- a/src/operators/tensor/linalg/matmul/matmul_u32.cairo +++ b/src/operators/tensor/linalg/matmul/matmul_u32.cairo @@ -56,12 +56,15 @@ fn dot_product(mut vec1: Span, mut vec2: Span) -> u32 { let mut result: u32 = 0; loop { - if vec1.len() == 0 { - break (); - } - - let element_product = *vec1.pop_front().unwrap() * *vec2.pop_front().unwrap(); - result += element_product; + match vec1.pop_front() { + Option::Some(vec1_item) => { + let element_product = *vec1_item * *vec2.pop_front().unwrap(); + result += element_product; + }, + Option::None(_) => { + break; + } + }; }; return result; From 1cdde856b4d1f3d8df3b470bb1878ed4c32e0b5c Mon Sep 17 00:00:00 2001 From: raphaelDkhn Date: Thu, 24 Aug 2023 21:20:24 +0300 Subject: [PATCH 27/30] check if zero --- src/numbers/signed_integer/i32.cairo | 18 +++++++++++++----- 1 file changed, 13 insertions(+), 5 deletions(-) diff --git a/src/numbers/signed_integer/i32.cairo b/src/numbers/signed_integer/i32.cairo index 78bbfdd74..bd8c0b067 100644 --- a/src/numbers/signed_integer/i32.cairo +++ b/src/numbers/signed_integer/i32.cairo @@ -285,7 +285,7 @@ fn i32_div(a: i32, b: i32) -> i32 { if (sign == false) { // If the operands are positive, the quotient is simply their absolute value quotient. - return IntegerTrait::new(a.mag / b.mag, sign); + return check_if_zero(a.mag / b.mag, sign); } // If the operands have different signs, rounding is necessary. 
@@ -295,7 +295,7 @@ fn i32_div(a: i32, b: i32) -> i32 {
         if (quotient == 0) {
             return IntegerTrait::new(quotient, false);
         }
-        return IntegerTrait::new(quotient, sign);
+        return check_if_zero(quotient, sign);
     }

     // If the quotient is not an integer, multiply the dividend by 10 to move the decimal point over.
@@ -303,14 +303,14 @@ fn i32_div(a: i32, b: i32) -> i32 {
     let last_digit = quotient % 10;

     if (quotient == 0) {
-        return IntegerTrait::new(quotient, false);
+        return check_if_zero(quotient, false);
     }

     // Check the last digit to determine rounding direction.
     if (last_digit <= 5) {
-        return IntegerTrait::new(quotient / 10, sign);
+        return check_if_zero(quotient / 10, sign);
     } else {
-        return IntegerTrait::new((quotient / 10) + 1, sign);
+        return check_if_zero((quotient / 10) + 1, sign);
     }
 }

@@ -482,3 +482,11 @@ fn i8_try_from_i32(x: i32) -> Option<i8> {
         Option::None(_) => Option::None(())
     }
 }
+
+fn check_if_zero(mag: u32, sign: bool) -> i32 {
+    if mag == 0 {
+        IntegerTrait::<i32>::new(mag, false)
+    } else {
+        IntegerTrait::<i32>::new(mag, sign)
+    }
+}

From cc8eb8a76d302ae2cc178eb24f98379ab7eb7dc5 Mon Sep 17 00:00:00 2001
From: raphaelDkhn
Date: Thu, 24 Aug 2023 21:35:36 +0300
Subject: [PATCH 28/30] fix other signed int type

---
 src/numbers/signed_integer/i128.cairo | 26 +++++++++++++++++---------
 src/numbers/signed_integer/i16.cairo  | 26 +++++++++++++++++---------
 src/numbers/signed_integer/i32.cairo  | 24 ++++++++++++------------
 src/numbers/signed_integer/i64.cairo  | 26 +++++++++++++++++---------
 src/numbers/signed_integer/i8.cairo   | 25 ++++++++++++++++---------
 5 files changed, 79 insertions(+), 48 deletions(-)

diff --git a/src/numbers/signed_integer/i128.cairo b/src/numbers/signed_integer/i128.cairo
index 908d9d20f..6889e3550 100644
--- a/src/numbers/signed_integer/i128.cairo
+++ b/src/numbers/signed_integer/i128.cairo
@@ -198,7 +198,7 @@ fn i128_add(a: i128, b: i128) -> i128 {
         if (sum == 0_u128) {
             return IntegerTrait::new(sum, false);
         }
-        return IntegerTrait::new(sum, a.sign);
+        return ensure_non_negative_zero(sum, a.sign);
     } else {
         // If the integers have different signs,
         // the larger absolute value is subtracted from the smaller one.
@@ -212,7 +212,7 @@ fn i128_add(a: i128, b: i128) -> i128 {
         if (difference == 0_u128) {
             return IntegerTrait::new(difference, false);
         }
-        return IntegerTrait::new(difference, larger.sign);
+        return ensure_non_negative_zero(difference, larger.sign);
     }
 }

@@ -231,7 +231,7 @@ fn i128_sub(a: i128, b: i128) -> i128 {
     }

     // The subtraction of `a` to `b` is achieved by negating `b` sign and adding it to `a`.
-    let neg_b = IntegerTrait::new(b.mag, !b.sign);
+    let neg_b = ensure_non_negative_zero(b.mag, !b.sign);
     return a + neg_b;
 }

@@ -258,7 +258,7 @@ fn i128_mul(a: i128, b: i128) -> i128 {
         return IntegerTrait::new(mag, false);
     }

-    return IntegerTrait::new(mag, sign);
+    return ensure_non_negative_zero(mag, sign);
 }

 // Divides the first i128 by the second i128.
@@ -277,7 +277,7 @@ fn i128_div(a: i128, b: i128) -> i128 {

     if (sign == false) {
         // If the operands are positive, the quotient is simply their absolute value quotient.
-        return IntegerTrait::new(a.mag / b.mag, sign);
+        return ensure_non_negative_zero(a.mag / b.mag, sign);
     }

     // If the operands have different signs, rounding is necessary.
@@ -287,7 +287,7 @@ fn i128_div(a: i128, b: i128) -> i128 {
         if (quotient == 0_u128) {
             return IntegerTrait::new(quotient, false);
         }
-        return IntegerTrait::new(quotient, sign);
+        return ensure_non_negative_zero(quotient, sign);
     }

     // If the quotient is not an integer, multiply the dividend by 10 to move the decimal point over.
@@ -300,9 +300,9 @@ fn i128_div(a: i128, b: i128) -> i128 {

     // Check the last digit to determine rounding direction.
     if (last_digit <= 5_u128) {
-        return IntegerTrait::new(quotient / 10_u128, sign);
+        return ensure_non_negative_zero(quotient / 10_u128, sign);
     } else {
-        return IntegerTrait::new((quotient / 10_u128) + 1_u128, sign);
+        return ensure_non_negative_zero((quotient / 10_u128) + 1_u128, sign);
     }
 }

@@ -426,7 +426,7 @@ fn i128_ge(a: i128, b: i128) -> bool {
 // * `i128` - The negation of `x`.
 fn i128_neg(x: i128) -> i128 {
     // The negation of an integer is obtained by flipping its sign.
-    return IntegerTrait::new(x.mag, !x.sign);
+    return ensure_non_negative_zero(x.mag, !x.sign);
 }

 /// Cf: IntegerTrait::abs docstring
@@ -451,3 +451,11 @@ fn i128_min(a: i128, b: i128) -> i128 {
         return b;
     }
 }
+
+fn ensure_non_negative_zero(mag: u128, sign: bool) -> i128 {
+    if mag == 0 {
+        IntegerTrait::<i128>::new(mag, false)
+    } else {
+        IntegerTrait::<i128>::new(mag, sign)
+    }
+}
diff --git a/src/numbers/signed_integer/i16.cairo b/src/numbers/signed_integer/i16.cairo
index c737638b9..3689f150a 100644
--- a/src/numbers/signed_integer/i16.cairo
+++ b/src/numbers/signed_integer/i16.cairo
@@ -198,7 +198,7 @@ fn i16_add(a: i16, b: i16) -> i16 {
         if (sum == 0_u16) {
             return IntegerTrait::new(sum, false);
         }
-        return IntegerTrait::new(sum, a.sign);
+        return ensure_non_negative_zero(sum, a.sign);
     } else {
         // If the integers have different signs,
         // the larger absolute value is subtracted from the smaller one.
@@ -212,7 +212,7 @@ fn i16_add(a: i16, b: i16) -> i16 {
         if (difference == 0_u16) {
             return IntegerTrait::new(difference, false);
         }
-        return IntegerTrait::new(difference, larger.sign);
+        return ensure_non_negative_zero(difference, larger.sign);
     }
 }

@@ -231,7 +231,7 @@ fn i16_sub(a: i16, b: i16) -> i16 {
     }

     // The subtraction of `a` to `b` is achieved by negating `b` sign and adding it to `a`.
-    let neg_b = IntegerTrait::new(b.mag, !b.sign);
+    let neg_b = ensure_non_negative_zero(b.mag, !b.sign);
     return a + neg_b;
 }

@@ -258,7 +258,7 @@ fn i16_mul(a: i16, b: i16) -> i16 {
         return IntegerTrait::new(mag, false);
     }

-    return IntegerTrait::new(mag, sign);
+    return ensure_non_negative_zero(mag, sign);
 }

 // Divides the first i16 by the second i16.
@@ -277,7 +277,7 @@ fn i16_div(a: i16, b: i16) -> i16 {

     if (sign == false) {
         // If the operands are positive, the quotient is simply their absolute value quotient.
-        return IntegerTrait::new(a.mag / b.mag, sign);
+        return ensure_non_negative_zero(a.mag / b.mag, sign);
     }

     // If the operands have different signs, rounding is necessary.
@@ -287,7 +287,7 @@ fn i16_div(a: i16, b: i16) -> i16 {
         if (quotient == 0_u16) {
             return IntegerTrait::new(quotient, false);
         }
-        return IntegerTrait::new(quotient, sign);
+        return ensure_non_negative_zero(quotient, sign);
     }

     // If the quotient is not an integer, multiply the dividend by 10 to move the decimal point over.
@@ -300,9 +300,9 @@ fn i16_div(a: i16, b: i16) -> i16 {

     // Check the last digit to determine rounding direction.
     if (last_digit <= 5_u16) {
-        return IntegerTrait::new(quotient / 10_u16, sign);
+        return ensure_non_negative_zero(quotient / 10_u16, sign);
     } else {
-        return IntegerTrait::new((quotient / 10_u16) + 1_u16, sign);
+        return ensure_non_negative_zero((quotient / 10_u16) + 1_u16, sign);
     }
 }

@@ -426,7 +426,7 @@ fn i16_ge(a: i16, b: i16) -> bool {
 // * `i16` - The negation of `x`.
 fn i16_neg(x: i16) -> i16 {
     // The negation of an integer is obtained by flipping its sign.
-    return IntegerTrait::new(x.mag, !x.sign);
+    return ensure_non_negative_zero(x.mag, !x.sign);
 }

 /// Cf: IntegerTrait::abs docstring
@@ -451,3 +451,11 @@ fn i16_min(a: i16, b: i16) -> i16 {
         return b;
     }
 }
+
+fn ensure_non_negative_zero(mag: u16, sign: bool) -> i16 {
+    if mag == 0 {
+        IntegerTrait::<i16>::new(mag, false)
+    } else {
+        IntegerTrait::<i16>::new(mag, sign)
+    }
+}
diff --git a/src/numbers/signed_integer/i32.cairo b/src/numbers/signed_integer/i32.cairo
index bd8c0b067..04a2ca148 100644
--- a/src/numbers/signed_integer/i32.cairo
+++ b/src/numbers/signed_integer/i32.cairo
@@ -206,7 +206,7 @@ fn i32_add(a: i32, b: i32) -> i32 {
         if (sum == 0) {
             return IntegerTrait::new(sum, false);
         }
-        return IntegerTrait::new(sum, a.sign);
+        return ensure_non_negative_zero(sum, a.sign);
     } else {
         // If the integers have different signs,
         // the larger absolute value is subtracted from the smaller one.
@@ -220,7 +220,7 @@ fn i32_add(a: i32, b: i32) -> i32 {
         if (difference == 0) {
             return IntegerTrait::new(difference, false);
         }
-        return IntegerTrait::new(difference, larger.sign);
+        return ensure_non_negative_zero(difference, larger.sign);
     }
 }

@@ -239,7 +239,7 @@ fn i32_sub(a: i32, b: i32) -> i32 {
     }

     // The subtraction of `a` to `b` is achieved by negating `b` sign and adding it to `a`.
-    let neg_b = IntegerTrait::new(b.mag, !b.sign);
+    let neg_b = ensure_non_negative_zero(b.mag, !b.sign);
     return a + neg_b;
 }

@@ -266,7 +266,7 @@ fn i32_mul(a: i32, b: i32) -> i32 {
         return IntegerTrait::new(mag, false);
     }

-    return IntegerTrait::new(mag, sign);
+    return ensure_non_negative_zero(mag, sign);
 }

 // Divides the first i32 by the second i32.
@@ -285,7 +285,7 @@ fn i32_div(a: i32, b: i32) -> i32 {

     if (sign == false) {
         // If the operands are positive, the quotient is simply their absolute value quotient.
-        return check_if_zero(a.mag / b.mag, sign);
+        return ensure_non_negative_zero(a.mag / b.mag, sign);
     }

     // If the operands have different signs, rounding is necessary.
@@ -295,7 +295,7 @@ fn i32_div(a: i32, b: i32) -> i32 {
         if (quotient == 0) {
             return IntegerTrait::new(quotient, false);
         }
-        return check_if_zero(quotient, sign);
+        return ensure_non_negative_zero(quotient, sign);
     }

     // If the quotient is not an integer, multiply the dividend by 10 to move the decimal point over.
@@ -303,14 +303,14 @@ fn i32_div(a: i32, b: i32) -> i32 {
     let last_digit = quotient % 10;

     if (quotient == 0) {
-        return check_if_zero(quotient, false);
+        return ensure_non_negative_zero(quotient, false);
     }

     // Check the last digit to determine rounding direction.
     if (last_digit <= 5) {
-        return check_if_zero(quotient / 10, sign);
+        return ensure_non_negative_zero(quotient / 10, sign);
     } else {
-        return check_if_zero((quotient / 10) + 1, sign);
+        return ensure_non_negative_zero((quotient / 10) + 1, sign);
     }
 }

@@ -434,7 +434,7 @@ fn i32_ge(a: i32, b: i32) -> bool {
 // * `i32` - The negation of `x`.
 fn i32_neg(x: i32) -> i32 {
     // The negation of an integer is obtained by flipping its sign.
-    return IntegerTrait::new(x.mag, !x.sign);
+    return ensure_non_negative_zero(x.mag, !x.sign);
 }

 /// Cf: IntegerTrait::abs docstring
@@ -483,10 +483,10 @@ fn i8_try_from_i32(x: i32) -> Option<i8> {
     }
 }

-fn check_if_zero(mag: u32, sign: bool) -> i32 {
+fn ensure_non_negative_zero(mag: u32, sign: bool) -> i32 {
     if mag == 0 {
         IntegerTrait::<i32>::new(mag, false)
     } else {
         IntegerTrait::<i32>::new(mag, sign)
     }
-}
+}
\ No newline at end of file
diff --git a/src/numbers/signed_integer/i64.cairo b/src/numbers/signed_integer/i64.cairo
index 6f1657067..070bc3133 100644
--- a/src/numbers/signed_integer/i64.cairo
+++ b/src/numbers/signed_integer/i64.cairo
@@ -198,7 +198,7 @@ fn i64_add(a: i64, b: i64) -> i64 {
         if (sum == 0_u64) {
             return IntegerTrait::new(sum, false);
         }
-        return IntegerTrait::new(sum, a.sign);
+        return ensure_non_negative_zero(sum, a.sign);
     } else {
         // If the integers have different signs,
         // the larger absolute value is subtracted from the smaller one.
@@ -212,7 +212,7 @@ fn i64_add(a: i64, b: i64) -> i64 {
         if (difference == 0_u64) {
             return IntegerTrait::new(difference, false);
         }
-        return IntegerTrait::new(difference, larger.sign);
+        return ensure_non_negative_zero(difference, larger.sign);
     }
 }

@@ -231,7 +231,7 @@ fn i64_sub(a: i64, b: i64) -> i64 {
     }

     // The subtraction of `a` to `b` is achieved by negating `b` sign and adding it to `a`.
-    let neg_b = IntegerTrait::new(b.mag, !b.sign);
+    let neg_b = ensure_non_negative_zero(b.mag, !b.sign);
     return a + neg_b;
 }

@@ -258,7 +258,7 @@ fn i64_mul(a: i64, b: i64) -> i64 {
         return IntegerTrait::new(mag, false);
     }

-    return IntegerTrait::new(mag, sign);
+    return ensure_non_negative_zero(mag, sign);
 }

 // Divides the first i64 by the second i64.
@@ -277,7 +277,7 @@ fn i64_div(a: i64, b: i64) -> i64 {

     if (sign == false) {
         // If the operands are positive, the quotient is simply their absolute value quotient.
-        return IntegerTrait::new(a.mag / b.mag, sign);
+        return ensure_non_negative_zero(a.mag / b.mag, sign);
     }

     // If the operands have different signs, rounding is necessary.
@@ -287,7 +287,7 @@ fn i64_div(a: i64, b: i64) -> i64 {
         if (quotient == 0_u64) {
             return IntegerTrait::new(quotient, false);
         }
-        return IntegerTrait::new(quotient, sign);
+        return ensure_non_negative_zero(quotient, sign);
     }

     // If the quotient is not an integer, multiply the dividend by 10 to move the decimal point over.
@@ -300,9 +300,9 @@ fn i64_div(a: i64, b: i64) -> i64 {

     // Check the last digit to determine rounding direction.
     if (last_digit <= 5_u64) {
-        return IntegerTrait::new(quotient / 10_u64, sign);
+        return ensure_non_negative_zero(quotient / 10_u64, sign);
     } else {
-        return IntegerTrait::new((quotient / 10_u64) + 1_u64, sign);
+        return ensure_non_negative_zero((quotient / 10_u64) + 1_u64, sign);
     }
 }

@@ -426,7 +426,7 @@ fn i64_ge(a: i64, b: i64) -> bool {
 // * `i64` - The negation of `x`.
 fn i64_neg(x: i64) -> i64 {
     // The negation of an integer is obtained by flipping its sign.
-    return IntegerTrait::new(x.mag, !x.sign);
+    return ensure_non_negative_zero(x.mag, !x.sign);
 }

 /// Cf: IntegerTrait::abs docstring
@@ -451,3 +451,11 @@ fn i64_min(a: i64, b: i64) -> i64 {
         return b;
     }
 }
+
+fn ensure_non_negative_zero(mag: u64, sign: bool) -> i64 {
+    if mag == 0 {
+        IntegerTrait::<i64>::new(mag, false)
+    } else {
+        IntegerTrait::<i64>::new(mag, sign)
+    }
+}
diff --git a/src/numbers/signed_integer/i8.cairo b/src/numbers/signed_integer/i8.cairo
index 108a8ea6a..aed7c3338 100644
--- a/src/numbers/signed_integer/i8.cairo
+++ b/src/numbers/signed_integer/i8.cairo
@@ -222,7 +222,7 @@ fn i8_add(a: i8, b: i8) -> i8 {
         if (sum == 0_u8) {
             return IntegerTrait::new(sum, false);
         }
-        return IntegerTrait::new(sum, a.sign);
+        return ensure_non_negative_zero(sum, a.sign);
     } else {
         // If the integers have different signs,
         // the larger absolute value is subtracted from the smaller one.
@@ -236,7 +236,7 @@ fn i8_add(a: i8, b: i8) -> i8 {
         if (difference == 0_u8) {
             return IntegerTrait::new(difference, false);
         }
-        return IntegerTrait::new(difference, larger.sign);
+        return ensure_non_negative_zero(difference, larger.sign);
     }
 }

@@ -255,7 +255,7 @@ fn i8_sub(a: i8, b: i8) -> i8 {
     }

     // The subtraction of `a` to `b` is achieved by negating `b` sign and adding it to `a`.
-    let neg_b = IntegerTrait::new(b.mag, !b.sign);
+    let neg_b = ensure_non_negative_zero(b.mag, !b.sign);
     return a + neg_b;
 }

@@ -282,7 +282,7 @@ fn i8_mul(a: i8, b: i8) -> i8 {
         return IntegerTrait::new(mag, false);
     }

-    return IntegerTrait::new(mag, sign);
+    return ensure_non_negative_zero(mag, sign);
 }

 // Divides the first i8 by the second i8.
@@ -301,7 +301,7 @@ fn i8_div(a: i8, b: i8) -> i8 {

     if (sign == false) {
         // If the operands are positive, the quotient is simply their absolute value quotient.
-        return IntegerTrait::new(a.mag / b.mag, sign);
+        return ensure_non_negative_zero(a.mag / b.mag, sign);
     }

     // If the operands have different signs, rounding is necessary.
@@ -311,7 +311,7 @@ fn i8_div(a: i8, b: i8) -> i8 {
         if (quotient == 0_u8) {
             return IntegerTrait::new(quotient, false);
         }
-        return IntegerTrait::new(quotient, sign);
+        return ensure_non_negative_zero(quotient, sign);
     }

     // If the quotient is not an integer, multiply the dividend by 10 to move the decimal point over.
@@ -324,9 +324,9 @@ fn i8_div(a: i8, b: i8) -> i8 {

     // Check the last digit to determine rounding direction.
     if (last_digit <= 5_u8) {
-        return IntegerTrait::new(quotient / 10_u8, sign);
+        return ensure_non_negative_zero(quotient / 10_u8, sign);
     } else {
-        return IntegerTrait::new((quotient / 10_u8) + 1_u8, sign);
+        return ensure_non_negative_zero((quotient / 10_u8) + 1_u8, sign);
     }
 }

@@ -450,7 +450,7 @@ fn i8_ge(a: i8, b: i8) -> bool {
 // * `i8` - The negation of `x`.
 fn i8_neg(x: i8) -> i8 {
     // The negation of an integer is obtained by flipping its sign.
-    return IntegerTrait::new(x.mag, !x.sign);
+    return ensure_non_negative_zero(x.mag, !x.sign);
 }

 /// Cf: IntegerTrait::abs docstring
@@ -506,3 +506,10 @@ fn i8_to_fp16x16(x: i8) -> FixedType {
     FixedType { mag: x.mag.into() * ONE_fp16x16, sign: x.sign }
 }

+fn ensure_non_negative_zero(mag: u8, sign: bool) -> i8 {
+    if mag == 0 {
+        IntegerTrait::<i8>::new(mag, false)
+    } else {
+        IntegerTrait::<i8>::new(mag, sign)
+    }
+}
\ No newline at end of file

From b905026c6ca83b5f949dc174afb952fbb1f08721 Mon Sep 17 00:00:00 2001
From: raphaelDkhn <113879115+raphaelDkhn@users.noreply.github.com>
Date: Thu, 24 Aug 2023 21:39:24 +0300
Subject: [PATCH 29/30] Fix Discord link

---
 README.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index dbc13ff86..ae6a624c1 100644
--- a/README.md
+++ b/README.md
@@ -17,7 +17,7 @@
 [![GitHub Workflow Status](https://github.com/gizatechxyz/orion/actions/workflows/test.yaml/badge.svg)](https://github.com/gizatechxyz/orion/actions/workflows/test.yaml)
 [![Project license](https://img.shields.io/github/license/gizatechxyz/orion.svg?style=flat-square)](LICENSE)
 [![Pull Requests welcome](https://img.shields.io/badge/PRs-welcome-ff69b4.svg?style=flat-square)](https://github.com/gizatechxyz/orion/issues?q=is%3Aissue+is%3Aopen)
-[![Join the community](https://dcbadge.vercel.app/api/server/FR3Cd88x6r?style=flat-square)](https://discord.gg/FR3Cd88x6r)
+[![Join the community](https://dcbadge.vercel.app/api/server/FR3Cd88x6r?style=flat-square)](https://discord.gg/kvqVYbCpU3)

 # Orion: An Open-source Framework for Validity and ZK ML ✨

@@ -46,7 +46,7 @@ For a detailed list of changes, please refer to the [CHANGELOG](https://github.c

 ## 💖 Join the community!

-Join the community and help build a safer and transparent AI in our [Discord](https://discord.gg/Kt24CsMb5k)!
+Join the community and help build a safer and transparent AI in our [Discord](https://discord.gg/kvqVYbCpU3)!

 ## ✍️ Authors & contributors

From bec55afcc1491277292e18f5aa284250ad635d34 Mon Sep 17 00:00:00 2001
From: raphaelDkhn <113879115+raphaelDkhn@users.noreply.github.com>
Date: Thu, 24 Aug 2023 21:57:39 +0300
Subject: [PATCH 30/30] Fix discord badge

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index ae6a624c1..d44564954 100644
--- a/README.md
+++ b/README.md
@@ -17,7 +17,7 @@
 [![GitHub Workflow Status](https://github.com/gizatechxyz/orion/actions/workflows/test.yaml/badge.svg)](https://github.com/gizatechxyz/orion/actions/workflows/test.yaml)
 [![Project license](https://img.shields.io/github/license/gizatechxyz/orion.svg?style=flat-square)](LICENSE)
 [![Pull Requests welcome](https://img.shields.io/badge/PRs-welcome-ff69b4.svg?style=flat-square)](https://github.com/gizatechxyz/orion/issues?q=is%3Aissue+is%3Aopen)
-[![Join the community](https://dcbadge.vercel.app/api/server/FR3Cd88x6r?style=flat-square)](https://discord.gg/kvqVYbCpU3)
+[![Join the community](https://dcbadge.vercel.app/api/server/kvqVYbCpU3?style=flat&compact=true)](https://discord.gg/kvqVYbCpU3)

 # Orion: An Open-source Framework for Validity and ZK ML ✨
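
Note on PATCH 26 (editorial addition, not part of the patch series): the refactor replaces the per-iteration `len() == 0` check and `unwrap()` call with a single `match` on `Span::pop_front()`, which tests for emptiness and consumes the next element in one step. Below is a minimal standalone sketch of the pattern; `sum_span` and `example` are hypothetical helpers of ours, and the imports assume the 2023-era Cairo corelib these patches target.

```cairo
// Hypothetical sketch of the PATCH 26 loop pattern; not code from the patches.
use array::ArrayTrait;
use array::SpanTrait;

// Sum a span by matching on `pop_front()`: `Option::Some` yields a snapshot
// of the next element (desnapped with `*`), `Option::None` ends the loop.
fn sum_span(mut vals: Span<u32>) -> u32 {
    let mut result: u32 = 0;
    loop {
        match vals.pop_front() {
            Option::Some(item) => {
                result += *item;
            },
            Option::None(_) => {
                break;
            }
        };
    };
    return result;
}

// Usage: build an array, take a span over it, and fold it.
fn example() -> u32 {
    let mut arr = ArrayTrait::new();
    arr.append(1_u32);
    arr.append(2_u32);
    arr.append(3_u32);
    sum_span(arr.span()) // 6
}
```

Compared with the old form, the `match` needs no separate length check and no `unwrap()`, so the empty-span case is handled by the type system rather than by a manual guard.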
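
Note on PATCH 27 and PATCH 28 (editorial addition, not part of the patch series): Orion's signed integers store a magnitude and a sign flag, so `mag == 0` with `sign == true` would be a distinct "negative zero". PATCH 27 introduces `check_if_zero` for `i32_div`; PATCH 28 renames it to `ensure_non_negative_zero` and routes add, sub, mul, div, and neg through it for every signed-integer width. A self-contained sketch of the rule, using a hypothetical `SignMag` struct instead of Orion's `IntegerTrait` plumbing:

```cairo
// Hypothetical sign-magnitude struct; stands in for Orion's i8/i16/i32/i64/i128.
#[derive(Copy, Drop)]
struct SignMag {
    mag: u32,
    sign: bool, // true means negative
}

// Force the canonical representation: a zero magnitude never carries a
// negative sign, so there is exactly one encoding of zero.
fn ensure_non_negative_zero(mag: u32, sign: bool) -> SignMag {
    if mag == 0 {
        SignMag { mag: mag, sign: false }
    } else {
        SignMag { mag: mag, sign: sign }
    }
}
```

Without the normalization, negating zero by flipping the sign flag (as the `iN_neg` functions do) would produce `{ mag: 0, sign: true }`, a value that is numerically zero but compares unequal to positive zero under field-wise equality.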