feat: lp pool #634

Open · wants to merge 1 commit into base: develop
@@ -19,4 +19,5 @@ Orion currently supports only fixed point data types for `TreeEnsembleTrait`.

| function | description |
| --- | --- |
| [`tree_ensemble.predict`](tree_ensemble.predict.md) | Returns the regressed values for each input in a batch. |

@@ -2,23 +2,23 @@

```rust
fn predict(X: @Tensor<T>,
    nodes_splits: Tensor<T>,
    nodes_featureids: Span<usize>,
    nodes_modes: Span<MODE>,
    nodes_truenodeids: Span<usize>,
    nodes_falsenodeids: Span<usize>,
    nodes_trueleafs: Span<usize>,
    nodes_falseleafs: Span<usize>,
    leaf_targetids: Span<usize>,
    leaf_weights: Tensor<T>,
    tree_roots: Span<usize>,
    post_transform: POST_TRANSFORM,
    aggregate_function: AGGREGATE_FUNCTION,
    nodes_hitrates: Option<Tensor<T>>,
    nodes_missing_value_tracks_true: Option<Span<usize>>,
    membership_values: Option<Tensor<T>>,
    n_targets: usize
) -> MutMatrix::<T>;
```

Tree Ensemble operator. Returns the regressed values for each input in a batch. Inputs have dimensions [N, F] where N is the input batch size and F is the number of input features. Outputs have dimensions [N, num_targets] where N is the batch size and num_targets is the number of targets, which is a configurable attribute.
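The flat-array tree encoding implied by the parameters above can be illustrated with a short Python sketch. This is a simplified model with a single BRANCH_LEQ-style tree and SUM aggregation; the array contents are made-up illustration data, not Orion's implementation.

```python
# Minimal sketch of one regression tree stored as flat arrays, SUM aggregation.
# Names mirror the predict() parameters; values are illustrative only.
nodes_splits = [0.5, 1.5]       # threshold compared at each internal node
nodes_featureids = [0, 1]       # which input feature each node inspects
nodes_truenodeids = [1, 0]      # index of the true-branch child (node or leaf id)
nodes_falsenodeids = [0, 1]     # index of the false-branch child
nodes_trueleafs = [0, 1]        # 1 => the true branch points at a leaf
nodes_falseleafs = [1, 1]       # 1 => the false branch points at a leaf
leaf_targetids = [0, 0]         # target index each leaf contributes to
leaf_weights = [10.0, 20.0]     # value each leaf contributes
tree_roots = [0]                # root node id of each tree in the ensemble
n_targets = 1

def predict_one(x):
    """Regress one input row by walking every tree to a leaf."""
    out = [0.0] * n_targets
    for root in tree_roots:
        node, is_leaf = root, False
        while not is_leaf:
            if x[nodes_featureids[node]] <= nodes_splits[node]:  # BRANCH_LEQ
                node, is_leaf = nodes_truenodeids[node], nodes_trueleafs[node] == 1
            else:
                node, is_leaf = nodes_falsenodeids[node], nodes_falseleafs[node] == 1
        out[leaf_targetids[node]] += leaf_weights[node]  # SUM aggregation
    return out
```

For a batch of N rows this is repeated per row, giving the [N, num_targets] output shape described above.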
@@ -50,7 +50,7 @@

## Type Constraints

`T` must be fixed point

## Examples

90 changes: 90 additions & 0 deletions docs/framework/operators/neural-network/nn.lp_pool.md
@@ -0,0 +1,90 @@
# NNTrait::lp_pool

```rust
fn lp_pool(
    X: @Tensor<T>,
    auto_pad: Option<AUTO_PAD>,
    ceil_mode: Option<usize>,
    dilations: Option<Span<usize>>,
    kernel_shape: Span<usize>,
    p: Option<usize>,
    pads: Option<Span<usize>>,
    strides: Option<Span<usize>>,
    count_include_pad: Option<usize>,
) -> Tensor<T>;
```

LpPool consumes an input tensor X and applies Lp pooling across the tensor according to kernel sizes, stride sizes, and pad lengths. Lp pooling consists of computing the Lp norm over all values in a window of the input tensor, as defined by the kernel size, and downsampling the data into the output tensor Y for further processing.
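The windowed p-norm can be sketched in plain Python floats. This is an illustrative model of the computation, not Orion's fixed-point implementation; with p = 2 each output cell is the Euclidean norm of its window.

```python
def lp_pool_2d(x, kernel, strides, p=2):
    """Lp pooling on a 2-D list of lists: each output cell is the
    p-norm of the values under the kernel window (no padding)."""
    kh, kw = kernel
    sh, sw = strides
    out_h = (len(x) - kh) // sh + 1
    out_w = (len(x[0]) - kw) // sw + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            window = [x[i * sh + a][j * sw + b]
                      for a in range(kh) for b in range(kw)]
            row.append(sum(abs(v) ** p for v in window) ** (1.0 / p))
        out.append(row)
    return out
```

For the 4x4 input holding 1..16 used in the example below, a 2x2 kernel, p = 2, and stride 1, the top-left cell is (1² + 2² + 5² + 6²)^(1/2) = √66 ≈ 8.124, matching the expected output.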

## Args

* `X`(`@Tensor<T>`) - Input data tensor from the previous operator; dimensions for image case are (N x C x H x W), where N is the batch size, C is the number of channels, and H and W are the height and the width of the data. For non image case, the dimensions are in the form of (N x C x D1 x D2 ... Dn), where N is the batch size.
* `auto_pad`(`Option<AUTO_PAD>`) - Default is NOTSET, auto_pad must be either NOTSET, SAME_UPPER, SAME_LOWER or VALID. NOTSET means explicit padding is used. SAME_UPPER or SAME_LOWER mean pad the input so that `output_shape[i] = ceil(input_shape[i] / strides[i])` for each axis `i`.
* `ceil_mode`(`Option<usize>`) - Default is 0. Whether to use ceil (1) or floor (0, the default) to compute the output shape.
* `dilations`(`Option<Span<usize>>`) - Dilation value along each spatial axis of the filter. If not present, the dilation defaults to 1 along each spatial axis.
* `kernel_shape`(`Span<usize>`) - The size of the kernel along each axis.
* `p`(`Option<usize>`) - Default is 2. The p value of the Lp norm used to pool over the input data.
* `pads`(`Option<Span<usize>>`) - Padding at the beginning and end of each spatial axis; each value must be greater than or equal to 0 and represents the number of pixels added to the beginning and end of the corresponding axis. The `pads` format is [x1_begin, x2_begin, ..., x1_end, x2_end, ...], where xi_begin is the number of pixels added at the beginning of axis `i` and xi_end the number added at the end of axis `i`. This attribute cannot be used together with the auto_pad attribute. If not present, the padding defaults to 0 along the start and end of each spatial axis.
* `strides`(`Option<Span<usize>>`) - Stride along each spatial axis. If not present, the stride defaults to 1 along each spatial axis.
* `count_include_pad`(`Option<usize>`) - Default is 0. Whether to include pad pixels when calculating values for the edges (with the default 0, pad pixels are not counted).
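Taken together, `kernel_shape`, `pads`, `strides`, `dilations`, and `ceil_mode` determine each output dimension. A small Python sketch of the standard ONNX pooling shape formula follows; this is an assumption based on the ONNX convention, not code taken from Orion.

```python
import math

def pool_output_dim(in_dim, kernel, stride=1, pad_begin=0, pad_end=0,
                    dilation=1, ceil_mode=0):
    """Output size of one spatial axis under the ONNX pooling convention."""
    # Effective kernel extent once dilation is applied.
    eff_kernel = (kernel - 1) * dilation + 1
    span = in_dim + pad_begin + pad_end - eff_kernel
    # ceil_mode=1 rounds up; the default rounds down.
    return math.ceil(span / stride) + 1 if ceil_mode else span // stride + 1
```

For the 4x4 example below with kernel 2 and stride 1 this gives 3 per axis, i.e. a 3x3 output.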

## Returns

A `Tensor<T>` - output data tensor from Lp pooling across the input tensor.

## Examples

```rust
use orion::operators::nn::NNTrait;
use orion::numbers::FixedTrait;
use orion::operators::nn::FP16x16NN;
use orion::numbers::FP16x16;
use orion::operators::tensor::{Tensor, TensorTrait, FP16x16Tensor};


fn lp_pool_example() -> Tensor<FP16x16> {
    let mut shape = ArrayTrait::<usize>::new();
    shape.append(1);
    shape.append(1);
    shape.append(4);
    shape.append(4);

    let mut data = ArrayTrait::new();
    data.append(FP16x16 { mag: 65536, sign: false });
    data.append(FP16x16 { mag: 131072, sign: false });
    data.append(FP16x16 { mag: 196608, sign: false });
    data.append(FP16x16 { mag: 262144, sign: false });
    data.append(FP16x16 { mag: 327680, sign: false });
    data.append(FP16x16 { mag: 393216, sign: false });
    data.append(FP16x16 { mag: 458752, sign: false });
    data.append(FP16x16 { mag: 524288, sign: false });
    data.append(FP16x16 { mag: 589824, sign: false });
    data.append(FP16x16 { mag: 655360, sign: false });
    data.append(FP16x16 { mag: 720896, sign: false });
    data.append(FP16x16 { mag: 786432, sign: false });
    data.append(FP16x16 { mag: 851968, sign: false });
    data.append(FP16x16 { mag: 917504, sign: false });
    data.append(FP16x16 { mag: 983040, sign: false });
    data.append(FP16x16 { mag: 1048576, sign: false });
    let mut X = TensorTrait::new(shape.span(), data.span());

    return NNTrait::lp_pool(
        @X,
        Option::None,
        Option::None,
        Option::None,
        array![2, 2].span(),
        Option::Some(2),
        Option::None,
        Option::Some(array![1, 1].span()),
        Option::None,
    );
}


>>> [[[[ 8.124039,  9.899495, 11.74734 ],
       [15.556349, 17.492855, 19.442223],
       [23.366642, 25.337719, 27.313   ]]]]
```
1 change: 1 addition & 0 deletions docs/framework/operators/tensor/README.md
@@ -132,6 +132,7 @@ use orion::operators::tensor::TensorTrait;
| [`tensor.optional`](tensor.optional.md) | Constructs an optional-type value containing either an empty optional of a certain type specified by the attribute, or a non-empty value containing the input element. |
| [`tensor.dynamic_quantize_linear`](tensor.dynamic\_quantize\_linear.md) | Computes the Scale, Zero Point and FP32->8Bit conversion of FP32 Input data. |
| [`tensor.scatter_nd`](tensor.scatter\_nd.md) | The output of the operation is produced by creating a copy of the input data, and then updating its value to values specified by updates at specific index positions specified by indices. Its output shape is the same as the shape of data |
| [`tensor.center_crop_pad`](tensor.center\_crop\_pad.md) | Center crop or pad an input to given dimensions. |
| [`tensor.label_encoder`](tensor.label\_encoder.md) | Maps each element in the input tensor to another value. |
| [`tensor.bit_shift`](tensor.bit\_shift.md) | Bitwise shift operator performs element-wise operation. For each input element, if the attribute "direction" is "RIGHT", this operator moves its binary representation toward the right side so that the input value is effectively decreased. If the attribute "direction" is "LEFT", bits of binary representation moves toward the left side, which results the increase of its actual value. |
| [`tensor.eye_like`](tensor.eye\_like.md) | Generate a 2D tensor (matrix) with ones on the diagonal and zeros everywhere else. Only 2D tensors are supported, i.e. input T1 must be of rank 2. The shape of the output tensor is the same as the input tensor. |
2 changes: 1 addition & 1 deletion docs/framework/operators/tensor/tensor.qlinear_conv.md
@@ -29,7 +29,7 @@ It means they must be either scalars (per tensor) or 1-D tensors (per output channel).
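The per-tensor versus per-channel distinction can be sketched in Python using the usual linear quantization relation x_real = (x_q - zero_point) * scale. This is an assumption based on the ONNX QLinearConv convention; the variable names and values are illustrative, not Orion's API.

```python
def dequantize(x_q, scale, zero_point):
    # x_real = (x_q - zero_point) * scale
    return (x_q - zero_point) * scale

# Per tensor: a single (scale, zero_point) pair covers the whole tensor.
x_real = dequantize(130, 0.5, 128)  # (130 - 128) * 0.5 = 1.0

# Per output channel: one (scale, zero_point) per channel of W.
w_scales = [0.1, 0.2]
w_zero_points = [0, 0]
w_q = [[10, 20], [30, 40]]  # rows = output channels
w_real = [[dequantize(v, w_scales[c], w_zero_points[c]) for v in w_q[c]]
          for c in range(len(w_q))]
```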

## Args

* `self`(`@Tensor<i8>`) - Quantized input data tensor, has size (N x C x H x W), where N is the batch size, C is the number of channels, and H and W are the height and width. Note that this is for the 2D image. Otherwise the size is (N x C x D1 x D2 ... x Dn).
* `X_scale`(`@Tensor<T>`) - Scale for input `X`.
* `X_zero_point`(`@Tensor<T>`) - Zero point for input `X`.
* `W`(`@Tensor<i8>`) - Quantized weight tensor that will be used in the convolutions; has size (M x C/group x kH x kW), where C is the number of channels, and kH and kW are the height and width of the kernel, and M is the number of feature maps. For more than 2 dimensions, the kernel shape will be (M x C/group x k1 x k2 x ... x kn), where (k1 x k2 x ... kn) is the dimension of the kernel.