implement unsqueeze #222

Merged: 1 commit, merged on Oct 2, 2023
1 change: 1 addition & 0 deletions docs/SUMMARY.md
@@ -80,6 +80,7 @@
* [tensor.quantize\_linear](framework/operators/tensor/tensor.quantize\_linear.md)
* [tensor.dequantize\_linear](framework/operators/tensor/tensor.dequantize\_linear.md)
* [tensor.nonzero](framework/operators/tensor/tensor.nonzero.md)
* [tensor.unsqueeze](framework/operators/tensor/tensor.unsqueeze.md)
* [Neural Network](framework/operators/neural-network/README.md)
* [nn.relu](framework/operators/neural-network/nn.relu.md)
* [nn.leaky\_relu](framework/operators/neural-network/nn.leaky\_relu.md)
3 changes: 2 additions & 1 deletion docs/framework/compatibility.md
@@ -53,5 +53,6 @@ You can see below the list of current supported ONNX Operators:
| [QuantizeLinear](operators/tensor/tensor.quantize\_linear.md) | :white\_check\_mark: |
| [DequantizeLinear](operators/tensor/tensor.quantize\_linear.md) | :white\_check\_mark: |
| [Nonzero](operators/tensor/tensor.nonzero.md) | :white\_check\_mark: |
| [Unsqueeze](operators/tensor/tensor.unsqueeze.md) | :white\_check\_mark: |

Current Operators support: **45/156 (29%)**
Current Operators support: **48/156 (30%)**
1 change: 1 addition & 0 deletions docs/framework/operators/tensor/README.md
@@ -78,6 +78,7 @@ use orion::operators::tensor::TensorTrait;
| [`tensor.dequantize_linear`](tensor.dequantize\_linear.md) | Dequantizes an i8 Tensor using linear dequantization. |
| [`tensor.gather`](tensor.gather.md) | Gather entries of the axis dimension of data. |
| [`tensor.nonzero`](tensor.nonzero.md) | Produces indices of the elements that are non-zero (in row-major order - by dimension). |
| [`tensor.unsqueeze`](tensor.unsqueeze.md) | Inserts single-dimensional entries into the shape of an input tensor. |

## Arithmetic Operations

51 changes: 51 additions & 0 deletions docs/framework/operators/tensor/tensor.unsqueeze.md
@@ -0,0 +1,51 @@
# tensor.unsqueeze

```rust
fn unsqueeze(self: @Tensor<T>, axes: Span<usize>) -> Tensor<T>;
```

Insert single-dimensional entries into the shape of the input tensor (data). Takes one required input, `axes`,
a list of dimension indices; the operator inserts a dimension of size 1 at each listed index of the
output tensor (expanded).

## Args

* `self`(`@Tensor<T>`) - Tensor of data to unsqueeze.
* `axes`(`Span<usize>`) - List of integers indicating the dimensions to be inserted.

## Panics

* Panics if the given axes have duplicate elements.
* Panics if one of the given axes is invalid.

## Returns

Reshaped `Tensor<T>` with the same data as the input.

## Example

```rust
use array::{ArrayTrait, SpanTrait};

use orion::operators::tensor::{TensorTrait, Tensor, U32Tensor};

fn unsqueeze_example() -> Tensor<u32> {
    let tensor = TensorTrait::<u32>::new(
        shape: array![2, 4].span(),
        data: array![0, 1, 2, 3, 4, 5, 6, 7].span(),
    );

    return tensor.unsqueeze(
        axes: array![0, 3].span(),
    );
}
>>> [[[[0]
       [1]
       [2]
       [3]]

      [[4]
       [5]
       [6]
       [7]]]]
```
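For readers more used to NumPy, the following sketch (purely illustrative, not part of the Orion docs) reproduces the example above: inserting size-1 dimensions at axes 0 and 3 turns the `(2, 4)` tensor into a `(1, 2, 4, 1)` tensor while leaving the flat data untouched.

```python
import numpy as np

x = np.arange(8, dtype=np.uint32).reshape(2, 4)   # same data as the example
y = np.expand_dims(np.expand_dims(x, 0), 3)       # insert size-1 dims at axes 0 and 3

print(y.shape)                                    # (1, 2, 4, 1)
print(np.array_equal(y.flatten(), x.flatten()))   # True: the data is unchanged
```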
175 changes: 175 additions & 0 deletions nodegen/node/unsqueeze.py
@@ -0,0 +1,175 @@
import numpy as np
from nodegen.node import RunAll
from ..helpers import make_node, make_test, to_fp, Tensor, Dtype, FixedImpl


class Unsqueeze(RunAll):
    @staticmethod
    def unsqueeze_u32():
        def unsqueeze_2D():
            x = np.random.randint(0, 255, (2, 4)).astype(np.uint32)
            y = np.expand_dims(x, axis=0)
            y = np.expand_dims(y, axis=1)
            y = np.expand_dims(y, axis=4)

            x = Tensor(Dtype.U32, x.shape, x.flatten())
            y = Tensor(Dtype.U32, y.shape, y.flatten())

            name = "unsqueeze_u32_2d"
            make_node([x], [y], name)
            make_test(
                [x], y, "input_0.unsqueeze(array![1, 4, 0].span())", name)

        def unsqueeze_3D():
            x = np.random.randint(0, 255, (20, 10, 5)).astype(np.uint32)
            y = np.expand_dims(x, axis=2)
            y = np.expand_dims(y, axis=4)
            y = np.expand_dims(y, axis=5)

            x = Tensor(Dtype.U32, x.shape, x.flatten())
            y = Tensor(Dtype.U32, y.shape, y.flatten())

            name = "unsqueeze_u32_3d"
            make_node([x], [y], name)
            make_test(
                [x], y, "input_0.unsqueeze(array![5, 4, 2].span())", name)

        unsqueeze_2D()
        unsqueeze_3D()

    @staticmethod
    def unsqueeze_i32():
        def unsqueeze_2D():
            x = np.random.randint(-127, 127, (2, 4)).astype(np.int32)
            y = np.expand_dims(x, axis=0)
            y = np.expand_dims(y, axis=1)
            y = np.expand_dims(y, axis=4)

            x = Tensor(Dtype.I32, x.shape, x.flatten())
            y = Tensor(Dtype.I32, y.shape, y.flatten())

            name = "unsqueeze_i32_2d"
            make_node([x], [y], name)
            make_test(
                [x], y, "input_0.unsqueeze(array![1, 4, 0].span())", name)

        def unsqueeze_3D():
            x = np.random.randint(-127, 127, (20, 10, 5)).astype(np.int32)
            y = np.expand_dims(x, axis=2)
            y = np.expand_dims(y, axis=4)
            y = np.expand_dims(y, axis=5)

            x = Tensor(Dtype.I32, x.shape, x.flatten())
            y = Tensor(Dtype.I32, y.shape, y.flatten())

            name = "unsqueeze_i32_3d"
            make_node([x], [y], name)
            make_test(
                [x], y, "input_0.unsqueeze(array![5, 4, 2].span())", name)

        unsqueeze_2D()
        unsqueeze_3D()

    @staticmethod
    def unsqueeze_i8():
        def unsqueeze_2D():
            x = np.random.randint(-127, 127, (2, 4)).astype(np.int8)
            y = np.expand_dims(x, axis=0)
            y = np.expand_dims(y, axis=1)
            y = np.expand_dims(y, axis=4)

            x = Tensor(Dtype.I8, x.shape, x.flatten())
            y = Tensor(Dtype.I8, y.shape, y.flatten())

            name = "unsqueeze_i8_2d"
            make_node([x], [y], name)
            make_test(
                [x], y, "input_0.unsqueeze(array![1, 4, 0].span())", name)

        def unsqueeze_3D():
            x = np.random.randint(-127, 127, (20, 10, 5)).astype(np.int8)
            y = np.expand_dims(x, axis=2)
            y = np.expand_dims(y, axis=4)
            y = np.expand_dims(y, axis=5)

            x = Tensor(Dtype.I8, x.shape, x.flatten())
            y = Tensor(Dtype.I8, y.shape, y.flatten())

            name = "unsqueeze_i8_3d"
            make_node([x], [y], name)
            make_test(
                [x], y, "input_0.unsqueeze(array![5, 4, 2].span())", name)

        unsqueeze_2D()
        unsqueeze_3D()

    @staticmethod
    def unsqueeze_fp8x23():
        def unsqueeze_2D():
            x = to_fp(np.random.randint(-127, 127, (2, 4)
                                        ).astype(np.int64), FixedImpl.FP8x23)
            y = np.expand_dims(x, axis=0)
            y = np.expand_dims(y, axis=1)
            y = np.expand_dims(y, axis=4)

            x = Tensor(Dtype.FP8x23, x.shape, x.flatten())
            y = Tensor(Dtype.FP8x23, y.shape, y.flatten())

            name = "unsqueeze_fp8x23_2d"
            make_node([x], [y], name)
            make_test(
                [x], y, "input_0.unsqueeze(array![1, 4, 0].span())", name)

        def unsqueeze_3D():
            x = to_fp(np.random.randint(-127, 127, (20, 10, 5)
                                        ).astype(np.int64), FixedImpl.FP8x23)
            y = np.expand_dims(x, axis=2)
            y = np.expand_dims(y, axis=4)
            y = np.expand_dims(y, axis=5)

            x = Tensor(Dtype.FP8x23, x.shape, x.flatten())
            y = Tensor(Dtype.FP8x23, y.shape, y.flatten())

            name = "unsqueeze_fp8x23_3d"
            make_node([x], [y], name)
            make_test(
                [x], y, "input_0.unsqueeze(array![5, 4, 2].span())", name)

        unsqueeze_2D()
        unsqueeze_3D()

    @staticmethod
    def unsqueeze_fp16x16():
        def unsqueeze_2D():
            x = to_fp(np.random.randint(-127, 127, (2, 4)
                                        ).astype(np.int64), FixedImpl.FP16x16)
            y = np.expand_dims(x, axis=0)
            y = np.expand_dims(y, axis=1)
            y = np.expand_dims(y, axis=4)

            x = Tensor(Dtype.FP16x16, x.shape, x.flatten())
            y = Tensor(Dtype.FP16x16, y.shape, y.flatten())

            name = "unsqueeze_fp16x16_2d"
            make_node([x], [y], name)
            make_test(
                [x], y, "input_0.unsqueeze(array![1, 4, 0].span())", name)

        def unsqueeze_3D():
            x = to_fp(np.random.randint(-127, 127, (20, 10, 5)
                                        ).astype(np.int64), FixedImpl.FP16x16)
            y = np.expand_dims(x, axis=2)
            y = np.expand_dims(y, axis=4)
            y = np.expand_dims(y, axis=5)

            x = Tensor(Dtype.FP16x16, x.shape, x.flatten())
            y = Tensor(Dtype.FP16x16, y.shape, y.flatten())

            name = "unsqueeze_fp16x16_3d"
            make_node([x], [y], name)
            make_test(
                [x], y, "input_0.unsqueeze(array![5, 4, 2].span())", name)

        unsqueeze_2D()
        unsqueeze_3D()
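A note on the generator above: the Cairo calls pass the axes in arbitrary order (`array![1, 4, 0]`, `array![5, 4, 2]`), while the NumPy reference outputs are built by applying `np.expand_dims` one axis at a time in ascending order. Both describe the same set of target positions, so the expected shapes agree. A quick, purely illustrative sanity check:

```python
import numpy as np

# 2D case: size-1 dims at positions {0, 1, 4} around a (2, 4) input.
y2d = np.expand_dims(np.expand_dims(np.expand_dims(np.zeros((2, 4)), 0), 1), 4)
print(y2d.shape)   # (1, 1, 2, 4, 1)

# 3D case: size-1 dims at positions {2, 4, 5} around a (20, 10, 5) input.
y3d = np.expand_dims(np.expand_dims(np.expand_dims(np.zeros((20, 10, 5)), 2), 4), 5)
print(y3d.shape)   # (20, 10, 1, 5, 1, 1)
```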
98 changes: 96 additions & 2 deletions src/operators/tensor/core.cairo
@@ -74,7 +74,8 @@ impl TensorSerde<T, impl TSerde: Serde<T>, impl TDrop: Drop<T>> of Serde<Tensor<
/// dequantize_linear - Dequantizes an i8 Tensor using linear dequantization.
/// gather - Gather entries of the axis dimension of data.
/// nonzero - Produces indices of the elements that are non-zero (in row-major order - by dimension).
///
/// unsqueeze - Inserts single-dimensional entries into the shape of an input tensor.
///
trait TensorTrait<T> {
    /// # tensor.new
    ///
@@ -2502,6 +2503,59 @@ trait TensorTrait<T> {
    fn gather(
        self: @Tensor<T>, indices: Tensor<usize>, axis: Option<usize>
    ) -> Tensor<T> ;
    /// # tensor.unsqueeze
    ///
    /// ```rust
    /// fn unsqueeze(self: @Tensor<T>, axes: Span<usize>) -> Tensor<T>;
    /// ```
    ///
    /// Insert single-dimensional entries into the shape of the input tensor (data). Takes one required input, `axes`,
    /// a list of dimension indices; the operator inserts a dimension of size 1 at each listed index of the
    /// output tensor (expanded).
    ///
    /// ## Args
    ///
    /// * `self`(`@Tensor<T>`) - Tensor of data to unsqueeze.
    /// * `axes`(`Span<usize>`) - List of integers indicating the dimensions to be inserted.
    ///
    /// ## Panics
    ///
    /// * Panics if the given axes have duplicate elements.
    /// * Panics if one of the given axes is invalid.
    ///
    /// ## Returns
    ///
    /// Reshaped `Tensor<T>` with the same data as the input.
    ///
    /// ## Example
    ///
    /// ```rust
    /// use array::{ArrayTrait, SpanTrait};
    ///
    /// use orion::operators::tensor::{TensorTrait, Tensor, U32Tensor};
    ///
    /// fn unsqueeze_example() -> Tensor<u32> {
    ///     let tensor = TensorTrait::<u32>::new(
    ///         shape: array![2, 4].span(),
    ///         data: array![0, 1, 2, 3, 4, 5, 6, 7].span(),
    ///     );
    ///
    ///     return tensor.unsqueeze(
    ///         axes: array![0, 3].span(),
    ///     );
    /// }
    /// >>> [[[[0]
    ///        [1]
    ///        [2]
    ///        [3]]
    ///
    ///       [[4]
    ///        [5]
    ///        [6]
    ///        [7]]]]
    /// ```
    ///
    fn unsqueeze(self: @Tensor<T>, axes: Span<usize>) -> Tensor<T>;
}


@@ -2874,4 +2928,44 @@ fn nonzero<T, MAG, impl TTensor: TensorTrait<T>, impl TPartialEq: PartialEq<T>,
    };

    return Tensor::<usize> {shape: array![(*self.shape).len(), stop_k + 1].span(), data: output_data.span()};
}
}

/// Cf: TensorTrait::unsqueeze docstring
fn unsqueeze<T>(self: @Tensor<T>, axes: Span<usize>) -> Tensor<T> {
    let dedupped_array = axes.dedup();
    assert(dedupped_array.len() == axes.len(), 'Duplicated input axes');

    let mut self_shape_copy = *self.shape;
    let mut i: usize = 0;
    let mut added_axes_count: usize = 0;
    let mut output_shape: Array<usize> = ArrayTrait::new();
    // Build the output shape: emit a 1 at every requested axis,
    // otherwise copy the next dimension of the input shape.
    loop {
        if axes.contains(i + added_axes_count) {
            output_shape.append(1);
            added_axes_count += 1;
        } else {
            match self_shape_copy.pop_front() {
                Option::Some(val) => {
                    output_shape.append(*val);
                    i += 1;
                },
                Option::None(_) => {
                    break ();
                }
            };
        };
    };

    // Append any remaining requested axes that point past the last input dimension.
    let mut j: usize = output_shape.len();
    loop {
        if axes.contains(j) {
            output_shape.append(1);
        } else {
            break ();
        }
        j += 1;
    };
    assert(output_shape.len() == axes.len() + (*self.shape).len(), 'Invalid input axes');

    return Tensor::<T> { shape: output_shape.span(), data: *self.data };
}
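The shape construction above can be summarized as: walk the output positions from 0 to `rank(input) + len(axes) - 1`, emit 1 wherever the position appears in `axes`, and otherwise consume the next input dimension. A minimal Python sketch of that rule (the helper name and asserts are illustrative, not part of the library):

```python
def unsqueeze_shape(shape, axes):
    """Insert a size-1 dimension at every output position listed in `axes`."""
    assert len(set(axes)) == len(axes), "Duplicated input axes"
    out_rank = len(shape) + len(axes)
    assert all(a < out_rank for a in axes), "Invalid input axes"
    dims = iter(shape)
    # Walk the output positions: emit 1 where an axis is requested,
    # otherwise consume the next input dimension.
    return tuple(1 if pos in axes else next(dims) for pos in range(out_rank))

print(unsqueeze_shape((2, 4), [0, 3]))          # (1, 2, 4, 1)       -- the doc example
print(unsqueeze_shape((20, 10, 5), [5, 4, 2]))  # (20, 10, 1, 5, 1, 1) -- the 3D test case
```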
7 changes: 7 additions & 0 deletions src/operators/tensor/implementations/tensor_fp16x16.cairo
@@ -213,6 +213,13 @@ impl FP16x16Tensor of TensorTrait<FP16x16> {
    fn nonzero(self: @Tensor<FP16x16>) -> Tensor<usize> {
        core::nonzero(self)
    }

    fn unsqueeze(
        self: @Tensor<FP16x16>,
        axes: Span<usize>
    ) -> Tensor<FP16x16> {
        core::unsqueeze(self, axes)
    }
}

/// Implements addition for `Tensor<FP16x16>` using the `Add` trait.
7 changes: 7 additions & 0 deletions src/operators/tensor/implementations/tensor_fp32x32.cairo
@@ -214,6 +214,13 @@ impl FP32x32Tensor of TensorTrait<FP32x32> {
    fn nonzero(self: @Tensor<FP32x32>) -> Tensor<usize> {
        core::nonzero(self)
    }

    fn unsqueeze(
        self: @Tensor<FP32x32>,
        axes: Span<usize>
    ) -> Tensor<FP32x32> {
        core::unsqueeze(self, axes)
    }
}

/// Implements addition for `Tensor<FP32x32>` using the `Add` trait.