Merge pull request #400 from Dl-Vv/feat-and
Feat: And
raphaelDkhn authored Oct 25, 2023
2 parents d746e23 + 1245027 commit 81c9abb
Showing 65 changed files with 1,660 additions and 57 deletions.
1 change: 1 addition & 0 deletions docs/SUMMARY.md
@@ -88,6 +88,7 @@
* [tensor.sign](framework/operators/tensor/tensor.sign.md)
* [tensor.clip](framework/operators/tensor/tensor.clip.md)
* [tensor.identity](framework/operators/tensor/tensor.identity.md)
* [tensor.and](framework/operators/tensor/tensor.and.md)
* [Neural Network](framework/operators/neural-network/README.md)
* [nn.relu](framework/operators/neural-network/nn.relu.md)
* [nn.leaky\_relu](framework/operators/neural-network/nn.leaky\_relu.md)
115 changes: 58 additions & 57 deletions docs/framework/compatibility.md
@@ -4,62 +4,63 @@ To see the full list of available ONNX Operators, refer to [this table](https://g

You can see below the list of currently supported ONNX Operators:

| Operator | Implemented |
| :-----------------------------------------------------------------: | :------------------: |
| [MatMul](operators/tensor/tensor.matmul.md) | :white\_check\_mark: |
| [MatMulInteger](operators/tensor/tensor.matmul.md) | :white\_check\_mark: |
| [Add](operators/tensor/#arithmetic-operations) | :white\_check\_mark: |
| [Sub](operators/tensor/#arithmetic-operations) | :white\_check\_mark: |
| [Mul](operators/tensor/#arithmetic-operations) | :white\_check\_mark: |
| [Div](operators/tensor/#arithmetic-operations) | :white\_check\_mark: |
| [Equal](operators/tensor/tensor.equal.md) | :white\_check\_mark: |
| [Greater](operators/tensor/tensor.greater.md) | :white\_check\_mark: |
| [GreaterOrEqual](operators/tensor/tensor.greater\_equal.md) | :white\_check\_mark: |
| [Less](operators/tensor/tensor.less.md) | :white\_check\_mark: |
| [LessOrEqual](operators/tensor/tensor.less\_equal.md) | :white\_check\_mark: |
| [Abs](operators/tensor/tensor.abs.md) | :white\_check\_mark: |
| [Neg](operators/tensor/tensor.neg.md) | :white\_check\_mark: |
| [Ceil](operators/tensor/tensor.ceil.md) | :white\_check\_mark: |
| [Exp](operators/tensor/tensor.exp.md) | :white\_check\_mark: |
| [Ln](operators/tensor/tensor.log.md) | :white\_check\_mark: |
| [Reshape](operators/tensor/tensor.reshape.md) | :white\_check\_mark: |
| [Transpose](operators/tensor/tensor.transpose.md) | :white\_check\_mark: |
| [ArgMax](operators/tensor/tensor.argmax.md) | :white\_check\_mark: |
| [ArgMin](operators/tensor/tensor.argmin.md) | :white\_check\_mark: |
| [ReduceSum](operators/tensor/tensor.reduce\_sum.md) | :white\_check\_mark: |
| [CumSum](operators/tensor/tensor.cumsum.md) | :white\_check\_mark: |
| [Cos](operators/tensor/tensor.cos.md) | :white\_check\_mark: |
| [Sin](operators/tensor/tensor.sin.md) | :white\_check\_mark: |
| [Asin](operators/tensor/tensor.asin.md) | :white\_check\_mark: |
| [Flatten](operators/tensor/tensor.flatten.md) | :white\_check\_mark: |
| [Relu](operators/neural-network/nn.relu.md) | :white\_check\_mark: |
| [LeakyRelu](operators/neural-network/nn.leaky\_relu.md) | :white\_check\_mark: |
| [ThresholdedRelu](operators/neural-network/nn.thresholded\_relu.md) | :white\_check\_mark: |
| [Sigmoid](operators/neural-network/nn.sigmoid.md) | :white\_check\_mark: |
| [Softmax](operators/neural-network/nn.softmax.md) | :white\_check\_mark: |
| [LogSoftmax](operators/neural-network/nn.logsoftmax.md) | :white\_check\_mark: |
| [Softsign](operators/neural-network/nn.softsign.md) | :white\_check\_mark: |
| [Softplus](operators/neural-network/nn.softplus.md) | :white\_check\_mark: |
| [Linear](operators/neural-network/nn.linear.md) | :white\_check\_mark: |
| [HardSigmoid](operators/neural-network/nn.hard\_sigmoid.md) | :white\_check\_mark: |
| [Sinh](operators/tensor/tensor.sinh.md) | :white\_check\_mark: |
| [Asinh](operators/tensor/tensor.asinh.md) | :white\_check\_mark: |
| [Cosh](operators/tensor/tensor.cosh.md) | :white\_check\_mark: |
| [ACosh](operators/tensor/tensor.acosh.md) | :white\_check\_mark: |
| [Tanh](operators/tensor/tensor.tanh.md) | :white\_check\_mark: |
| [Acos](operators/tensor/tensor.acos.md) | :white\_check\_mark: |
| [Sqrt](operators/tensor/tensor.sqrt.md) | :white\_check\_mark: |
| [Onehot](operators/tensor/tensor.onehot.md) | :white\_check\_mark: |
| [Slice](operators/tensor/tensor.slice.md) | :white\_check\_mark: |
| [Concat](operators/tensor/tensor.concat.md) | :white\_check\_mark: |
| [Gather](operators/tensor/tensor.gather.md) | :white\_check\_mark: |
| [QuantizeLinear](operators/tensor/tensor.quantize\_linear.md) | :white\_check\_mark: |
| [DequantizeLinear](operators/tensor/tensor.quantize\_linear.md) | :white\_check\_mark: |
| [Nonzero](operators/tensor/tensor.nonzero.md) | :white\_check\_mark: |
| [Squeeze](operators/tensor/tensor.squeeze.md) | :white\_check\_mark: |
| [Unsqueeze](operators/tensor/tensor.unsqueeze.md) | :white\_check\_mark: |
| [Sign](operators/tensor/tensor.sign.md) | :white\_check\_mark: |
| [Clip](operators/tensor/tensor.clip.md) | :white\_check\_mark: |
| [Identity](operators/tensor/tensor.identity.md) | :white\_check\_mark: |
| [And](operators/tensor/tensor.and.md) | :white\_check\_mark: |

Current operator support: **51/156 (33%)**
1 change: 1 addition & 0 deletions docs/framework/operators/tensor/README.md
@@ -83,6 +83,7 @@ use orion::operators::tensor::TensorTrait;
| [`tensor.unsqueeze`](tensor.unsqueeze.md) | Inserts single-dimensional entries to the shape of an input tensor. |
| [`tensor.sign`](tensor.sign.md) | Calculates the sign of the given input tensor element-wise. |
| [`tensor.clip`](tensor.clip.md) | Clip operator limits the given input within an interval. |
| [`tensor.and`](tensor.and.md) | Computes the logical AND of two tensors element-wise. |
| [`tensor.identity`](tensor.identity.md) | Returns a Tensor with the same shape and contents as the input. |

## Arithmetic Operations
67 changes: 67 additions & 0 deletions docs/framework/operators/tensor/tensor.and.md
@@ -0,0 +1,67 @@
# tensor.and

```rust
fn and(self: @Tensor<T>, other: @Tensor<T>) -> Tensor<usize>;
```

Computes the logical AND of two tensors element-wise.
The input tensors must have either:
* Exactly the same shape, or
* The same number of dimensions, where each pair of dimension lengths is either equal or one of them is 1 (e.g., shapes (3, 3) and (1, 3) broadcast to (3, 3)).

## Args

* `self`(`@Tensor<T>`) - The first tensor to be compared
* `other`(`@Tensor<T>`) - The second tensor to be compared

## Panics

* Panics if the shapes are neither equal nor broadcastable.

## Returns

A new `Tensor<usize>` of booleans (0 or 1) with the same shape as the broadcasted inputs.

## Examples

Case 1: Compare tensors with the same shape

```rust
use array::{ArrayTrait, SpanTrait};

use orion::operators::tensor::{TensorTrait, Tensor, U32Tensor};

fn and_example() -> Tensor<usize> {
    let tensor_1 = TensorTrait::<u32>::new(
        shape: array![3, 3].span(), data: array![0, 1, 2, 3, 4, 5, 6, 7, 8].span(),
    );

    let tensor_2 = TensorTrait::<u32>::new(
        shape: array![3, 3].span(), data: array![0, 1, 2, 0, 1, 2, 0, 1, 2].span(),
    );

    return tensor_1.and(@tensor_2);
}
>>> [0,1,1,0,1,1,0,1,1]
```

Case 2: Compare tensors with different shapes

```rust
use array::{ArrayTrait, SpanTrait};

use orion::operators::tensor::{TensorTrait, Tensor, U32Tensor};

fn and_example() -> Tensor<usize> {
    let tensor_1 = TensorTrait::<u32>::new(
        shape: array![3, 3].span(), data: array![0, 1, 2, 3, 4, 5, 6, 7, 8].span(),
    );

    let tensor_2 = TensorTrait::<u32>::new(
        shape: array![1, 3].span(), data: array![0, 1, 2].span(),
    );

    return tensor_1.and(@tensor_2);
}
>>> [0,1,1,0,1,1,0,1,1]
```
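
For a quick sanity check, both cases above match NumPy's element-wise `logical_and` under the same broadcasting rules (a reference snippet for verification only, not part of the Orion API):

```python
import numpy as np

# Shapes (3, 3) and (1, 3), as in Case 2; NumPy broadcasts the second
# operand across rows the same way tensor.and does.
a = np.array([[0, 1, 2], [3, 4, 5], [6, 7, 8]], dtype=np.uint32)
b = np.array([[0, 1, 2]], dtype=np.uint32)

print(np.logical_and(a, b).astype(np.uint32).flatten())
# [0 1 1 0 1 1 0 1 1], matching the Case 2 output above
```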
168 changes: 168 additions & 0 deletions nodegen/node/and.py
@@ -0,0 +1,168 @@
import numpy as np
from nodegen.node import RunAll
from ..helpers import make_node, make_test, to_fp, Tensor, Dtype, FixedImpl


class And(RunAll):
    @staticmethod
    def and_u32():
        def default():
            x = np.random.randint(0, 6, (3, 3, 3)).astype(np.uint32)
            y = np.random.randint(0, 6, (3, 3, 3)).astype(np.uint32)
            z = np.logical_and(x, y)

            x = Tensor(Dtype.U32, x.shape, x.flatten())
            y = Tensor(Dtype.U32, y.shape, y.flatten())
            z = Tensor(Dtype.U32, z.shape, z.flatten())

            name = "and_u32"
            make_node([x, y], [z], name)
            make_test([x, y], z, "input_0.and(@input_1)", name)

        def broadcast():
            x = np.random.randint(0, 6, (2, 2)).astype(np.uint32)
            y = np.random.randint(0, 6, (1, 2)).astype(np.uint32)
            z = np.logical_and(x, y)

            x = Tensor(Dtype.U32, x.shape, x.flatten())
            y = Tensor(Dtype.U32, y.shape, y.flatten())
            z = Tensor(Dtype.U32, z.shape, z.flatten())

            name = "and_u32_broadcast"
            make_node([x, y], [z], name)
            make_test([x, y], z, "input_0.and(@input_1)", name)

        default()
        broadcast()

    @staticmethod
    def and_i32():
        def default():
            x = np.random.randint(-3, 3, (3, 3, 3)).astype(np.int32)
            y = np.random.randint(-3, 3, (3, 3, 3)).astype(np.int32)
            z = np.logical_and(x, y)

            x = Tensor(Dtype.I32, x.shape, x.flatten())
            y = Tensor(Dtype.I32, y.shape, y.flatten())
            z = Tensor(Dtype.U32, z.shape, z.flatten())

            name = "and_i32"
            make_node([x, y], [z], name)
            make_test([x, y], z, "input_0.and(@input_1)", name)

        def broadcast():
            x = np.random.randint(-3, 3, (2, 2)).astype(np.int32)
            y = np.random.randint(-3, 3, (1, 2)).astype(np.int32)
            z = np.logical_and(x, y)

            x = Tensor(Dtype.I32, x.shape, x.flatten())
            y = Tensor(Dtype.I32, y.shape, y.flatten())
            z = Tensor(Dtype.U32, z.shape, z.flatten())

            name = "and_i32_broadcast"
            make_node([x, y], [z], name)
            make_test([x, y], z, "input_0.and(@input_1)", name)

        default()
        broadcast()

    @staticmethod
    def and_i8():
        def default():
            x = np.random.randint(-3, 3, (3, 3, 3)).astype(np.int8)
            y = np.random.randint(-3, 3, (3, 3, 3)).astype(np.int8)
            z = np.logical_and(x, y)

            x = Tensor(Dtype.I8, x.shape, x.flatten())
            y = Tensor(Dtype.I8, y.shape, y.flatten())
            z = Tensor(Dtype.U32, z.shape, z.flatten())

            name = "and_i8"
            make_node([x, y], [z], name)
            make_test([x, y], z, "input_0.and(@input_1)", name)

        def broadcast():
            x = np.random.randint(-3, 3, (2, 2)).astype(np.int8)
            y = np.random.randint(-3, 3, (1, 2)).astype(np.int8)
            z = np.logical_and(x, y)

            x = Tensor(Dtype.I8, x.shape, x.flatten())
            y = Tensor(Dtype.I8, y.shape, y.flatten())
            z = Tensor(Dtype.U32, z.shape, z.flatten())

            name = "and_i8_broadcast"
            make_node([x, y], [z], name)
            make_test([x, y], z, "input_0.and(@input_1)", name)

        default()
        broadcast()

    @staticmethod
    def and_fp8x23():
        def default():
            x = np.random.randint(-3, 3, (3, 3, 3)).astype(np.float64)
            y = np.random.randint(-3, 3, (3, 3, 3)).astype(np.float64)
            z = np.logical_and(x, y)

            x = Tensor(Dtype.FP8x23, x.shape, to_fp(
                x.flatten(), FixedImpl.FP8x23))
            y = Tensor(Dtype.FP8x23, y.shape, to_fp(
                y.flatten(), FixedImpl.FP8x23))
            z = Tensor(Dtype.U32, z.shape, z.flatten())

            name = "and_fp8x23"
            make_node([x, y], [z], name)
            make_test([x, y], z, "input_0.and(@input_1)", name)

        def broadcast():
            x = np.random.randint(-3, 3, (2, 2)).astype(np.float64)
            y = np.random.randint(-3, 3, (1, 2)).astype(np.float64)
            z = np.logical_and(x, y)

            x = Tensor(Dtype.FP8x23, x.shape, to_fp(
                x.flatten(), FixedImpl.FP8x23))
            y = Tensor(Dtype.FP8x23, y.shape, to_fp(
                y.flatten(), FixedImpl.FP8x23))
            z = Tensor(Dtype.U32, z.shape, z.flatten())

            name = "and_fp8x23_broadcast"
            make_node([x, y], [z], name)
            make_test([x, y], z, "input_0.and(@input_1)", name)

        default()
        broadcast()

    @staticmethod
    def and_fp16x16():
        def default():
            x = np.random.randint(-3, 3, (3, 3, 3)).astype(np.float64)
            y = np.random.randint(-3, 3, (3, 3, 3)).astype(np.float64)
            z = np.logical_and(x, y)

            x = Tensor(Dtype.FP16x16, x.shape, to_fp(
                x.flatten(), FixedImpl.FP16x16))
            y = Tensor(Dtype.FP16x16, y.shape, to_fp(
                y.flatten(), FixedImpl.FP16x16))
            z = Tensor(Dtype.U32, z.shape, z.flatten())

            name = "and_fp16x16"
            make_node([x, y], [z], name)
            make_test([x, y], z, "input_0.and(@input_1)", name)

        def broadcast():
            x = np.random.randint(-3, 3, (2, 2)).astype(np.float64)
            y = np.random.randint(-3, 3, (1, 2)).astype(np.float64)
            z = np.logical_and(x, y)

            x = Tensor(Dtype.FP16x16, x.shape, to_fp(
                x.flatten(), FixedImpl.FP16x16))
            y = Tensor(Dtype.FP16x16, y.shape, to_fp(
                y.flatten(), FixedImpl.FP16x16))
            z = Tensor(Dtype.U32, z.shape, z.flatten())

            name = "and_fp16x16_broadcast"
            make_node([x, y], [z], name)
            make_test([x, y], z, "input_0.and(@input_1)", name)

        default()
        broadcast()
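
The four fixed-point cases above encode their float inputs with the repo's `to_fp` helper before wrapping them in `Tensor`. As an illustration only (an assumption about the encoding; the actual helper lives in `nodegen/helpers` and may differ in rounding and dtype), FP8x23 and FP16x16 are Q-format encodings that scale values by 2^23 and 2^16 respectively:

```python
import numpy as np

# Hypothetical sketch of the Q-format encoding assumed by to_fp; the real
# helper may differ in rounding mode and output dtype.
def to_fp_sketch(arr: np.ndarray, frac_bits: int) -> np.ndarray:
    """Scale floats by 2**frac_bits and round to integers."""
    return np.round(arr * (1 << frac_bits)).astype(np.int64)

vals = np.array([1.0, -2.5, 0.25])
print(to_fp_sketch(vals, 23))  # FP8x23:  [  8388608 -20971520   2097152]
print(to_fp_sketch(vals, 16))  # FP16x16: [  65536 -163840   16384]
```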