Commit 1245027

add AND in fp16x16w and fp8x23w + missing doc

raphaelDkhn committed Oct 25, 2023
1 parent e97c14f
Showing 7 changed files with 93 additions and 57 deletions.
1 change: 1 addition & 0 deletions docs/SUMMARY.md
@@ -88,6 +88,7 @@
* [tensor.sign](framework/operators/tensor/tensor.sign.md)
* [tensor.clip](framework/operators/tensor/tensor.clip.md)
* [tensor.identity](framework/operators/tensor/tensor.identity.md)
* [tensor.and](framework/operators/tensor/tensor.and.md)
* [Neural Network](framework/operators/neural-network/README.md)
* [nn.relu](framework/operators/neural-network/nn.relu.md)
* [nn.leaky\_relu](framework/operators/neural-network/nn.leaky\_relu.md)
115 changes: 58 additions & 57 deletions docs/framework/compatibility.md
@@ -4,62 +4,63 @@ To see the full list of available ONNX Operators refer to [this table](https://g

You can see below the list of currently supported ONNX Operators:

| Operator | Implemented |
| :-----------------------------------------------------------------: | :------------------: |
| [MatMul](operators/tensor/tensor.matmul.md) | :white\_check\_mark: |
| [MatMulInteger](operators/tensor/tensor.matmul.md) | :white\_check\_mark: |
| [Add](operators/tensor/#arithmetic-operations) | :white\_check\_mark: |
| [Sub](operators/tensor/#arithmetic-operations) | :white\_check\_mark: |
| [Mul](operators/tensor/#arithmetic-operations) | :white\_check\_mark: |
| [Div](operators/tensor/#arithmetic-operations) | :white\_check\_mark: |
| [Equal](operators/tensor/tensor.equal.md) | :white\_check\_mark: |
| [Greater](operators/tensor/tensor.greater.md) | :white\_check\_mark: |
| [GreaterOrEqual](operators/tensor/tensor.greater\_equal.md) | :white\_check\_mark: |
| [Less](operators/tensor/tensor.less.md) | :white\_check\_mark: |
| [LessOrEqual](operators/tensor/tensor.less\_equal.md) | :white\_check\_mark: |
| [Abs](operators/tensor/tensor.abs.md) | :white\_check\_mark: |
| [Neg](operators/tensor/tensor.neg.md) | :white\_check\_mark: |
| [Ceil](operators/tensor/tensor.ceil.md) | :white\_check\_mark: |
| [Exp](operators/tensor/tensor.exp.md) | :white\_check\_mark: |
| [Ln](operators/tensor/tensor.log.md) | :white\_check\_mark: |
| [Reshape](operators/tensor/tensor.reshape.md) | :white\_check\_mark: |
| [Transpose](operators/tensor/tensor.transpose.md) | :white\_check\_mark: |
| [ArgMax](operators/tensor/tensor.argmax.md) | :white\_check\_mark: |
| [ArgMin](operators/tensor/tensor.argmin.md) | :white\_check\_mark: |
| [ReduceSum](operators/tensor/tensor.reduce\_sum.md) | :white\_check\_mark: |
| [CumSum](operators/tensor/tensor.cumsum.md) | :white\_check\_mark: |
| [Cos](operators/tensor/tensor.cos.md) | :white\_check\_mark: |
| [Sin](operators/tensor/tensor.sin.md) | :white\_check\_mark: |
| [Asin](operators/tensor/tensor.asin.md) | :white\_check\_mark: |
| [Flatten](operators/tensor/tensor.flatten.md) | :white\_check\_mark: |
| [Relu](operators/neural-network/nn.relu.md) | :white\_check\_mark: |
| [LeakyRelu](operators/neural-network/nn.leaky\_relu.md) | :white\_check\_mark: |
| [ThresholdedRelu](operators/neural-network/nn.thresholded\_relu.md) | :white\_check\_mark: |
| [Sigmoid](operators/neural-network/nn.sigmoid.md) | :white\_check\_mark: |
| [Softmax](operators/neural-network/nn.softmax.md) | :white\_check\_mark: |
| [LogSoftmax](operators/neural-network/nn.logsoftmax.md) | :white\_check\_mark: |
| [Softsign](operators/neural-network/nn.softsign.md) | :white\_check\_mark: |
| [Softplus](operators/neural-network/nn.softplus.md) | :white\_check\_mark: |
| [Linear](operators/neural-network/nn.linear.md) | :white\_check\_mark: |
| [HardSigmoid](operators/neural-network/nn.hard\_sigmoid.md) | :white\_check\_mark: |
| [Sinh](operators/tensor/tensor.sinh.md) | :white\_check\_mark: |
| [Asinh](operators/tensor/tensor.asinh.md) | :white\_check\_mark: |
| [Cosh](operators/tensor/tensor.cosh.md) | :white\_check\_mark: |
| [ACosh](operators/tensor/tensor.acosh.md) | :white\_check\_mark: |
| [Tanh](operators/tensor/tensor.tanh.md) | :white\_check\_mark: |
| [Acos](operators/tensor/tensor.acos.md) | :white\_check\_mark: |
| [Sqrt](operators/tensor/tensor.sqrt.md) | :white\_check\_mark: |
| [Onehot](operators/tensor/tensor.onehot.md) | :white\_check\_mark: |
| [Slice](operators/tensor/tensor.slice.md) | :white\_check\_mark: |
| [Concat](operators/tensor/tensor.concat.md) | :white\_check\_mark: |
| [Gather](operators/tensor/tensor.gather.md) | :white\_check\_mark: |
| [QuantizeLinear](operators/tensor/tensor.quantize\_linear.md) | :white\_check\_mark: |
| [DequantizeLinear](operators/tensor/tensor.quantize\_linear.md) | :white\_check\_mark: |
| [Nonzero](operators/tensor/tensor.nonzero.md) | :white\_check\_mark: |
| [Squeeze](operators/tensor/tensor.squeeze.md) | :white\_check\_mark: |
| [Unsqueeze](operators/tensor/tensor.unsqueeze.md) | :white\_check\_mark: |
| [Sign](operators/tensor/tensor.sign.md) | :white\_check\_mark: |
| [Clip](operators/tensor/tensor.clip.md) | :white\_check\_mark: |
| [Identity](operators/tensor/tensor.identity.md) | :white\_check\_mark: |
| [And](operators/tensor/tensor.and.md) | :white\_check\_mark: |

Current Operators support: **51/156 (33%)**
8 changes: 8 additions & 0 deletions src/numbers.cairo
@@ -390,6 +390,10 @@ impl FP8x23WNumber of NumberTrait<FP8x23W, u64> {
    fn sign(self: FP8x23W) -> FP8x23W {
        core_fp8x23wide::sign(self)
    }

    fn and(lhs: FP8x23W, rhs: FP8x23W) -> bool {
        comp_fp8x23wide::and(lhs, rhs)
    }
}

use orion::numbers::fixed_point::implementations::fp16x16::core::{FP16x16Impl, FP16x16};
@@ -732,6 +736,10 @@ impl FP16x16WNumber of NumberTrait<FP16x16W, u64> {
    fn sign(self: FP16x16W) -> FP16x16W {
        core_fp16x16wide::sign(self)
    }

    fn and(lhs: FP16x16W, rhs: FP16x16W) -> bool {
        comp_fp16x16wide::and(lhs, rhs)
    }
}
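
The two `NumberTrait` additions above follow the same delegation pattern: the shared numeric trait exposes `and`, and each wide fixed-point type forwards to its own comparison helper. A hypothetical Rust sketch of that pattern (the trait name, struct fields, and helper logic here are illustrative assumptions, not the actual Orion API):

```rust
// Illustrative sketch of the delegation pattern in numbers.cairo:
// a shared trait exposes `and`; the type's impl supplies the logic.
trait NumberLike: Sized {
    fn and(lhs: Self, rhs: Self) -> bool;
}

// Stand-in for FP16x16W: a magnitude plus a sign flag (assumed layout).
#[derive(Clone, Copy)]
struct Fp16x16W {
    mag: u64,   // magnitude in Q16.16 fixed-point
    sign: bool, // true = negative
}

impl NumberLike for Fp16x16W {
    fn and(lhs: Self, rhs: Self) -> bool {
        // Logical AND: an operand counts as true iff its magnitude is nonzero.
        lhs.mag != 0 && rhs.mag != 0
    }
}

fn main() {
    let one = Fp16x16W { mag: 1 << 16, sign: false }; // 1.0
    let zero = Fp16x16W { mag: 0, sign: false };      // 0.0
    assert!(Fp16x16W::and(one, Fp16x16W { mag: 3, sign: true }));
    assert!(!Fp16x16W::and(one, zero));
}
```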

use orion::numbers::fixed_point::implementations::fp64x64::core::{FP64x64Impl, FP64x64};
@@ -35,6 +35,15 @@ fn or(a: FP16x16W, b: FP16x16W) -> bool {
}
}

fn and(a: FP16x16W, b: FP16x16W) -> bool {
    let zero = FixedTrait::new(0, false);
    if a == zero || b == zero {
        return false;
    } else {
        return true;
    }
}
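
The comparison helper implements the usual numeric truth table: an operand is treated as true exactly when it is nonzero. A minimal sketch of the same logic on a plain signed integer, standing in for the fixed-point type (an assumption for illustration):

```rust
// Same truth table as the Cairo helper above: the result is false if
// either operand is zero, true otherwise.
fn and(a: i64, b: i64) -> bool {
    a != 0 && b != 0
}

fn main() {
    assert!(and(3, -7));
    assert!(!and(0, 5));
    assert!(!and(4, 0));
    assert!(!and(0, 0));
}
```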

// Tests --------------------------------------------------------------------------------------------------------------

#[test]
@@ -35,6 +35,15 @@ fn or(a: FP8x23W, b: FP8x23W) -> bool {
}
}

fn and(a: FP8x23W, b: FP8x23W) -> bool {
    let zero = FixedTrait::new(0, false);
    if a == zero || b == zero {
        return false;
    } else {
        return true;
    }
}

// Tests --------------------------------------------------------------------------------------------------------------

#[test]
4 changes: 4 additions & 0 deletions src/operators/tensor/implementations/tensor_fp16x16wide.cairo
@@ -242,6 +242,10 @@ impl FP16x16WTensor of TensorTrait<FP16x16W> {
        core::clip(self, min, max)
    }

    fn and(self: @Tensor<FP16x16W>, other: @Tensor<FP16x16W>) -> Tensor<usize> {
        math::and::and(self, other)
    }

    fn identity(self: @Tensor<FP16x16W>) -> Tensor<FP16x16W> {
        core::identity(self)
    }
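The tensor-level `and` returns a `Tensor<usize>` of 0/1 flags, one per element. A sketch of that element-wise semantics on flat slices, ignoring the shape and broadcasting handling of the real implementation (an assumption for illustration):

```rust
// Element-wise logical AND over two equal-length flat buffers:
// each output element is 1 when both inputs are nonzero, 0 otherwise.
fn tensor_and(a: &[i64], b: &[i64]) -> Vec<usize> {
    assert_eq!(a.len(), b.len(), "shapes must match in this sketch");
    a.iter()
        .zip(b.iter())
        .map(|(&x, &y)| usize::from(x != 0 && y != 0))
        .collect()
}

fn main() {
    let out = tensor_and(&[1, 0, -2, 0], &[3, 5, 4, 0]);
    assert_eq!(out, vec![1, 0, 1, 0]);
}
```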
4 changes: 4 additions & 0 deletions src/operators/tensor/implementations/tensor_fp8x23wide.cairo
@@ -234,6 +234,10 @@ impl FP8x23WTensor of TensorTrait<FP8x23W> {
        core::clip(self, min, max)
    }

    fn and(self: @Tensor<FP8x23W>, other: @Tensor<FP8x23W>) -> Tensor<usize> {
        math::and::and(self, other)
    }

    fn identity(self: @Tensor<FP8x23W>) -> Tensor<FP8x23W> {
        core::identity(self)
    }