docs: update operator list in torch support's documentation section #486
Conversation
- [`torch.nn.BatchNorm2d`](https://pytorch.org/docs/stable/generated/torch.nn.BatchNorm2d.html)
- [`torch.nn.BatchNorm3d`](https://pytorch.org/docs/stable/generated/torch.nn.BatchNorm3d.html)
- [`torch.erf, torch.special.erf`](https://pytorch.org/docs/stable/special.html#torch.special.erf)
I have some doubts about putting these two here (erf + pad), but I think it's OK.
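For reference, `torch.erf` / `torch.special.erf` compute the Gauss error function elementwise; the Python standard library's `math.erf` is the scalar equivalent, which makes the operator easy to sanity-check. A small illustrative sketch, not part of the docs change itself:

```python
import math

# erf is the Gauss error function: erf(x) = 2/sqrt(pi) * integral_0^x exp(-t^2) dt.
# torch.erf applies it elementwise to a tensor; math.erf is the scalar version.
values = [-2.0, 0.0, 1.0]
results = [math.erf(v) for v in values]

print(results)  # erf(0) == 0.0, erf is odd, erf(1) ≈ 0.8427
```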
- [`torch.ge, torch.greater_equal`](https://pytorch.org/docs/stable/generated/torch.ge.html)
- [`torch.lt, torch.less`](https://pytorch.org/docs/stable/generated/torch.lt.html)
- [`torch.le, torch.less_equal`](https://pytorch.org/docs/stable/generated/torch.le.html)
- [`torch.eq`](https://pytorch.org/docs/stable/generated/torch.eq.html)
Side note: `torch.eq` is not the same as `torch.equal`. I think we support `torch.eq`, but that may need to be confirmed.
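To make the distinction concrete: `torch.eq` is an elementwise comparison that returns a boolean tensor, whereas `torch.equal` returns a single `bool` that is `True` only when both tensors have the same shape and elements. NumPy has the same split (`np.equal` vs `np.array_equal`), which this small sketch uses to illustrate the semantics (an illustration of the API difference, not the Concrete ML code path):

```python
import numpy as np

a = np.array([1, 2, 3])
b = np.array([1, 0, 3])

# Elementwise comparison, like torch.eq: one boolean per element.
elementwise = np.equal(a, b)
print(elementwise)  # [ True False  True]

# Whole-array comparison, like torch.equal: a single bool.
print(np.array_equal(a, b))  # False (one element differs)
print(np.array_equal(a, a))  # True
```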
- `brevitas.nn.QuantLinear`
- `brevitas.nn.QuantConv1d`
I have a doubt here: is supporting `torch.conv1d` enough to support `brevitas.nn.QuantConv1d`?
I think so yeah
I think we support all Brevitas operators that are equivalent to those already supported in Torch.
So the first sentence seems more appropriate.
Should we add `qnn.QuantLinear`, `qnn.QuantIdentity` and `qnn.QuantReLU` to this list?
`qnn.QuantLinear` is already in this list, and `qnn.QuantIdentity` is below in the "Quantizers" sub-section. As for others like activation functions, @andrei-stoian-zama pointed out to me that we should not suggest users use them, because:
- it's less certain that we indeed support them
- we most likely never need them anyway: activation functions don't require quantization (they are done with a TLU), so we can just use `torch.relu`

So I don't think there's much to add here!
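The TLU point can be illustrated with a tiny sketch: in FHE, a univariate activation like ReLU is evaluated through a table lookup over the possible quantized input values, so the activation itself never needs a quantized (Brevitas) counterpart. This is a hand-written illustration of the idea, not Concrete ML's actual implementation:

```python
# Sketch: evaluating ReLU through a precomputed table lookup (TLU)
# over quantized values. Assumes 4-bit signed inputs in [-8, 7];
# the bit-width and range are illustrative choices, not Concrete ML defaults.

BITS = 4
LOW, HIGH = -(2 ** (BITS - 1)), 2 ** (BITS - 1) - 1  # [-8, 7]

# Build the table once: one entry per possible quantized input value.
relu_table = {q: max(q, 0) for q in range(LOW, HIGH + 1)}

def relu_tlu(q: int) -> int:
    """Apply ReLU by looking the value up in the precomputed table."""
    return relu_table[q]

print([relu_tlu(q) for q in (-8, -1, 0, 3, 7)])  # [0, 0, 0, 3, 7]
```

Since any univariate function over a bounded integer domain can be tabulated this way, plain `torch.relu` in the model is enough; no quantized activation module is needed.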
A user pointed out that this list was not up to date. It turns out we have indeed added/changed quite a few things since we last updated this section.
closes https://github.com/zama-ai/concrete-ml-internal/issues/4268