[converter] support new PyTorch operators #123
Comments
The `aten::len` op is not supported when I try to convert my ResNet model to TFLite.
Could you please tell me what the corresponding op is in PyTorch (e.g. Update:
You are correct, I was passing the model in scripting mode. Thanks for the help.
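For readers who hit the same error: a minimal sketch of why the mode matters. The function below is illustrative, not from the thread. Scripting keeps `len()` over a list symbolic in the graph as `aten::len`, while tracing runs the code eagerly and folds the length into a constant, so the unsupported op never appears.

```python
import torch

def scale_by_parts(x: torch.Tensor) -> torch.Tensor:
    parts = torch.unbind(x, 0)    # a list of slices along dim 0
    return parts[0] * len(parts)  # len() over a list

# Scripting keeps len() symbolic; tracing folds it into a constant.
scripted = torch.jit.script(scale_by_parts)
traced = torch.jit.trace(scale_by_parts, torch.randn(4, 2))

assert 'aten::len' in str(scripted.graph)
assert 'aten::len' not in str(traced.graph)
```

This is why converters that trace the model (rather than script it) often avoid `aten::len` entirely.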
Is there any plan to support `quantized::instance_norm`?
Could you please give me an example of a TFLite model with a quantized InstanceNorm layer? Once it is supported in TFLite, we will look into it.
@peterjc123 You can use the following code as a test case. Many thanks!

```python
import torch
from torch import nn

from tinynn.graph.quantization.quantizer import QATQuantizer
from tinynn.graph.tracer import model_tracer
from tinynn.converter import TFLiteConverter


class ChannelWiseConv(nn.Module):
    def __init__(self, in_channel, out_channel, padding=0, stride=1, dilation=1, bias=True, norm_layer='BN'):
        super().__init__()
        self.dwconv_quant = torch.quantization.QuantStub()
        self.dwconv_dequant = torch.quantization.DeQuantStub()
        self.pwconv_quant = torch.quantization.QuantStub()
        self.pwconv_dequant = torch.quantization.DeQuantStub()
        self.dwconv = nn.Conv2d(
            in_channel,
            in_channel,
            kernel_size=3,
            padding=padding,
            stride=stride,
            dilation=dilation,
            groups=in_channel,
            bias=bias
        )
        self.pwconv = nn.Conv2d(
            in_channel,
            out_channel,
            kernel_size=1,
            bias=bias
        )
        self.norm_layer = norm_layer
        self.bn = nn.BatchNorm2d(out_channel)
        self.instance_norm = nn.InstanceNorm2d(out_channel)
        self.relu = nn.ReLU()

    def forward(self, x):
        x = self.dwconv_quant(x)
        x = self.dwconv(x)
        x = self.dwconv_dequant(x)
        x = self.pwconv_quant(x)
        x = self.pwconv(x)
        if self.norm_layer == 'BN':
            x = self.bn(x)
        elif self.norm_layer == 'IN':
            x = self.instance_norm(x)
        x = self.relu(x)
        x = self.pwconv_dequant(x)
        return x


class Model(nn.Module):
    def __init__(self):
        super().__init__()
        self.convs1 = ChannelWiseConv(3, 256, norm_layer='IN')
        self.convs2 = ChannelWiseConv(256, 256, norm_layer='IN')
        self.convs3 = ChannelWiseConv(256, 1, norm_layer='IN')

    def forward(self, x):
        x = self.convs1(x)
        x = self.convs2(x)
        x = self.convs3(x)
        return x


if __name__ == '__main__':
    model = Model()
    quantizer = QATQuantizer(
        model,
        torch.randn(1, 3, 160, 256),
        work_dir='out',
        config={'rewrite_graph': False}
    )
    quantizer.quantize()

    model.eval()
    model.cpu()
    model = quantizer.convert(model)

    torch.backends.quantized.engine = 'qnnpack'
    converter = TFLiteConverter(model, torch.randn(1, 3, 160, 256), tflite_path='instance_norm.tflite')
    converter.convert()
```
I mean that if there isn't quantized InstanceNorm support in TFLite, then there is not much we can do here. Your code only generates a quantized PyTorch model with InstanceNorm, which is not what I actually need. In short, we don't currently support it because we don't know how to achieve it. We can only take action once the solution is clear (which operators to use and how the graph is organized). The only thing we can do now is to wrap it with Quantize and Dequantize nodes during translation, which I guess is not what you want, right?
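To make the suggested fallback concrete: the maintainer's dequantize/requantize wrapping happens inside the TFLite graph during translation, but the same idea can be sketched in PyTorch terms. The module below is a hypothetical illustration, not TinyNeuralNetwork's implementation: the InstanceNorm runs in float between a DeQuantStub and a QuantStub, so everything around it can stay quantized.

```python
import torch
from torch import nn


class DequantInstanceNormQuant(nn.Module):
    """Hypothetical fallback: run InstanceNorm in float between (de)quantize nodes."""

    def __init__(self, num_features: int):
        super().__init__()
        self.dequant = torch.quantization.DeQuantStub()
        self.norm = nn.InstanceNorm2d(num_features)
        self.quant = torch.quantization.QuantStub()

    def forward(self, x):
        x = self.dequant(x)   # leave the quantized domain
        x = self.norm(x)      # float InstanceNorm
        return self.quant(x)  # re-enter the quantized domain for downstream ops

# Before prepare/convert, the stubs act as identity, so the module runs as plain float.
m = DequantInstanceNormQuant(4)
y = m(torch.randn(2, 4, 8, 8))
```

The cost of this pattern is a quantize/dequantize round trip per wrapped layer, which is exactly why it may not be what a fully-quantized deployment wants.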
Is there any chance to implement torch
Please create a new issue for that.
Below are the PyTorch operators that are yet to be supported.
Unclassified (New)
N/A
Primitives (Python operators)
Very easy (Constant generation or aliasing)
Easy (Direct mapping)
Medium (Composite of multiple operators)
Hard (No mapping or the mapping is too complex)
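As an illustration of the "Medium (Composite of multiple operators)" category above: an op with no direct TFLite counterpart can sometimes be lowered to a composite of ops that do map directly. The snippet below sketches InstanceNorm (relevant to this thread) built from mean, variance, subtract, and divide; it is an explanatory example, not the converter's actual lowering.

```python
import torch

def instance_norm_composite(x: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    # Per-sample, per-channel statistics over the spatial dims (N, C, H, W input).
    mean = x.mean(dim=(2, 3), keepdim=True)
    var = x.var(dim=(2, 3), keepdim=True, unbiased=False)
    return (x - mean) / torch.sqrt(var + eps)

x = torch.randn(1, 3, 8, 8)
ref = torch.nn.functional.instance_norm(x)
assert torch.allclose(instance_norm_composite(x), ref, atol=1e-5)
```

Each of these primitives has a straightforward TFLite mapping, which is what makes such ops "medium" rather than "hard"; the quantized variant remains hard because the intermediate statistics do not quantize cleanly.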