
AttributeError: 'float' object has no attribute 'astype' #522

Closed
malickKhurram opened this issue Feb 21, 2024 · 18 comments
Labels
bug Something isn't working

Comments

@malickKhurram
Summary

I have created an image classification model in PyTorch and trained it on unencrypted data. The model was then exported with ONNX, imported back, and checked.
Now, when the model is compiled with "compile_onnx_model", it gives the following error.

Description

  • versions affected:
  • python version: 3.9.13
  • config (optional: HW, OS):
  • workaround (optional): if you have a workaround for the issue
  • proposed fix (optional): if you have a fix for the issue

Step by step procedure someone should follow to trigger the bug:

minimal POC to trigger the bug

print("Minimal POC to reproduce the bug")

@malickKhurram malickKhurram added the bug Something isn't working label Feb 21, 2024
@jfrery
Collaborator

jfrery commented Feb 21, 2024

I suppose this is the same issue as in #516 (comment). So you are trying to compile the DenseNet and getting this error after you replaced the global average pooling with an average pooling? Could you share the changes you made to the DenseNet in your local file so that we can reproduce the exact error you are getting?

@malickKhurram
Author

I replaced the GlobalAveragePool line with
nn.AvgPool2d(kernel_size=7, stride=1, padding=0)

The PyTorch model trained successfully, and it was exported to ONNX; the issue above arises when compiling to Concrete.

@jfrery
Collaborator

jfrery commented Feb 22, 2024

Can you also share what's your input shape?

@malickKhurram
Author

malickKhurram commented Feb 22, 2024

The input is a 3 × 150 × 150 image.

@jfrery
Collaborator

jfrery commented Feb 22, 2024

If I put a (1, 3, 150, 150) input into densenet121, I get a tensor of shape (1, 1024, 4, 4) before this line:

out = F.adaptive_avg_pool2d(out, (1, 1))

But adaptive_avg_pool2d forces the output to be (1, 1024, 1, 1), so I am not sure how nn.AvgPool2d(kernel_size=7, stride=1, padding=0) would work on a 4 × 4 feature map.

Setting out = F.avg_pool2d(out, kernel_size=(4, 4)) should work in your context.
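The shape argument above can be checked in isolation. This is a minimal sketch (the feature map is random, standing in for the real DenseNet activations discussed above): on a 4 × 4 feature map, a fixed 4 × 4 average pool produces the same (1, 1024, 1, 1) output as the adaptive pooling it replaces.

```python
import torch
import torch.nn.functional as F

# Stand-in feature map matching the shapes discussed above: a
# (1, 3, 150, 150) input to densenet121 yields (1, 1024, 4, 4)
# features before the final pooling step.
out = torch.randn(1, 1024, 4, 4)

# Original line: adaptive pooling always reduces to (1, 1024, 1, 1).
adaptive = F.adaptive_avg_pool2d(out, (1, 1))

# Suggested fixed-size replacement: a 4x4 kernel matches this feature
# map exactly (a 7x7 kernel would not fit and raises an error).
fixed = F.avg_pool2d(out, kernel_size=(4, 4))

print(adaptive.shape, fixed.shape)  # both torch.Size([1, 1024, 1, 1])
```

Both calls average over the full 4 × 4 window here, so they are numerically identical for this input size.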

@andrei-stoian-zama
Collaborator

Thanks for the bug report.

It's hard to tell where the error comes from. The line you show uses NumPy functions, so it should return a numpy.float64 rather than a Python float. Could you print the values and types of stats.rmax, stats.rmin, options.n_bits, and self.offset just before that line?

Alternatively, could you give code that reproduces the issue?
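The type distinction asked about above can be demonstrated in isolation. This is an illustrative sketch only (the variable names are made up, not the actual quantizer.py internals): NumPy scalar types carry an .astype method, while plain Python floats do not, which is exactly what the reported AttributeError indicates.

```python
import numpy as np

# A NumPy scalar supports .astype; a plain Python float does not.
np_scale = np.float64(0.0078125)
py_scale = 0.0078125

print(type(np_scale).__name__, hasattr(np_scale, "astype"))  # float64 True
print(type(py_scale).__name__, hasattr(py_scale, "astype"))  # float False
```

So if the value reaching the astype call is a Python float, something upstream converted it out of NumPy's scalar types.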

@malickKhurram
Author

IMG_20240226_112706
My bad, I had used the same changes; please see the image.
But the error above is still there.
At line 527 of quantizer.py in Concrete ML, it tries to call astype on a float, which throws this error.

Your update on this is highly appreciated.

@andrei-stoian-zama
Collaborator

Sorry, but that's not enough to help you debug this. Could you modify the code of quantizer.py as described in the previous comment and share the output?

@malickKhurram
Author

Hi, thank you for your response.
Please see the attached values and types.
I also commented out the astype cast and printed self.scale.

  1. It seems that the error is due to casting a float into a float again, and a Python float has no astype method.
  2. When I comment out this cast and run again, it now throws an error on line 776.
  3. Both screenshots are attached.
    IMG_20240227_100945
    Uploading IMG_20240227_100757.jpg…



@jfrery
Collaborator

jfrery commented Feb 28, 2024

Would it be possible for you to set up a GitHub repo with the code? It's not easy for us to reproduce from screenshots.

That being said, I see here that it's trying to quantize to 1024 bits, which is certainly going to be a problem. Are you setting n_bits to 1024?

@andrei-stoian-zama
Collaborator

The float value that is cast should never be a Python float but a numpy.float64. Something is causing it to have the wrong datatype.

Please use n_bits=8, which should give a typical scale value instead of 10E-309.
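A back-of-the-envelope check shows why n_bits=1024 produces such a tiny scale. This is an approximation, not the exact Concrete ML formula: a uniform quantizer's scale shrinks roughly as range / 2**n_bits, so 1024 bits drives it into the 1e-308 subnormal range, while 8 bits gives a sane value.

```python
import numpy as np

# Approximate uniform-quantization scale: range / 2**n_bits.
rmax, rmin = np.float64(1.0), np.float64(-1.0)
for n_bits in (8, 1024):
    scale = (rmax - rmin) * 2.0 ** (-n_bits)
    print(n_bits, scale)  # 8 -> 0.0078125; 1024 -> ~1.1e-308
```

Values that small sit at the edge of float64's representable range, which is consistent with the ~10E-309 scale seen in the screenshots.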

@malickKhurram
Author

Hi @andrei-stoian-zama, I really appreciate your help. I have added the code to a GitHub repo; here is the link:
https://github.com/malickKhurram/COVID-19-Detection

  1. Please use this notebook; links to the datasets used are given inside the notebook.
  2. I have exported the model using ONNX.
  3. I skipped the compile_onnx_model code. You can set your bits and other parameters and compile this model.
    I really need to make this model Concrete ML compatible; your help in this regard is appreciated.

@andrei-stoian-zama
Collaborator

One first observation: the compile_onnx_model code that you show uses an input set of 5000 examples of shape (1024,), but the network expects (1, 150, 150). Are you sure the model in the notebook is the right one?

Furthermore, doing:

import numpy as np
from concrete.ml.torch.compile import compile_onnx_model

inputset = np.random.uniform(-100, 100, (50, 1024))

compile_onnx_model(onnx_model, inputset, n_bits=8)

raises the GlobalAveragePool error. Can you show how you modified your model to replace it with a standard average pool?

I tried exporting only the model.features submodel like this:

import numpy as np
import torch
import onnx

# Input to the model
x = torch.randn(1, 3, 150, 150, requires_grad=True)
torch.onnx.export(model.features, x, "my_image_classifier.onnx", export_params=True)
onnx_model = onnx.load("my_image_classifier.onnx")
onnx.checker.check_model(onnx_model)

from concrete.ml.torch.compile import compile_onnx_model

inputset = np.random.uniform(-100, 100, (50, 3, 150, 150))

compile_onnx_model(onnx_model, inputset, n_bits=8)

and it raises AssertionError: All inputs must have the same scale and zero_point to be concatenated., which is normal, as Concrete ML does not yet support concatenation in post-training quantization import (compile_torch/compile_onnx).

Can you please provide some minimal code that reproduces the float object error?
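The concatenation assertion above can be motivated with a toy example. This is an illustrative sketch only (quantize() is a made-up helper, not the Concrete ML API): two branches holding the same real values get different integer codes when their scales differ, so they cannot be concatenated without requantization.

```python
import numpy as np

def quantize(x, scale, zero_point=0):
    # Toy uniform quantizer: map real values to integer codes.
    return np.round(x / scale).astype(np.int64) + zero_point

# Identical real values, but quantized with different scales.
a = quantize(np.array([0.5, 1.0]), scale=0.1)
b = quantize(np.array([0.5, 1.0]), scale=0.2)
print(a, b)  # different integer codes for the same real values
```

Concatenating a and b directly would mix incompatible integer encodings, which is why post-training quantization requires matching scale and zero_point on every Concat input.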

@malickKhurram
Author

Thank you for your help.
I am no longer getting the float error after using the exact same parameters you used.
I have two questions and would appreciate your help.

  1. Is there a fix or workaround for this AssertionError?
  2. Could you please share an image-classification model that I can compile with Concrete ML?

@malickKhurram
Author

@andrei-stoian-zama
@jfrery
Can you please advise on the thread above?

@andrei-stoian-zama
Collaborator

I will close this thread, as there is already one about the missing GlobalAveragePool.

For image classification please see the cifar example: https://github.com/zama-ai/concrete-ml/tree/main/use_case_examples/cifar/cifar_brevitas_finetuning
