Issue by a user (@c-gamble), copied here:
Hello! I'm working with a team on closing this bounty, and we're using PyTorch's VGG network as our style transfer solution. We have successfully quantized the network with pretrained weights using PyTorch's native quantization support, and we intend to perform inference using an FHE client to demonstrate our progress. We are taking inspiration from the image filter example provided here.
We are running into the issue, however, that the `compile_torch_model` function (a member of `concrete-ml`) either throws an assertion error or times out.

We define our VGG model using a helper function:
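The reporter's actual helper is not reproduced in this copy; the following is only a minimal sketch of what it might look like, assuming a torchvision VGG-16 with pretrained weights and PyTorch's eager-mode post-training static quantization. The function name `get_quantized_vgg`, the choice of `model.features`, and the qconfig are illustrative assumptions, not the reporter's code.

```python
import torch
import torchvision


def get_quantized_vgg():
    """Hypothetical sketch: build a pretrained VGG-16 and statically quantize it."""
    # Start from the pretrained feature extractor commonly used for style transfer.
    model = torchvision.models.vgg16(weights=torchvision.models.VGG16_Weights.DEFAULT)
    model.eval()

    # Wrap with quant/dequant stubs and prepare for eager-mode static quantization.
    quantized = torch.ao.quantization.QuantWrapper(model.features)
    quantized.qconfig = torch.ao.quantization.get_default_qconfig("fbgemm")
    torch.ao.quantization.prepare(quantized, inplace=True)

    # Calibrate on a representative input, then convert the observers to int8 ops.
    with torch.no_grad():
        quantized(torch.randn(1, 3, 224, 224))
    torch.ao.quantization.convert(quantized, inplace=True)
    return quantized
```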
And we define our calibration input as follows:
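The calibration input itself is also omitted from this copy; as a sketch, `compile_torch_model` is typically given a small batch of representative samples, for example a NumPy array whose first dimension indexes the calibration samples (the shape and batch size below are assumptions):

```python
import numpy

# Hypothetical calibration set: a small batch of random images in the
# shape the VGG network expects (N, C, H, W).
calibration_input = numpy.random.uniform(
    0, 1, size=(10, 3, 224, 224)
).astype(numpy.float32)
```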
To make the compilation fail, we run `python generate_dev_files.py` with the model and inputs initialized as shown above. The error we receive is the assertion error mentioned above. However, to make the script time out, we can instead change either or both of the model/inputs to floats using a `.float()` invocation after initialization.

Any guidance would be greatly appreciated!
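For context, a script like `generate_dev_files.py` in the image filter example compiles the model and saves the client/server artifacts roughly as sketched below. The `n_bits` value and output directory are illustrative assumptions, not the reporter's actual settings:

```python
from concrete.ml.torch.compile import compile_torch_model
from concrete.ml.deployment import FHEModelDev

# Compile the torch model to an FHE-ready quantized module, using the
# calibration batch defined above to calibrate concrete-ml's quantization.
quantized_module = compile_torch_model(
    model,              # the VGG model from the helper above
    calibration_input,  # representative calibration samples
    n_bits=8,
)

# Save the client/server development files used for FHE inference,
# as in the image filter deployment example.
FHEModelDev("./dev_files", quantized_module).save()
```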