AssertionError when running my model #18
Comments
Please post your model and code here so we can provide more help. Thanks. |
Maybe you can add |
My model is as follows: class MModel(nn.Module):
Adding the --conv_mode matrix flag seems to have helped with my previous problem, but there is a new error: File "/Users/alpha-beta-CROWN-main/complete_verifier/auto_LiRPA/optimized_bounds.py", line 1054, in init_slope |
I see you mentioned in a previous issue that only one max-pooling layer is currently supported. Could it be that the code is not working because my network structure has three max-pooling layers? |
Is it possible to change the maxpooling to avgpooling in your model?
Max-pooling will yield a looser bound relaxation since it is a non-linear layer.
|
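For anyone following along, here is a minimal sketch of that swap. The model, layer sizes, and input shape below are made up for illustration (the model in this thread isn't posted in full); the point is that replacing nn.MaxPool2d with nn.AvgPool2d of the same kernel size and stride keeps the output shape identical while turning the pooling layer into a linear operation.

```python
# Hypothetical example: swap MaxPool2d for AvgPool2d with the same kernel/stride.
# The output shape stays the same, but the pooling layer becomes linear and is
# therefore much easier to bound.
import torch
import torch.nn as nn

class SmallConvNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),
            nn.BatchNorm2d(16),
            nn.ReLU(),
            # nn.MaxPool2d(2),          # verification-unfriendly (non-linear relaxation)
            nn.AvgPool2d(2),            # drop-in replacement with the same output shape
        )
        self.classifier = nn.Linear(16 * 16 * 16, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

if __name__ == "__main__":
    out = SmallConvNet()(torch.randn(4, 1, 32, 32))
    print(out.shape)  # torch.Size([4, 10])
```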
You may also convert maxpool to relu: |
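The snippet that originally accompanied this suggestion isn't visible here, but one way to do the conversion rests on the identity max(a, b) = b + relu(a - b). The sketch below (an illustration, not the original code) expresses a 2x2 max-pooling using ReLU operations only:

```python
# Minimal sketch of max(a, b) = b + relu(a - b), used to express a 2x2
# max-pooling with ReLUs. Equivalent to nn.MaxPool2d(2) on even-sized inputs.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaxPool2x2AsReLU(nn.Module):
    @staticmethod
    def _pairwise_max(a, b):
        return b + F.relu(a - b)  # max(a, b)

    def forward(self, x):
        # Split each 2x2 window into its four corners.
        tl = x[..., 0::2, 0::2]
        tr = x[..., 0::2, 1::2]
        bl = x[..., 1::2, 0::2]
        br = x[..., 1::2, 1::2]
        top = self._pairwise_max(tl, tr)
        bot = self._pairwise_max(bl, br)
        return self._pairwise_max(top, bot)

# Quick equivalence check against the built-in max pooling.
x = torch.randn(2, 3, 8, 8)
assert torch.allclose(MaxPool2x2AsReLU()(x), nn.MaxPool2d(2)(x), atol=1e-6)
```

The network's function is unchanged; the non-linearity is simply exposed as ReLUs, which the verifier can handle with its standard (and optimizable) ReLU relaxation.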
I tried this approach, but a new problem emerged:
/alpha-beta-CROWN-main/complete_verifier/auto_LiRPA/bound_general.py:944: UserWarning: Creating an identity matrix with size 65536x65536 for node BoundBatchNormalization(name="/127"). This may indicate poor performance for bound computation. If you see this message on a small network please submit a bug report.
After the above message, the program gets stuck and is then killed. |
Thanks, I will try it. |
It is supported. However, max-pooling is not a verification-friendly layer and should be avoided when designing a verification-aware network architecture. Use average pooling if possible.
Generally, using
Please give us a complete stack trace, along with a complete program, the models, and instructions, so we can help you more. @shizhouxing It might be related to the part that creates the optimizable variables, which is related to what you are working on right now. We definitely need to clean up this part in the next release. |
Thanks for your reply.
and the config is:
The shape of the data in the dataset is:
The instruction is:
The complete stack trace is:
When use ... Besides, are there any papers explaining why max-pooling is not a verification-friendly layer? Thank you all for your help! |
Is there any news about this issue? I have a similar problem. |
@nbdyn @yusiyoh If you are still working on it, could you please send all the necessary files for me to test your models? For example, currently I don't have the
I don't think we have a paper for it. It was introduced in auto_LiRPA. Basically, for a convolution, an output neuron only depends on a patch of the input, not on every input element. Based on this property, the patches mode only tracks the necessary dependencies, which significantly reduces the memory cost.
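As a small, self-contained illustration of that property (this is not auto_LiRPA's internal code): each output neuron of a convolution is a dot product with a single k×k input patch, so the layer's dependency structure is extremely sparse, and only local dependencies need to be tracked.

```python
# Illustration only: a convolution computed patch-by-patch via unfold.
# Each output neuron uses only its own 3x3 input patch, which is the property
# the "patches" mode exploits instead of materializing dense matrices.
import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 8, 8)
w = torch.randn(4, 3, 3, 3)                      # 4 output channels, 3x3 kernel

dense = F.conv2d(x, w, padding=1)                # standard convolution

patches = F.unfold(x, kernel_size=3, padding=1)  # (1, 27, 64): one column per output position
per_patch = w.view(4, -1) @ patches              # each output neuron sees only its own patch
assert torch.allclose(dense, per_patch.view(1, 4, 8, 8), atol=1e-5)
```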
I am not aware of such a paper. But max pooling is nonlinear and it is hard to tightly bound it, while avg pooling is simply linear. |
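To make the contrast concrete (again just an illustration): average pooling is exactly a convolution with fixed constant weights, i.e. a purely linear map that can be propagated through bounds without any relaxation, whereas max pooling has no such linear form.

```python
# Sketch: average pooling is a depthwise convolution with constant 1/4 weights,
# so it is a purely linear layer; max pooling cannot be written this way.
import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 8, 8)
avg = F.avg_pool2d(x, kernel_size=2, stride=2)

w = torch.full((3, 1, 2, 2), 0.25)               # one constant 2x2 filter per channel
lin = F.conv2d(x, w, stride=2, groups=3)
assert torch.allclose(avg, lin, atol=1e-6)
```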
error:
in "patches.py" line 380, in inplace_unfold
assert len(kernel_size) == 2 and len(padding) == 4 and len(stride) == 2
I just added my model to model_defs.py, but it reports the error above.
My model contains Conv2d, BatchNorm2d, ReLU, MaxPool2d and Dropout. Besides, my dataset's shape is (6000, 1, 1024, 2).
What are the possible reasons for such an error? Could the large difference between the two spatial dimensions of the dataset (1024 vs. 2) be affecting this?
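Since the actual model_defs.py entry isn't posted in this thread, the following is only a hypothetical reconstruction using the layer types and the (1, 1024, 2) input shape described above; every layer size here is invented for illustration. With a spatial width of only 2, the kernel-size, stride, and padding tuples have to be chosen so they fit that narrow dimension.

```python
# Hypothetical reconstruction of a model with the described layer types and a
# (1, 1024, 2) input. Kernel/stride tuples are explicit so they match the very
# asymmetric spatial dimensions (height 1024, width 2).
import torch
import torch.nn as nn

class SignalNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=(3, 2), padding=(1, 0)),  # width 2 -> 1
            nn.BatchNorm2d(16),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=(2, 1)),                      # pool only along the long axis
            nn.Dropout(0.5),
        )
        self.classifier = nn.Linear(16 * 512 * 1, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

if __name__ == "__main__":
    print(SignalNet()(torch.randn(4, 1, 1024, 2)).shape)  # torch.Size([4, 10])
```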