I hope this message finds you well. I've been studying your code. In dynamic-network-architectures/dynamic_network_architectures/building_blocks/unet_decoder.py, on line 53, both the kernel_size and the stride of the transposed convolution are set from the stride of the corresponding encoder stage. This differs from the common practice of setting the transposed convolution's kernel_size to match the kernel_size of the corresponding encoder stage. Was this design choice intentional, or is it perhaps an oversight?
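For concreteness, here is a minimal PyTorch sketch of the two variants I mean. The channel counts, kernel size, and stride below are made-up examples, not the values used in unet_decoder.py; they only illustrate why the choice of kernel_size changes whether the upsampling windows overlap.

```python
import torch
import torch.nn as nn

# Hypothetical values for one decoder stage; in the real code these come
# from the encoder configuration.
stride = 2                # stride of the matching encoder stage
encoder_kernel_size = 3   # conv kernel size of the matching encoder stage
in_ch, out_ch = 64, 32

# Variant used on line 53 (as I read it): kernel_size and stride are both the
# encoder stride, so every input pixel expands into a non-overlapping 2x2 patch.
up_stride_kernel = nn.ConvTranspose2d(in_ch, out_ch, kernel_size=stride, stride=stride)

# Alternative I am asking about: kernel_size follows the encoder kernel size.
# The windows now overlap, so padding/output_padding are needed to keep the
# output at exactly stride * input_size, since
#   out = (in - 1) * stride - 2 * padding + kernel_size + output_padding
up_encoder_kernel = nn.ConvTranspose2d(
    in_ch, out_ch, kernel_size=encoder_kernel_size, stride=stride,
    padding=1, output_padding=1,
)

x = torch.randn(1, in_ch, 32, 32)
print(up_stride_kernel(x).shape)    # torch.Size([1, 32, 64, 64])
print(up_encoder_kernel(x).shape)   # torch.Size([1, 32, 64, 64])
```

Both produce the same output resolution; the difference is only whether adjacent upsampling windows overlap, which is why I was curious about the reasoning behind the current choice.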
Thank you for your time and consideration. I look forward to your response.