
Regarding the setting of transposed convolution in unet_decoder.py. #2

Open
waxybywmyyfbk opened this issue Sep 4, 2023 · 1 comment

@waxybywmyyfbk

I hope this message finds you well. I've been studying your code. In dynamic-network-architectures/dynamic_network_architectures/building_blocks/unet_decoder.py, on line 53, I noticed that both the kernel_size and stride of the transposed convolution are set to the stride of the corresponding encoder stage. This differs from the common practice of setting the transposed convolution's kernel_size to match the kernel_size of the corresponding encoder stage. Was this design choice intentional, or perhaps an oversight?

Thank you for your time and consideration. I look forward to your response.

@FabianIsensee
Member

See this: https://distill.pub/2016/deconv-checkerboard/

And this video which explains it nicely: https://www.youtube.com/watch?v=ilkSwsggSNM

It probably doesn't impact things a lot in practice, but having non-overlapping kernels seemed nicer to me.
Best,
Fabian
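The overlap argument behind this choice can be illustrated with a small sketch (an assumption for illustration, not code from the repository): for a 1-D transposed convolution, count how many kernel windows touch each output position. With kernel_size == stride, as in unet_decoder.py, every output position is covered exactly once; with kernel_size > stride, the coverage alternates, which is the uneven pattern that produces checkerboard artifacts.

```python
# Sketch (hypothetical helper, not from dynamic-network-architectures):
# count how many 1-D transposed-convolution kernel windows cover each
# output position for a given kernel_size and stride.
def coverage(kernel_size, stride, n_in=6):
    n_out = (n_in - 1) * stride + kernel_size
    counts = [0] * n_out
    for i in range(n_in):               # each input element paints one window
        for k in range(kernel_size):
            counts[i * stride + k] += 1
    return counts

# kernel_size == stride (the unet_decoder.py choice): uniform coverage
print(coverage(kernel_size=2, stride=2))
# → [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]

# kernel_size > stride (matching the encoder kernel_size instead):
# interior positions alternate between 1 and 2 contributions
print(coverage(kernel_size=3, stride=2))
# → [1, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 1]
```

The uniform row is why non-overlapping kernels cannot introduce checkerboard patterns from the upsampling itself, as discussed in the distill.pub article linked above.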
