
Question regarding the comparison with the DCGAN #31

Open
tonyyunyang opened this issue Oct 27, 2024 · 1 comment

Comments

@tonyyunyang

Hi,
I love your videos; I have literally watched every single second of them.
In the video you said that this discriminator can now handle inputs of any size. What does that mean? I was hoping you could elaborate on that a bit.

class Discriminator(nn.Module):

@explainingai-code (Owner)

Thank you so much for your support :)
A regular DCGAN discriminator maps an input of, say, 256x256 to a single scalar output, so in scenarios where you need to feed images of a different size (512x512) you would have to change the architecture, adding more layers to reach that same single scalar output for the larger input. A PatchGAN discriminator, by contrast, maps the input to a grid of predictions, one for each NxN patch. This means that even when you change the discriminator input from 256x256 to 512x512, you don't need to change anything in the architecture; the only difference is that the discriminator now generates predictions for four times as many patches as before.
That is what I was referring to in the video.
Hope it's clearer now.
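To make the size argument concrete, here is a minimal sketch of a fully-convolutional PatchGAN-style discriminator (the layer widths and strides are my own assumptions, loosely following the common 70x70 pix2pix configuration, not necessarily the exact architecture from the video). The same network accepts both 256x256 and 512x512 inputs; only the size of the output patch grid changes:

```python
# Hypothetical PatchGAN-style discriminator sketch (assumed layer sizes,
# in the spirit of the 70x70 pix2pix PatchGAN). Because it is fully
# convolutional, larger inputs simply yield a larger grid of patch logits.
import torch
import torch.nn as nn


class Discriminator(nn.Module):
    def __init__(self, in_channels=3):
        super().__init__()

        def block(c_in, c_out, stride):
            return [nn.Conv2d(c_in, c_out, kernel_size=4, stride=stride, padding=1),
                    nn.LeakyReLU(0.2, inplace=True)]

        self.model = nn.Sequential(
            *block(in_channels, 64, 2),   # 256 -> 128
            *block(64, 128, 2),           # 128 -> 64
            *block(128, 256, 2),          # 64  -> 32
            *block(256, 512, 1),          # 32  -> 31
            nn.Conv2d(512, 1, kernel_size=4, stride=1, padding=1),  # 31 -> 30
        )

    def forward(self, x):
        # Output is a grid of logits, one per receptive-field patch,
        # not a single scalar as in a DCGAN discriminator.
        return self.model(x)


disc = Discriminator()
out_256 = disc(torch.randn(1, 3, 256, 256))
out_512 = disc(torch.randn(1, 3, 512, 512))
print(out_256.shape)  # torch.Size([1, 1, 30, 30])
print(out_512.shape)  # torch.Size([1, 1, 62, 62])
```

With these assumed layers, the 512x512 input produces a 62x62 grid of patch predictions versus 30x30 for 256x256, i.e. roughly four times as many patches, with zero architectural changes.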
