one-level network for 64x64 image size datasets #2
I can put something together, but we ran experiments with this and found it didn't perform nearly as well as 2-level. Do you want it anyway?
Yes, that would be much appreciated. I'd like to compare performance on a few datasets under several conditions, especially CelebA, which I can already successfully load in as JPG files. Thanks
It doesn't have to run with this codebase if you have another that works already.
Just updated the code with a model that should work ('64px_big_onelevel'), but I haven't actually tested it. Let me know how it goes if you do.
Thanks Ishaan! Likely citation incoming ;) I will give this a shot and let you know if it runs / how it performs. I don't know if you are aware, but this architecture seems quite SOTA for smallish datasets with manageable variation (more complex than MNIST, but less than CIFAR, where the data-to-complexity ratio seems off to me), outperforming even the best new GANs. That doesn't really come across in the paper.
Interesting! SOTA in what sense, sample quality? I'd say LSUN bedrooms fits the description you mentioned (plenty of data, not terribly complex images) and we tried to demonstrate pretty good results there I think.
Sample quality, but also a lower probability of artifacts that give away that the images aren't "real". Agreed on LSUN, and you clearly tried a larger range of datasets than most papers do. I can show you some examples soon enough.
Ok, it looks like training works with the new script, thanks! The sampling code does seem different though. The samples are int32 (isn't this wrong?), and it samples 8 points and then adds variability to those?
Actually, the int32 is for the output images, it seems, so that's fine. However, I'm still not sure where the variability in the rows of the output image is coming from.
Can you point to the sampling code you're talking about? The code that gets executed should be lines 855-902. |
Yes, that's right. When this runs, I get 8 samples (rows), and then 8 variations (columns) on those samples. I'd like sampling behavior like the two-level network with the
Any chance you can add this?
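For clarity, here is a rough sketch of the sampling behavior I'm describing (one latent code fixed per row, a fresh draw from the decoder per column). `decode` is just a hypothetical stand-in for the model's pixel-level decoder, not the actual PixelVAE API:

```python
import numpy as np

rng = np.random.default_rng(0)

def decode(z, rng):
    # Placeholder: a real decoder would sample pixels autoregressively
    # conditioned on z; here we just add noise to illustrate the idea.
    return z + rng.normal(scale=0.1, size=z.shape)

n_rows, n_cols, latent_dim = 8, 8, 16
zs = rng.normal(size=(n_rows, latent_dim))   # one latent code per row
grid = np.stack([[decode(zs[i], rng) for _ in range(n_cols)]
                 for i in range(n_rows)])    # shape (8, 8, latent_dim)
```

So the per-row variation comes entirely from the decoder's own sampling noise, while the row identity is set by the shared latent.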