inversion doesn't look like the face of img source #2
Hi @molo32,

It seems like you ran our encoder correctly. Generally speaking, our pretrained e4e encoder is specifically designed to balance the tradeoffs inherent to StyleGAN's latent space (see our paper for further details and examples). If exact reconstruction is what you seek, direct optimization will always yield the best results; alternatively, you can control the tradeoff yourself according to your needs.
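To make "direct optimization" concrete: it means iteratively adjusting a latent code so that the (frozen) generator's output matches the target image, rather than predicting the code with an encoder. Below is a minimal NumPy sketch of that loop; the linear map `G` is a stand-in for StyleGAN's synthesis network, and the plain L2 loss stands in for the perceptual-plus-pixel losses used in practice. All names here are illustrative, not this repository's API.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pretrained, frozen generator: a fixed linear map
# from a 16-dim latent to a 64-dim "image". Real projectors use
# StyleGAN2's synthesis network here.
G = rng.normal(size=(64, 16))
target = rng.normal(size=64)      # the "image" we want to invert

def generate(w):
    return G @ w

def loss(w):
    # Distortion term only; real projectors typically add a perceptual
    # loss (e.g. LPIPS) and a regularizer pulling w toward the mean.
    return np.sum((generate(w) - target) ** 2)

w = np.zeros(16)                  # start from the latent mean (here: 0)
lr = 1e-3
for step in range(500):
    # Analytic gradient of the L2 loss w.r.t. the latent code.
    grad = 2.0 * G.T @ (generate(w) - target)
    w -= lr * grad

print(f"final reconstruction loss: {loss(w):.4f}")
```

With a nonlinear generator the gradient comes from autodiff instead of the closed form above, but the structure of the loop is the same: only the latent code is updated, never the generator's weights.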
Relevant: rolux/stylegan2encoder#2 (comment) (posted in January 2020)
That is the kind of idea that you find in the paper: a good inversion is the result of a trade-off between i) perception (visual quality in terms of a realistic output), ii) distortion (visual quality in terms of an output close to the input), and iii) editability (semantics). If you look at the projected face of Angelina Jolie, you can see that it looks like a human face (perception), it somewhat looks like Angelina Jolie (distortion), and it should hopefully change according to plan if you try to edit it (editability). Closely related: if you want an idea of what to expect from projections as implemented, you can check the results shown in the README of my repository: https://github.com/woctezuma/stylegan2-projecting-images Basically, the more constrained the projection, the higher the distortion, but the better the output behaves.
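To see why a more constrained projection raises distortion, compare the best achievable reconstruction when the latent code is restricted to a subspace against the full latent space (loosely analogous to projecting in W rather than the larger W+ space). This is a toy NumPy illustration, not the repository's code; `G` is a hypothetical stand-in for the generator.

```python
import numpy as np

rng = np.random.default_rng(1)
G = rng.normal(size=(64, 16))    # stand-in generator: latent -> "image"
target = rng.normal(size=64)

def best_residual(cols):
    # Best L2 reconstruction when only the given latent dimensions
    # may vary; the rest are pinned at zero.
    cols = list(cols)
    w, *_ = np.linalg.lstsq(G[:, cols], target, rcond=None)
    return np.sum((G[:, cols] @ w - target) ** 2)

constrained = best_residual(range(4))      # only 4 latent dims allowed
unconstrained = best_residual(range(16))   # all 16 latent dims allowed
print(constrained >= unconstrained)        # more constraint -> more distortion
```

The constrained residual can never be smaller, because the constrained search space is a subset of the full one; the flip side, which this toy cannot show, is that staying in the better-behaved subspace tends to preserve editability.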
The inversion doesn't look like the face of the source image. How can I make it look more like the source image?