
Confusion about input #4

Open
BukuBukuChagma opened this issue Dec 5, 2023 · 1 comment

@BukuBukuChagma

If I got this right: if I need a colored mesh for, say, my own RGB image, I first need to convert it to a 3D mesh using some other available model, such as PIFuHD or EVA3D, and then input that mesh into this using the retexturing weights? Is this correct, or am I missing something?

@X-zhangyang
Collaborator

The inputs of Get3DHuman are shape and texture latent codes sampled from a Gaussian distribution.
The goal of Get3DHuman is to generate a 3D textured mesh, represented by implicit features and a fixed PIFu decoder; it is not conditioned on an image.
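In other words, generation is unconditional: rather than feeding an RGB image, you draw the two latent codes at random. A minimal sketch of that sampling step, assuming a placeholder latent dimension (the actual sizes are defined by Get3DHuman's configuration, not here):

```python
import numpy as np

# Hypothetical latent dimension -- a placeholder, not Get3DHuman's actual size.
LATENT_DIM = 512

rng = np.random.default_rng(seed=0)

# Sample the shape and texture latent codes independently
# from a standard Gaussian (mean 0, variance 1).
z_shape = rng.standard_normal(LATENT_DIM)
z_texture = rng.standard_normal(LATENT_DIM)

# These two codes would then be fed to the shape and texture
# branches of the generator, whose implicit features are decoded
# into a textured mesh by the fixed PIFu decoder.
print(z_shape.shape, z_texture.shape)
```

The point is that the model is a generative prior over 3D humans; reconstructing a specific person from a photo is a different task than the one this repository targets.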
