If I got this right: if I need a colored mesh for, let's say, my own RGB image, I first need to convert it to a 3D mesh using some other available model such as PIFuHD or EVA3D, and then input that mesh into this using the retexturing weights? Is this correct, or am I missing something?
The inputs of Get3DHuman are shape and texture latent codes sampled from a Gaussian distribution.
The goal of Get3DHuman is to generate a 3D textured mesh, represented by implicit features and a fixed PIFu decoder, not to reconstruct a mesh from an image.
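To make the input side concrete, here is a minimal sketch of that sampling step. The latent dimension and variable names are illustrative assumptions, not the repo's actual API; check the project's config for the real sizes:

```python
import numpy as np

rng = np.random.default_rng(seed=0)
latent_dim = 512  # hypothetical dimension; the actual value comes from the model config

# Sample independent shape and texture latent codes from a standard Gaussian.
# These codes, not an RGB image, are what the generator consumes.
z_shape = rng.standard_normal(latent_dim)
z_texture = rng.standard_normal(latent_dim)

print(z_shape.shape, z_texture.shape)
```

The generator then decodes these two codes into implicit features, and the fixed PIFu decoder turns those features into the textured mesh.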