Mesh conditioning instead of text conditioning #77
Do you mean taking a mesh and encoding it into a vector embedding, which you could then refine to create different versions of it? It's possible. I don't think the author will do it since he has moved on, but it should be doable with the current lib: the text conditioner is just a class wrapping the text embedding model. The transformer never sees the actual "text", only the embedding vector, so in theory it's an easy replacement. However, training a model that produces a good embedding of a mesh is another matter, and I'm not 100% sure how to even think about that.
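To make that concrete, here is a minimal sketch of the kind of drop-in module this would require. The class and argument names below are hypothetical, not part of the library's actual API; the only point is that the transformer consumes a conditioning embedding vector, so any module that maps a mesh to such a vector could stand in for the text embedding model.

```python
import torch
from torch import nn

class MeshConditioner(nn.Module):
    """Hypothetical replacement for the text conditioner: wraps some mesh
    embedding model and hands the transformer a single conditioning vector,
    the same way the text conditioner hands it a text embedding."""

    def __init__(self, mesh_embed_model, embed_dim, cond_dim):
        super().__init__()
        self.mesh_embed_model = mesh_embed_model   # any model producing one embedding per mesh
        self.to_cond = nn.Linear(embed_dim, cond_dim)

    def forward(self, vertices, faces):
        embed = self.mesh_embed_model(vertices, faces)   # (batch, embed_dim), assumed per-mesh
        return self.to_cond(embed)                       # (batch, cond_dim) conditioning vector
```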
Yes, is the current autoencoder a good fit for creating mesh embeddings for this purpose?
Kind of: it will encode the mesh into a list of tokens/codes. You could then build some kind of vector from those.
Is the face_embed_output that it produces not suitable for this?
The encoder will output F×192, meaning it creates an embedding for each triangle rather than for the entire mesh. So no. :(
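One possible (untested) workaround would be to pool those per-face features into a single mesh-level vector, for example by mean pooling over the face dimension. Whether such a pooled vector carries enough information to condition generation is an open question; the helper below is only a sketch, and the mask argument is an assumption about how padded faces might be handled.

```python
import torch

def pool_face_embeddings(face_embeds, face_mask = None):
    """Collapse per-face features of shape (batch, num_faces, 192) into a
    single (batch, 192) mesh embedding via (masked) mean pooling."""
    if face_mask is None:
        return face_embeds.mean(dim = 1)

    mask = face_mask.unsqueeze(-1).float()        # (batch, num_faces, 1)
    summed = (face_embeds * mask).sum(dim = 1)    # ignore padded faces
    count = mask.sum(dim = 1).clamp(min = 1.)     # avoid division by zero
    return summed / count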
What I would recommend is to just encode both the prompt and response meshes and place a separator token between them; this will require some work to handle the special separator token (see the sketch below).
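A rough sketch of what that sequence construction could look like is below. The codebook size and the reserved separator id are made-up values for illustration; in practice they depend on how the autoencoder was configured, and the transformer's vocabulary and loss handling would need to be extended to accommodate the extra token.

```python
import torch

# Illustrative values only: the real codebook size and any reserved ids
# depend on the autoencoder configuration.
CODEBOOK_SIZE = 16384
SEP_TOKEN_ID = CODEBOOK_SIZE   # reserve one id past the codebook for the separator

def build_prompt_response_sequence(prompt_codes, response_codes):
    """Concatenate prompt-mesh codes, a separator token, and response-mesh
    codes into one training sequence: [prompt ..., SEP, response ...]."""
    sep = torch.tensor([SEP_TOKEN_ID], dtype = prompt_codes.dtype,
                       device = prompt_codes.device)
    return torch.cat([prompt_codes, sep, response_codes], dim = 0)
```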
I was wondering if this was discussed before. The idea is to condition on existing meshes rather than text. This would be particularly useful in training it to retopologize existing meshes.