Is your feature request related to a problem? Please describe.
Many models are emerging, from several directions, that let users generate 3D meshes unconditionally, from text guidance, or from an image prior. These projects are harder to coordinate on because they are not well represented in Hugging Face's model hub or Inference API, and that affects downstream work such as Microsoft's MII inference pipeline, which is tightly integrated with Hugging Face.
The goal of this feature request is to look ahead and consider adding 3D mesh generation as a standard task type.
Describe the solution you'd like
Add support for 3D mesh responses. This is similar to image responses, but in some formats the mesh and its texture are stored as separate files, which needs to be accounted for. Some meshes may also have multiple parts or textures, although in practice no model has produced such output yet.
The popular formats are the following:
- .obj model with a .mtl material description and a .png texture
- .fbx model with the texture embedded
- .glb (binary glTF) model with the texture embedded
- Raw NumPy array stored as an .npy or .npz file
- ZIP file containing custom data or another format
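To illustrate why the mesh/material split matters for an API response, here is a minimal, stdlib-only sketch of parsing a Wavefront .obj payload (a real pipeline would use a library such as trimesh; the parser below is a hypothetical illustration, not a proposed implementation). Note how the geometry file itself references a separate .mtl file, so a single "mesh" response may need to carry multiple files:

```python
def parse_obj(text):
    """Parse a minimal Wavefront .obj: vertex positions, faces, and the
    name of the external .mtl file it references (the texture lives
    behind that material file, not in the .obj itself)."""
    vertices, faces, mtllib = [], [], None
    for line in text.splitlines():
        parts = line.split()
        if not parts:
            continue
        if parts[0] == "v":         # vertex position: v x y z
            vertices.append(tuple(float(x) for x in parts[1:4]))
        elif parts[0] == "f":       # face: 1-indexed refs, possibly v/vt/vn
            faces.append(tuple(int(p.split("/")[0]) - 1 for p in parts[1:]))
        elif parts[0] == "mtllib":  # external material file -> extra payload
            mtllib = parts[1]
    return vertices, faces, mtllib

# A single triangle referencing an external material file.
OBJ = """\
mtllib model.mtl
v 0 0 0
v 1 0 0
v 0 1 0
f 1 2 3
"""
verts, faces, mtl = parse_obj(OBJ)
print(len(verts), faces[0], mtl)  # 3 (0, 1, 2) model.mtl
```

The `mtllib` reference is the crux: unlike a self-contained .glb, an .obj response is incomplete without the sibling .mtl and texture files, which is why the task type would need a multi-file or archive-based response shape.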
Examples of models in this space:
- Img2Mesh: https://github.com/monniert/unicorn
- Text2Mesh: https://github.com/ashawkey/stable-dreamfusion
- Unconditional mesh generation: https://nv-tlabs.github.io/GET3D
- Text-guided animation with motion diffusion: https://github.com/GuyTevet/motion-diffusion-model