How to transfer the coordinate system to Blender's coordinate system #53
I saw the original scaling script, which performs the following operation:

```python
import trimesh

def normalize_scene(mesh):
    bounds = mesh.bounding_box.bounds
    bounds_min = bounds[0]
    bounds_max = bounds[1]
    # Scale so the longest side of the bounding box becomes 0.9,
    # and center the box at the origin.
    scale_tmp = max(bounds_max[0] - bounds_min[0], bounds_max[1] - bounds_min[1])
    scale_tmp = max(bounds_max[2] - bounds_min[2], scale_tmp)
    scale_tmp = 0.9 / scale_tmp
    offset = -(bounds_max + bounds_min) / 2
    return scale_tmp, offset

# Pseudo-code: normalize the mesh into the box used for rendering.
# force="mesh" collapses a multi-node .glb scene into a single mesh.
mesh = trimesh.load("xxx.glb", process=False, force="mesh")
scale, offset = normalize_scene(mesh)
new_vertices = scale * (mesh.vertices + offset)
```

Then you can use our provided intrinsic and extrinsic parameters in Blender's coordinate system.
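For the Blender side, here is a minimal, untested sketch of applying the same normalization to an imported object (not from the original comment; `scale` and `offset` are assumed to come from `normalize_scene` above, the numbers are placeholders, and the axis conversion done by Blender's glTF importer is only noted in a comment):

```python
import bpy

# Hypothetical values; in practice take them from normalize_scene() above.
scale = 0.45
offset = (-0.1, 0.05, 0.0)

obj = bpy.context.active_object  # the freshly imported mesh object
# Blender composes the local matrix as Translation @ Rotation @ Scale,
# i.e. world = scale * v + location. To reproduce scale * (v + offset),
# fold the scale into the translation:
obj.scale = (scale, scale, scale)
obj.location = (scale * offset[0], scale * offset[1], scale * offset[2])
# Note: Blender's glTF importer maps Y-up to Z-up, so the offset axes may
# need to be permuted ((x, y, z) -> (x, -z, y)) to match the imported geometry.
```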
I will give you an example that demonstrates how to transfer a *.glb into the PyTorch3D coordinate system. Then you can simply transfer PyTorch3D coordinates to the Blender coordinate system following this issue and this document:

```python
import json
from pathlib import Path

import numpy as np
import torch
import trimesh
from pytorch3d.renderer import look_at_rotation


def generate_obja_camera(view_dir, num_views=24, stride=1, start_idx=0):
    view_dir = Path(view_dir)
    R_lst = []
    T_lst = []
    scale_lst = []
    for view_idx in range(0, num_views, stride):
        view_json = view_dir / f"{view_idx:05d}" / f"{view_idx:05d}.json"
        with open(view_json, "r") as fp:
            cam_json = json.load(fp)
        cam_pos = np.array(cam_json["origin"], dtype=np.float32)
        # Swap Y/Z and flip the new Z axis (Unity to PyTorch3D?).
        cam_pos = cam_pos[[0, 2, 1]] * np.array([1, 1, -1], dtype=np.float32)
        pytorch3d_R = look_at_rotation(torch.from_numpy(cam_pos[None, :]))
        # camera.set_intrin_params(img_h, img_w, focal, max(img_w, img_h))
        pytorch3d_T = -pytorch3d_R[0].T @ torch.from_numpy(cam_pos).to(pytorch3d_R)
        R_lst.append(pytorch3d_R)
        T_lst.append(pytorch3d_T)
        # scale_lst.append(torch.from_numpy(scale[None, :]).to(pytorch3d_R))
        assert cam_json["x_fov"] == cam_json["y_fov"]
        fov = cam_json["x_fov"]
    # Rotate the view order so start_idx comes first, then close the loop.
    R_lst = R_lst[start_idx:] + R_lst[:start_idx]
    T_lst = T_lst[start_idx:] + T_lst[:start_idx]
    R_lst.append(R_lst[0])
    T_lst.append(T_lst[0])
    # scale_lst.append(scale_lst[0])
    Rs = torch.cat(R_lst, dim=0)
    Ts = torch.stack(T_lst, dim=0)
    # scales = torch.cat(scale_lst, dim=0)
    mesh_scale = cam_json["scale"][0]
    mesh_offset = torch.from_numpy(
        np.array(cam_json["offset"], dtype=np.float32)).to(pytorch3d_R)
    # mesh_offset = mesh_offset[[0, 2, 1]]
    return Rs, Ts, fov, mesh_scale, mesh_offset


def normalize_scene(mesh):
    bounds = mesh.bounding_box.bounds
    bounds_min = bounds[0]
    bounds_max = bounds[1]
    scale_tmp = max(bounds_max[0] - bounds_min[0], bounds_max[1] - bounds_min[1])
    scale_tmp = max(bounds_max[2] - bounds_min[2], scale_tmp)
    scale_tmp = 0.9 / scale_tmp
    offset = -(bounds_max + bounds_min) / 2
    return scale_tmp, offset


# Rs: rotation matrices, Ts: translation vectors, fov: field of view.
Rs, Ts, fov, mesh_scale, mesh_offset = generate_obja_camera(image_dir, start_idx=start_idx)
mesh = trimesh.load(mesh_path)
scale_tmp, offset = normalize_scene(mesh)
offset = torch.from_numpy(offset).to(mesh_offset)
mesh.export("./tmp.obj")
# ... for rendering, follow the PyTorch3D or Blender pipeline.
```
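To make the elided rendering step concrete, here is a minimal sketch of the standard PyTorch3D rasterization pipeline (not from the original comment; `Rs`, `Ts`, and `fov` are assumed to come from `generate_obja_camera` above, and whether `x_fov` is stored in radians or degrees should be checked against your JSON):

```python
import torch
from pytorch3d.io import load_objs_as_meshes
from pytorch3d.renderer import (
    FoVPerspectiveCameras, MeshRasterizer, MeshRenderer, PointLights,
    RasterizationSettings, SoftPhongShader, TexturesVertex,
)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Load the normalized mesh exported above.
meshes = load_objs_as_meshes(["./tmp.obj"], device=device)
if meshes.textures is None:
    # The exported OBJ may carry no material; fall back to plain white vertices.
    meshes.textures = TexturesVertex(
        verts_features=torch.ones_like(meshes.verts_padded()))

# degrees=False assumes x_fov is stored in radians; use degrees=True otherwise.
cameras = FoVPerspectiveCameras(R=Rs.to(device), T=Ts.to(device),
                                fov=fov, degrees=False, device=device)
renderer = MeshRenderer(
    rasterizer=MeshRasterizer(
        cameras=cameras,
        raster_settings=RasterizationSettings(image_size=512)),
    shader=SoftPhongShader(device=device, cameras=cameras,
                           lights=PointLights(device=device)),
)
images = renderer(meshes.extend(len(Rs)))  # (num_views, H, W, 4) RGBA
```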
The `offset` key in the JSON file is not used; this is a bug in the rendering system. Please use [normalize](https://github.com//issues/53#issuecomment-2366150631) to get the scale and offset.
Thank you for your prompt response. However, when I use the scale and offset computed from …
Thanks for the great work. Now I have the .glb file and want to transform and scale it in Blender to match the rendering result in G-objaverse, but I failed: the JSON file provided with G-objaverse doesn't seem directly usable for the transform in Blender. How can I get the right transform parameters? Thanks a lot.