question about projection #16
Hi, thanks for your interest in our work. You can put the following code in train.py to get the projection results:

```python
RT = torch.cat([torch.tensor(viewpoint_cam.R.transpose()), torch.tensor(viewpoint_cam.T).reshape(3, 1)], -1)[None, None].cuda()
test_image = gt_image.clone().permute(1, 2, 0)
```
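For illustration, here is a minimal sketch of how the `RT` matrix above could be used to project the Gaussian centers onto the image. `viewpoint_cam.K` and `means3D` (e.g. `gaussians.get_xyz`) are assumptions, not names confirmed by the snippet; adapt them to your own train.py.

```python
import torch

# Assumed names: viewpoint_cam.K (3x3 intrinsics) and means3D
# (N x 3 Gaussian centers in world space, e.g. gaussians.get_xyz).
K = torch.tensor(viewpoint_cam.K, dtype=torch.float32).cuda()
pts_world = means3D.detach().float()               # (N, 3) world-space points

Rm = RT[0, 0, :, :3].float()                       # (3, 3) world-to-camera rotation
Tm = RT[0, 0, :, 3:].float()                       # (3, 1) translation

pts_cam = pts_world @ Rm.T + Tm.T                  # world -> camera, (N, 3)
uv = pts_cam @ K.T                                 # camera -> image plane
uv = uv[:, :2] / uv[:, 2:].clamp(min=1e-8)         # perspective divide, (N, 2)

# Paint the projected points onto the ground-truth image as a sanity check.
u = uv[:, 0].round().long().clamp(0, test_image.shape[1] - 1)
v = uv[:, 1].round().long().clamp(0, test_image.shape[0] - 1)
test_image[v, u] = 1.0
```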
Thanks a lot! I'd also like to know which coordinate system means3D is defined in: world or SMPL?
In the world coordinate system.
Thanks. I found that the evaluation metrics are calculated over the entire image, but I want to compute PSNR, SSIM, and LPIPS only within the human-body mask. Could you help me with this?
Hi, you can calculate PSNR, SSIM, and LPIPS with the help of a bounding box mask.
Thanks for replying. I'd like to calculate PSNR, SSIM, and LPIPS inside the human mask (denoted bkgd_mask in your code). I found this code in render.py:

```python
rendering.permute(1, 2, 0)[bound_mask[0] == 0] = 0 if background.sum().item() == 0 else 1
```

Can I replace bound_mask with bkgd_mask to achieve this?
Thanks for your question. In 3D human reconstruction, we learn both the 3D human and the background. Following the routine of HumanNeRF papers, the metrics can be calculated either on the whole image or on the image cropped by a bounding box mask.
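For what it's worth, a minimal sketch of both options: PSNR restricted to the human mask, and cropping to the mask's bounding box for SSIM/LPIPS (which need spatial neighborhoods). `rendering` and `gt_image` are assumed to be (3, H, W) tensors in [0, 1] and `bkgd_mask[0]` a binary (H, W) mask; the helper names are made up.

```python
import torch

def masked_psnr(img, gt, mask):
    # PSNR over masked pixels only: average the squared error inside the mask.
    mask = mask.bool()
    mse = ((img - gt) ** 2)[:, mask].mean()
    return -10.0 * torch.log10(mse)          # data range assumed to be [0, 1]

def crop_to_mask_bbox(img, gt, mask):
    # SSIM/LPIPS need spatial structure, so crop both images to the tight
    # bounding box of the mask instead of just zeroing pixels.
    ys, xs = torch.nonzero(mask.bool(), as_tuple=True)
    y0, y1 = ys.min(), ys.max() + 1
    x0, x1 = xs.min(), xs.max() + 1
    return img[:, y0:y1, x0:x1], gt[:, y0:y1, x0:x1]

img_c, gt_c = crop_to_mask_bbox(rendering, gt_image, bkgd_mask[0])
# ssim_val = ssim(img_c[None], gt_c[None])                       # e.g. pytorch_msssim
# lpips_val = lpips_fn(img_c[None] * 2 - 1, gt_c[None] * 2 - 1)  # LPIPS expects [-1, 1]
```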
Thanks a lot! Additionally, I'd like to ask about the requirements on training viewpoints and the number of training images in GauHuman. I found that GauHuman selects training view [4] and samples a total of 100 images, one every 5 frames. To compare with other methods, I modified the training setting to use view [0] with 570 consecutive images for training, but the results were very poor. Why did the results drop so badly?
Hi, we follow the setting of instant-nvr for the performance comparison. Is the performance drop consistent across both the ZJU-MoCap and MonoCap datasets?
Hi, @skhu101 @yejr0229. Continuing with the problem mentioned above in #16 (comment): if means3D is defined in world space, why should we transform means3D from SMPL space to world space as in the code below:
Hi, means3D is defined in world space. We transform the canonical SMPL pose from world space to SMPL space, and then transform the target SMPL pose from SMPL space back to world space.
Hi, could you tell me where the transformation of means3D from world space to SMPL space is implemented? Besides, I want to perform inverse LBS on the posed means3D; how can I get the blend weight (bweight) of each posed point?
Hi, you can refer to this function for the transformation details and the blend weight of each posed point: GauHuman/scene/gaussian_model.py, line 631 at commit 732ec0d.
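For readers landing here, a minimal LBS sketch (not the repo's exact implementation; see the linked function for that), assuming `pts` are (N, 3) points, `bweights` (N, 24) per-point SMPL blend weights, and `A` (24, 4, 4) per-joint rigid transforms of the target pose:

```python
import torch

def lbs_transform(pts, bweights, A):
    # Blend the joint transforms per point, then apply them:
    # x' = (sum_j w_j A_j) x, in homogeneous coordinates.
    T = torch.einsum('nj,jab->nab', bweights, A)                   # (N, 4, 4)
    homo = torch.cat([pts, torch.ones_like(pts[:, :1])], dim=-1)   # (N, 4)
    return torch.einsum('nab,nb->na', T, homo)[:, :3]

def inverse_lbs_transform(pts_posed, bweights, A):
    # Inverse LBS: invert the blended per-point transform to go from posed
    # space back to canonical space. This needs blend weights for the *posed*
    # points, commonly obtained by querying the nearest SMPL vertices.
    T = torch.einsum('nj,jab->nab', bweights, A)
    homo = torch.cat([pts_posed, torch.ones_like(pts_posed[:, :1])], dim=-1)
    return torch.einsum('nab,nb->na', torch.inverse(T), homo)[:, :3]
```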
Hi, I want to project world_src_pts onto the 2D image plane. I used the projection() function from your previous work SHERF, but the projected 2D points seem wrong. Could you help me with this? Here is my code:
```python
src_uv = projection(world_src_pts.reshape(bs, -1, 3), camera_R, camera_T, camera_K)  # [bs, N, 6890, 3]
src_uv = src_uv.view(-1, *src_uv.shape[2:])
```
Here camera_K is the camera intrinsics, camera_R is ['R'] from smpl_param, and camera_T is ['Th'] from smpl_param.
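For comparison, a standard pinhole projection sketch (our own helper, not SHERF's `projection()`): world points to pixels via camera extrinsics `R`, `T` and intrinsics `K`. Note that in ZJU-MoCap-style annotations the camera extrinsics come from the camera calibration, while `smpl_param['R']`/`['Th']` are the SMPL body's global rotation and translation, so passing the latter as camera parameters may explain the wrong projections.

```python
import torch

def project_points(pts_world, R, T, K):
    # pts_world: (B, N, 3); R: (B, 3, 3) world-to-camera rotation;
    # T: (B, 3, 1) translation; K: (B, 3, 3) intrinsics.
    pts_cam = torch.einsum('bij,bnj->bni', R, pts_world) + T.reshape(-1, 1, 3)
    uv = torch.einsum('bij,bnj->bni', K, pts_cam)
    return uv[..., :2] / uv[..., 2:].clamp(min=1e-8)   # (B, N, 2) pixel coords
```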