
No calibration available #5

Open
chogan99 opened this issue Aug 8, 2024 · 4 comments

chogan99 commented Aug 8, 2024

  1. No calibration is available for the cameras.
  2. The transformations between the cameras are not available in the dataset.

Where can we find these?

belkhir-nacim (Collaborator) commented:

Hello,
We uploaded an example calibration script to the code's GitHub repository.
Let us know if you need any help using it.


chogan99 commented Aug 9, 2024

Thank you. Where can I find the calibration parameters of each camera?


chogan99 commented Aug 9, 2024

I see that in the ZED 2 dataset there is a 120 mm baseline between the cameras, but in the code the translation is a unit vector:

```python
def get_translation_matrix(translation_vector):
    """Convert a translation vector into a 4x4 transformation matrix."""
    T = torch.zeros(translation_vector.shape[0], 4, 4,
                    device=translation_vector.device)

    t = translation_vector.contiguous().view(-1, 3, 1)

    T[:, 0, 0] = 1
    T[:, 1, 1] = 1
    T[:, 2, 2] = 1
    T[:, 3, 3] = 1
    T[:, :3, 3, None] = t

    return T
```
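As a quick sanity check (my own sketch, not part of the repo): passing an actual 120 mm baseline through the helper above, assuming metres and an offset along the x axis, yields a rigid transform whose rotation block is the identity and whose translation column carries the baseline:

```python
import torch

def get_translation_matrix(translation_vector):
    """Convert an (N, 3) translation vector into (N, 4, 4) transforms."""
    T = torch.zeros(translation_vector.shape[0], 4, 4,
                    device=translation_vector.device)
    t = translation_vector.contiguous().view(-1, 3, 1)
    T[:, 0, 0] = 1
    T[:, 1, 1] = 1
    T[:, 2, 2] = 1
    T[:, 3, 3] = 1
    T[:, :3, 3, None] = t
    return T

# Hypothetical 120 mm stereo baseline, expressed in metres along x
# (both the unit and the axis are assumptions on my part):
T = get_translation_matrix(torch.tensor([[0.12, 0.0, 0.0]]))
print(T[0])
```

The rotation part stays the identity; only `T[0, 0, 3]` carries the 0.12 m offset.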


mhariat commented Nov 4, 2024

Hello,
You can find the calibration parameters in the `init_retroprojection` function:

```python
def init_retroprojection(BATCH_SIZE=1, WIDTH_INFRA=382, HEIGHT_INFRA=288,
                         f_INFRA=620, WIDTH_ZED=1920, HEIGHT_ZED=1080,
                         f_ZED=1000):
    PARAMS_INFRA = [f_INFRA, f_INFRA, WIDTH_INFRA / 2, HEIGHT_INFRA / 2]
    K_INFRA = [[PARAMS_INFRA[0], 0., PARAMS_INFRA[2], 0],
               [0., PARAMS_INFRA[1], PARAMS_INFRA[3], 0],
               [0., 0., 1., 0]]
    K_INFRA = torch.from_numpy(
        np.vstack((np.array(K_INFRA), [0, 0, 0, 1])).astype(np.float32)
    ).unsqueeze(0)

    PARAMS_ZED = [f_ZED, f_ZED, WIDTH_ZED / 2, HEIGHT_ZED / 2]
    K_ZED = [[PARAMS_ZED[0], 0., PARAMS_ZED[2], 0],
             [0., PARAMS_ZED[1], PARAMS_ZED[3], 0],
             [0., 0., 1., 0]]
    K_ZED = np.vstack((np.array(K_ZED), [-10, 10, 0, 1])).astype(np.float32)
    invK_ZED = torch.from_numpy(np.linalg.inv(K_ZED)).unsqueeze(0)

    T = get_translation_matrix(torch.Tensor([0, 0, 0]).unsqueeze(0))

    backproject = BackprojectDepth(BATCH_SIZE, HEIGHT_ZED, WIDTH_ZED)
    project = Project3D(BATCH_SIZE, HEIGHT_ZED, WIDTH_ZED)
    return (WIDTH_INFRA, HEIGHT_INFRA, K_INFRA,
            WIDTH_ZED, HEIGHT_ZED, invK_ZED, T, backproject, project)
```
The 120 mm baseline you're referring to is the distance between the two ZED lenses. It is not relevant in our case, as we are interested in capturing the image from one ZED camera (the left one) under the point of view of the Infra camera.
Let me know if you need any additional clarification!
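To make the mapping concrete, here is a minimal NumPy sketch (my own, not from the repo) of the pinhole pipeline the function sets up: backproject a ZED pixel at a given depth with `invK_ZED`, apply the ZED-to-Infra transform `T`, and project with `K_INFRA`. Note I use a standard `[0, 0, 0, 1]` bottom row for `K_ZED`, unlike the `[-10, 10, 0, 1]` row in the snippet, and an identity `T`, so all values here are illustrative rather than dataset calibration:

```python
import numpy as np

# Ideal pinhole intrinsics with the principal point at the image centre,
# as in init_retroprojection (standard [0, 0, 0, 1] bottom row assumed).
def make_K(f, width, height):
    return np.array([[f,   0.0, width / 2,  0.0],
                     [0.0, f,   height / 2, 0.0],
                     [0.0, 0.0, 1.0,        0.0],
                     [0.0, 0.0, 0.0,        1.0]])

K_INFRA = make_K(620.0, 382, 288)                      # default f_INFRA
invK_ZED = np.linalg.inv(make_K(1000.0, 1920, 1080))   # default f_ZED

T = np.eye(4)  # identity ZED->Infra transform, as in the snippet

# Backproject the ZED image centre at an assumed depth of 2 m ...
u, v, depth = 960.0, 540.0, 2.0
cam = invK_ZED @ np.array([u, v, 1.0, 1.0])
cam[:3] *= depth                 # scale the unit-depth ray to 2 m

# ... move it into the Infra frame and project with K_INFRA.
p = K_INFRA @ (T @ cam)
u_infra, v_infra = p[0] / p[2], p[1] / p[2]
print(u_infra, v_infra)  # -> 191.0 144.0, i.e. the Infra image centre
```

With an identity `T`, the ZED optical centre maps straight to the Infra optical centre; a real calibration would put the measured rotation and translation in `T`.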
