Calibrating multiple projectors & kinects? #151
hello!! I haven't tried RoomAlive, although I can guess the general jazz of it from the videos I saw a while back. How did you get Rulr to compile? Did you start with one of the SDK downloads?
The view transform corresponds to the rigid-body transform between the camera frame of reference and the world frame of reference. In OpenGL, the model and view matrices are commonly stuffed together as the modelview matrix.
The Kinect's color ⟷ depth mapping is handled by the Kinect SDK. In Rulr, the calibration is between the Kinect's WORLD coordinates and the projector's PIXEL coordinates. Happy to discuss more.
Thanks! That seems to make sense more or less, but I guess the part that I'm still hung up on is that Rulr's calibration seems to include...
Whereas what RoomAlive spits out includes:
So I don't understand why Rulr's calibration is two transforms while RoomAlive's (projector info) is one transform and a camera matrix? I would think that, since both systems let you render your scene from the projector's point of view, they must need the same information to set up the view correctly. I'm interested in using RoomAlive's calibration output since it's a relatively straightforward way of calibrating multiple Kinects and multiple projectors, which I'm not sure you can do in Rulr?
So all the Kinect info is available from the SDK directly, i.e. Rulr doesn't try to calibrate this. You can think of the view transform as being equivalent to the pose/transform, and the projection transform as being equivalent to the camera matrix.
Notes:
(Generally the calibration produces the camera matrix, and then you convert this into a projection matrix for use in live graphics.)
(Again, the view matrix is what you actually need in a live rendering pipeline.)
Hi again. Is it possible to calibrate multiple Kinects and projectors in Rulr, as you can do in RoomAlive?
Related: can you shed some light (heh) on what is calculated as part of the calibration in Rulr? From looking at what gets used in ofxRulr::Camera::beginAsCamera(), it looks like there are two transformation matrices: view & projection. Does one of these correspond to the Kinect's color ⟷ depth transform, and the other to depth ⟷ projector? Shouldn't there also be a camera matrix in there (for the projector's view, I think) as well? I'm looking at what's in the RoomAlive calibration and trying to make sense of it all, since it seems like each calibration should contain the same type of data, but it's not clear to me. Thanks!