Room calibration configuration for world coordinate transform. #6
Comments
I made a quick sketch in the direction of a configuration store API in 07e7bc9 and 5157f4e: ohmd_get_config() - a function that can read a requested configuration blob from the platform-specific configuration store.
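For readers of the thread, a rough illustration of what such a function could look like; these declarations are an assumption for illustration, not the actual code from those commits, and ohmd_set_config() is just an assumed counterpart:

```c
#include <stddef.h>   /* size_t */

typedef struct ohmd_context ohmd_context; /* OpenHMD's opaque context type */

/* Hypothetical signatures for the configuration store sketch above -
 * the real functions in 07e7bc9 / 5157f4e may look different. */

/* Read the blob stored under `key` from the platform-specific store into a
 * caller-provided buffer. Returns bytes written, or a negative value on error. */
int ohmd_get_config(ohmd_context* ctx, const char* key,
                    char* buf, size_t buf_size);

/* Write or replace the blob stored under `key`. Returns 0 on success. */
int ohmd_set_config(ohmd_context* ctx, const char* key,
                    const char* buf, size_t buf_size);
```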
So, when it comes to configuration, I think we should look at the bigger picture so it extends to other drivers as well. As mentioned on IRC, I think we should make sure there is a 'config-less' single-camera setup where calibration is done 'automatically', to get a decent sitting-down/quick-testing setup, while also introducing a broader configuration method so users can tweak settings. This will probably result in a human-readable config file (TOML (https://github.com/toml-lang/toml) has my preference), plus whatever we need to dump the room/camera calibration data. I would prefer to keep this either in get_s and set_s, or ohmd_device_settings_sets, since we have internal driver configuration implemented there as well. I'm fine looking into this, and will also continue on my Qt-based OpenHMD tool that has room configuration.
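Purely as an illustration of what such a human-readable file might hold, here is a hypothetical TOML layout for room and camera calibration; none of these keys are part of an agreed schema:

```toml
# Hypothetical OpenHMD room calibration file - key names are invented
# for illustration, not an agreed-upon format.

[room]
floor_height = 0.0          # metres, world Y of the physical floor
forward = [0.0, 0.0, -1.0]  # user-facing direction chosen during setup

[[camera]]
serial = "WMHD12345"        # example serial, not a real device
position = [0.0, 1.2, 1.5]  # metres, camera origin in world coordinates
orientation = [0.0, 0.0, 0.0, 1.0]  # quaternion (x, y, z, w)
```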
We can certainly have some sort of 'configuration-less' setup that assumes that when the HMD is first found in any camera view it's at 0,0,0 and facing forward, or so. It's possible from there to work backward to the position of each subsequent camera when it finds the headset.
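A rough sketch of that bootstrap step, assuming poses are stored as position plus quaternion; the types and helpers here are placeholders, not existing OpenHMD code. If the headset defines the world origin the first time a camera sees it, that camera's world pose is just the inverse of the headset pose it observes:

```c
typedef struct { float x, y, z; } vec3f;
typedef struct { float x, y, z, w; } quatf;
typedef struct { vec3f pos; quatf orient; } posef;

/* Rotate vector v by quaternion q (q assumed normalised). */
static vec3f quat_rotate(quatf q, vec3f v)
{
    /* t = 2 * cross(q.xyz, v); v' = v + q.w * t + cross(q.xyz, t) */
    vec3f t = { 2.0f * (q.y * v.z - q.z * v.y),
                2.0f * (q.z * v.x - q.x * v.z),
                2.0f * (q.x * v.y - q.y * v.x) };
    vec3f r = { v.x + q.w * t.x + (q.y * t.z - q.z * t.y),
                v.y + q.w * t.y + (q.z * t.x - q.x * t.z),
                v.z + q.w * t.z + (q.x * t.y - q.y * t.x) };
    return r;
}

/* Given the headset pose observed in a camera's local frame, and the
 * assumption that the headset sits at the world origin facing forward,
 * return that camera's pose in world coordinates (the inverse pose). */
static posef bootstrap_camera_pose(posef hmd_in_cam)
{
    posef cam_in_world;
    quatf qinv = { -hmd_in_cam.orient.x, -hmd_in_cam.orient.y,
                   -hmd_in_cam.orient.z,  hmd_in_cam.orient.w };
    vec3f p = quat_rotate(qinv, hmd_in_cam.pos);
    cam_in_world.orient = qinv;
    cam_in_world.pos.x = -p.x;
    cam_in_world.pos.y = -p.y;
    cam_in_world.pos.z = -p.z;
    return cam_in_world;
}
```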
Having a setup procedure and a configuration store is still best in order to really register the VR setup in a physical room and get correct user heights. TOML looks like a fine candidate for storing the configuration information. Since such a thing is driver specific (different drivers use different tracking methods and therefore want to store different information), I was thinking in terms of an internal API that reads or writes from a standard user configuration location, rather than placing the burden on each application. That said, a method where applications can specify an application-private location for OpenHMD to store or load that information could also be useful.
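On Linux, the "standard user configuration location" would presumably follow the XDG Base Directory convention; a minimal sketch of resolving it, where the openhmd/ subdirectory name is an assumption:

```c
#include <stdio.h>
#include <stdlib.h>

/* Resolve a per-user config path such as $XDG_CONFIG_HOME/openhmd/<name>,
 * falling back to ~/.config when XDG_CONFIG_HOME is unset.
 * Returns 0 on success, -1 if the buffer is too small or HOME is missing. */
static int ohmd_user_config_path(const char* name, char* out, size_t out_size)
{
    const char* xdg = getenv("XDG_CONFIG_HOME");
    int n;

    if (xdg && xdg[0] != '\0') {
        n = snprintf(out, out_size, "%s/openhmd/%s", xdg, name);
    } else {
        const char* home = getenv("HOME");
        if (!home)
            return -1;
        n = snprintf(out, out_size, "%s/.config/openhmd/%s", home, name);
    }
    return (n > 0 && (size_t)n < out_size) ? 0 : -1;
}
```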
Another useful thing to store in a private cache is the JSON blob (or the useful data parsed from it) that comes from each controller, describing its LED positions. Reading the full blob over the radio causes a measurable delay at startup, but there is a hash in the firmware that can be read. If the hash doesn't change, you can use the cached copy of the JSON blob and save time.
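The caching logic could be as simple as keying the cache file on that hash. In this sketch, read_controller_hash() and read_controller_json() are placeholders for whatever the radio code provides, not real functions:

```c
#include <stdio.h>
#include <stdlib.h>

/* Placeholder protocol functions - stand-ins, not real OpenHMD code. */
extern int read_controller_hash(int dev, char* hash, size_t size); /* fast */
extern char* read_controller_json(int dev);                        /* slow radio read */

/* Return the controller's JSON blob, using a per-hash cache file so the
 * slow radio transfer only happens when the firmware blob changes.
 * The caller frees the returned buffer. */
static char* get_controller_json(int dev, const char* cache_dir)
{
    char hash[64], path[512];
    char* json = NULL;
    FILE* f;

    if (read_controller_hash(dev, hash, sizeof(hash)) < 0)
        return read_controller_json(dev); /* no hash available: always slow path */

    snprintf(path, sizeof(path), "%s/controller-%s.json", cache_dir, hash);

    if ((f = fopen(path, "rb"))) { /* cache hit: reuse the stored blob */
        long len;
        fseek(f, 0, SEEK_END);
        len = ftell(f);
        fseek(f, 0, SEEK_SET);
        if (len > 0 && (json = malloc(len + 1))) {
            if (fread(json, 1, len, f) == (size_t)len) {
                json[len] = '\0';
            } else {
                free(json);
                json = NULL;
            }
        }
        fclose(f);
        if (json)
            return json;
    }

    json = read_controller_json(dev); /* cache miss: do the slow read once */
    if (json && (f = fopen(path, "wb"))) {
        fputs(json, f);
        fclose(f);
    }
    return json;
}
```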
See also #25
Two things about configuration. One: are there any restrictions on how multiple sensors need to be spaced or placed? The other is that something like safety barriers (marked locations that you should avoid) should be described with proper 3D coordinates, not a simple vertical line.
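As a sketch of what "proper 3D" barrier data could look like in a configuration store; the names and layout are invented for illustration, not an existing format:

```c
typedef struct { float x, y, z; } vec3f;

/* A hypothetical barrier/chaperone region: a closed polygon traced at
 * floor level plus an explicit height, rather than a single vertical line. */
typedef struct {
    vec3f* floor_points;   /* polygon vertices at floor level, in order */
    unsigned num_points;
    float height;          /* metres from the floor polygon up to the top edge */
} ohmd_barrier_region;
```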
At the moment, there's no restriction on how you can space multiple sensors - it really is just about placing them so that the devices stay in view of at least one as you move around. As for possibilities for chaperone / barrier configuration - that's still unexplored territory for me, but I think that will depend on what is possible in the things that are using OpenHMD. I don't know if SteamVR can do more than draw walls, for example.
It's unfortunate that ceilings aren't really possible. But back to multiple sensors: have you seen any difference in tracking with the spacing and number of sensors you've been using?
I mostly test with 1 sensor, because 2 is annoying. Until there's a room calibration utility, the code infers the position of the tracker camera(s) at startup based on the first observation of the headset. If that first tracking lock for each camera isn't very precise, the cameras end up reporting slightly different tracking positions, leading to shaking/fighting. It would be better to do some averaging - or to implement a proper calibration step and store known-good camera positions in a config file. Hence this open issue :)
If I remember correctly, the Oculus software asks you to verify the forward direction by facing your monitor. This helps by using the position of the user as a reference point for everything else.
I have found that it has issues determining the direction of the ground (gravity vector?) if the 2 cameras are at different heights (I have one on top of my PC, and one mounted on a tripod). It would be nice if we could calibrate for that too.
I think the key will be to detect that the headset is being held still and use some averaging over a second or two to acquire the calibration, instead of using a single pose observation for each camera.
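A minimal sketch of that idea, with arbitrary thresholds and a naive averaging scheme - the values and helper types are assumptions, not tuned code. The headset counts as "still" while successive positions stay within a small radius; once enough samples accumulate, the positions are averaged and, since the orientations are nearly identical, component-wise quaternion averaging is an acceptable approximation:

```c
#include <math.h>

typedef struct { float x, y, z; } vec3f;
typedef struct { float x, y, z, w; } quatf;

#define STILL_RADIUS_M 0.005f  /* assumed threshold: 5 mm of jitter allowed */
#define STILL_SAMPLES  120     /* e.g. ~2 seconds at 60 observations/sec */

typedef struct {
    vec3f first_pos;
    vec3f pos_sum;
    quatf orient_sum;
    int count;
} calib_accum;

/* Feed one pose observation. Returns 1 and writes the averaged pose once
 * enough consecutive "still" samples have been collected, 0 otherwise.
 * Any sample that moves too far resets the accumulator. */
static int calib_accum_update(calib_accum* a, vec3f pos, quatf orient,
                              vec3f* out_pos, quatf* out_orient)
{
    if (a->count > 0) {
        float dx = pos.x - a->first_pos.x;
        float dy = pos.y - a->first_pos.y;
        float dz = pos.z - a->first_pos.z;
        if (sqrtf(dx * dx + dy * dy + dz * dz) > STILL_RADIUS_M)
            a->count = 0; /* moved - start over */
    }

    if (a->count == 0) {
        a->first_pos = pos;
        a->pos_sum = (vec3f){ 0, 0, 0 };
        a->orient_sum = (quatf){ 0, 0, 0, 0 };
    }

    a->pos_sum.x += pos.x; a->pos_sum.y += pos.y; a->pos_sum.z += pos.z;

    /* Flip quaternions into the same hemisphere before summing. */
    float sign = (orient.w * a->orient_sum.w + orient.x * a->orient_sum.x +
                  orient.y * a->orient_sum.y + orient.z * a->orient_sum.z) < 0 ? -1.f : 1.f;
    a->orient_sum.x += sign * orient.x; a->orient_sum.y += sign * orient.y;
    a->orient_sum.z += sign * orient.z; a->orient_sum.w += sign * orient.w;
    a->count++;

    if (a->count < STILL_SAMPLES)
        return 0;

    out_pos->x = a->pos_sum.x / a->count;
    out_pos->y = a->pos_sum.y / a->count;
    out_pos->z = a->pos_sum.z / a->count;

    float n = sqrtf(a->orient_sum.x * a->orient_sum.x + a->orient_sum.y * a->orient_sum.y +
                    a->orient_sum.z * a->orient_sum.z + a->orient_sum.w * a->orient_sum.w);
    out_orient->x = a->orient_sum.x / n; out_orient->y = a->orient_sum.y / n;
    out_orient->z = a->orient_sum.z / n; out_orient->w = a->orient_sum.w / n;

    a->count = 0; /* ready for the next calibration run */
    return 1;
}
```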
Hi @thaytan, I don't have much time, but I come by from time to time to see if I can help. I normally get lost in the huge number of branches you have. Do you follow any convention? It would be great if you could merge everything that's working into one branch so we can start looking it over, plus a small guide on how to set up the environment. Given the advances you've made, I think a few things are required:
I know I'm asking for a lot. But it will really help. Thank you again.
It seems like merging the Kalman filter branch to master, and working in the simulation one, is the right way. Correct?
I don't know much about how to properly do the room setup. But from my master's in computer vision I remember that two cameras are required to get a precise position; otherwise there will be drift. And I suppose that with the point cloud of the headset we can get the distance to the camera.
If I were to implement this, I would capture the frames from the left and right cameras, annotated with timestamps so we know the relative drift, and run a reconstruction algorithm using both, testing different versions to find the right compromise between speed and accuracy. For this to work we have to be able to capture and save the images, and have a test harness to apply algorithms and measure timings and actual accuracy. The simulator would be great, but a simple integration test will do, and we can add benchmarks.
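As a sketch of the "measure timings" part, assuming a hypothetical process_frame() stands in for whatever reconstruction algorithm is under test:

```c
#include <stdio.h>
#include <time.h>

/* Placeholder for the algorithm under test - not an existing function. */
extern void process_frame(const unsigned char* frame, int width, int height);

/* Time n_frames calls to process_frame() and report the average cost,
 * using the monotonic clock so wall-clock adjustments don't skew results. */
static void benchmark_frames(const unsigned char* const* frames,
                             int n_frames, int width, int height)
{
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);

    for (int i = 0; i < n_frames; i++)
        process_frame(frames[i], width, height);

    clock_gettime(CLOCK_MONOTONIC, &t1);
    double elapsed = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("%d frames in %.3f s (%.2f ms/frame)\n",
           n_frames, elapsed, 1000.0 * elapsed / n_frames);
}
```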
@gadLinux If you need help with testing, I can assist.
Hi @Doomsdayrs Yes. Could you write a few lines in the readme on how to set up GStreamer or PipeWire to capture and annotate the generated images... I want to build a small tool, maybe based on what libsurvive has, to be able to test the algorithms.
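Not a substitute for that readme, but a minimal capture sketch using GStreamer's C API; the device path and pipeline string are assumptions, and the Rift sensors may need extra initialisation before they appear as a plain V4L2 device:

```c
/* Build: gcc capture.c $(pkg-config --cflags --libs gstreamer-1.0) */
#include <gst/gst.h>

/* Record the assumed /dev/video0 device into a Matroska file until
 * interrupted; gst-launch-1.0 with the same pipeline string also works. */
int main(int argc, char** argv)
{
    gst_init(&argc, &argv);

    GError* err = NULL;
    GstElement* pipeline = gst_parse_launch(
        "v4l2src device=/dev/video0 ! videoconvert ! jpegenc "
        "! matroskamux ! filesink location=capture.mkv", &err);
    if (!pipeline) {
        g_printerr("Failed to build pipeline: %s\n", err->message);
        return 1;
    }

    gst_element_set_state(pipeline, GST_STATE_PLAYING);

    /* Block until an error or end-of-stream message is posted on the bus. */
    GstBus* bus = gst_element_get_bus(pipeline);
    GstMessage* msg = gst_bus_timed_pop_filtered(bus, GST_CLOCK_TIME_NONE,
                          GST_MESSAGE_ERROR | GST_MESSAGE_EOS);

    if (msg)
        gst_message_unref(msg);
    gst_object_unref(bus);
    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(pipeline);
    return 0;
}
```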
Moreover, there's no branch that I can take as a reference because features are spread all over the code. Maybe we can merge what's working (or mostly working) now to a dev branch?
I was able to get the video in a Matroska container, and the traces in JSON. I made a few changes to the libsurvive viewer and have now started trying things. There's still a lot to do to understand the whole code and its code paths, but I'm on my way.
We need a configuration store that can record information about the room setup and relative positions of the Rift sensor cameras.
After that, we'll need a room setup utility like the official Rift software has, which can find the headset or one of the controllers in the view of multiple cameras simultaneously and work backward to compute the relative position of the cameras to each other, and hence a camera-to-world-coordinate transform for each camera.
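In transform terms (the notation and types below are assumptions for illustration, not code from the issue): if camera A's world pose is already known and both cameras observe the same device at the same instant, camera B's world pose follows by composing rigid transforms.

```c
typedef struct { float m[4][4]; } mat4f; /* row-major rigid transform */

static mat4f mat4f_mul(mat4f a, mat4f b)
{
    mat4f r;
    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 4; j++) {
            r.m[i][j] = 0.0f;
            for (int k = 0; k < 4; k++)
                r.m[i][j] += a.m[i][k] * b.m[k][j];
        }
    return r;
}

/* Invert a rigid transform (rotation + translation only). */
static mat4f mat4f_rigid_inverse(mat4f t)
{
    mat4f r = {{{1,0,0,0},{0,1,0,0},{0,0,1,0},{0,0,0,1}}};
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++)
            r.m[i][j] = t.m[j][i];              /* R^T */
    for (int i = 0; i < 3; i++)
        r.m[i][3] = -(r.m[i][0] * t.m[0][3] +
                      r.m[i][1] * t.m[1][3] +
                      r.m[i][2] * t.m[2][3]);   /* -R^T * t */
    return r;
}

/* Given camera A's known world pose and the device pose each camera
 * observes at the same instant, compute camera B's world pose:
 *   T_world_camB = T_world_camA * T_camA_dev * inverse(T_camB_dev) */
static mat4f solve_camera_b_pose(mat4f world_from_cam_a,
                                 mat4f cam_a_from_dev,
                                 mat4f cam_b_from_dev)
{
    return mat4f_mul(mat4f_mul(world_from_cam_a, cam_a_from_dev),
                     mat4f_rigid_inverse(cam_b_from_dev));
}
```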