
PSA: Watch PSVRTracker #14

Open
m-7761 opened this issue Oct 19, 2018 · 20 comments

@m-7761

m-7761 commented Oct 19, 2018

I think there may be movement yet for PSVR... watch this (https://github.com/HipsterSloth/PSVRTracker) space. I think it will be the new PSVRToolkit at some juncture. (I'm going to start working with cameras before long.)

P.S. Here (http://oddsheepgames.com/?topic=here-is-a-demo-that-has-great-anti-drift-steadiness-qualities) is info on a demo (https://www.reddit.com/r/KingsField/comments/9kj984/very_early_playable_section_of_kf2_for_windows/) that includes a hybrid approach to defeating drift that works very well. I talked about developing this before. Just for the record, it works well, and is straightforward to implement. With it, you can easily get by without position tracking, since the approximation of the set's orientation is very good. It can get thrown off, but for the most part, you can work for 30 minutes or hours and not drift. It seems especially good at defeating roll drift, which I think is most important. I don't know why.

FYI: That demo's FOV is fixed for PSVR but uses a pretty obnoxious value to approximate the game it's porting. Alt+F3 can change the value in the INI file. The original game distorts the picture in some way that I don't yet understand. It's pretty interesting that it makes such a high FOV possible.

@dylanmckay
Owner

Great find!

I trawled through the King's Field ZIP but could not find any source code; it'd be cool to hear a bit more information about the hybrid algorithm.

I've been using a Madgwick filter from a library, plugging in the same beta value as one of the PSVR example projects. Your oddsheepgames post talks about having two beta values, which seems incompatible with plain-old Madgwick. What IMU algorithm are you using?

@dylanmckay
Owner

Just spotted this, makes sense now

In the meantime I quickly developed a hybrid system that blends two side-by-side Madgwick integrators (mainly copied from existing sources) so that one is optimized for its anti-drift quality, and the other for its picture stability, so that when your head isn't moving (imperceptible movements) the more drifty/stable position is shown, until, that is, it drifts too far from the other's better estimate, at which point some fudging is required to reel it in to the orbit of the more chaotic/correct approximation.

@dylanmckay
Owner

dylanmckay commented Oct 19, 2018

I've added a link to PSVRTracker and all the other examples I know of to the docs in d29ed97.

@m-7761
Author

m-7761 commented Oct 19, 2018

Sorry, I just updated/cleaned up (removed space-based indents) the source code. The relevant code is in this (http://svn.swordofmoonlight.net/code/SomEx/som.mocap.cpp) file, which I think I've shared before. It's buried in game-type logic. The source files are usually updated with each release; however, this demo is between releases and is not really meant to demonstrate this, but I took an hour or two to improve its PSVR experience before publishing it.

The block that does the blending, I just threw together intuitively. It looks fine. It begins with the following comment:

//slerp/attractor: should be aggressive as long as head is not still

FWIW I think this probably offers a far better experience than any existing PSVR projects have... outside of the PS3. Possibly there too! But I'm not basing that on anything, other than that code I've seen is not using the higher beta value. Trinus PSVR's maintainer contacted me a while back and said they were adding an INI-like option to change the beta factor, but I think this is not useful in itself. They had not thought about it, and after investigating determined that it can technically depend on factors like altitude, temperature, humidity, etc., but it seems to me that the difference due to these factors would amount to no more than fine tuning, and that the 0.1 value must be inherently far better, regardless of these factors...

I'm a little superstitious that using float works better for defeating drift, which is why I implemented the integrator class as a template. It's hard to say, because all I can go on, without a deep understanding of the difference, is spending time in VR with either value format and saying anecdotally whether drift seemed better or worse, which is really hard to say if drift is nearly eliminated. It's certainly far better than my unit's Home Theater mode using the original firmware, which suggests that Sony used an inferior method at production time. (I think my color correction is a lot better than theirs too, but it still has a lot of room for improvement, and is not methodical enough. I kind of don't like to think about how bad the color is out of the box... and the fact that it's never going to be satisfying.)

EDITED: For the record, the source code is in the project development software (Sword of Moonlight) whereas the demo is a product developed with said development software.

@m-7761
Author

m-7761 commented Oct 19, 2018

There is a bug (I think*) in the following code (I noticed it looking it over) that kind of helps to illustrate how calibrating the "steadycam" doesn't really matter, since it converges on the "antidrift" calibration almost instantly regardless. FWIW, not wanting to overthink it, the attractor code converges on the bind-pose of the antidrift target, and since that doesn't change unless recentered, they both end up with the same bind-pose in short order. They are implemented independently mainly because it's less code that way.
*I may have done it intentionally, since it is pretty pointless to calibrate... but if so I forgot about it :)

		//scalar2 beta; 
		if(samples>-samplesN/4*3) //-1500
		{
			beta = beta2 = 1.5f;
		}
		else if(samples>-samplesN)
		{
		//	beta = best; //0.05f;
		}
		else //beta = best; //beta = 0.035f;

		//MadgwickAHRS(angularAcceleration,linearAcceleration,beta,interval);		
		steadycam.MadgwickAHRS(angularAcceleration,linearAcceleration,beta,interval);
		antidrift.MadgwickAHRS(angularAcceleration,linearAcceleration,beta2,interval);

@m-7761
Author

m-7761 commented Oct 20, 2018

BTW: Since you downloaded, here (http://csv.swordofmoonlight.net/SomEx.dll/1.2.2.8.zip) is a new version of the main DLL in the EX/SYSTEM folder that adds dedicated support for Sony's DualShock4 drivers. The TXT file in the PSVR folder describes how to turn on stereo mode, after turning on the PSVRToolbox provided. It automatically switches between the set's home-theater and pass-through modes.

@m-7761
Author

m-7761 commented Oct 26, 2018

Sorry to continue to add to this, but just FYI for readers, I've edited a comment on FOV without PSVR into the top post, since I realized the default setting is pretty unpleasant. (The early demo is that way to approximate the PlayStation game but is missing something.)

@dylanmckay
Owner

dylanmckay commented Nov 2, 2018

Here's a summary for any readers.

@mick-p1982's description of the head tracking algorithm:

In the meantime I quickly developed a hybrid system that blends two side-by-side Madgwick integrators (mainly copied from existing sources) so that one is optimized for its anti-drift quality, and the other for its picture stability, so that when your head isn't moving (imperceptible movements) the more drifty/stable position is shown, until, that is, it drifts too far from the other's better estimate, at which point some fudging is required to reel it in to the orbit of the more chaotic/correct approximation.

Here's the snippet of code (from som.mocap.cpp) that sets up the integrators.

//TODO: TRY <double> BUT FLOAT SEEMS MORE STABLE, DRIFTWISE
//IT'S HARD TO SAY THOUGH, AND AS TO WHY IS A MYSTERY IF SO
Madgwick<float> antidrift;
Madgwick<scalar2> steadycam;
som_mocap_BMI055Integrator()
:antidrift(/*0.1f*/0.125f) //RATIONAL FRACTION
,steadycam(/*0.05*/0.035)

Here are the beta values for the two Madgwick integrators used. Anecdotal evidence suggests using 32-bit float values for the anti-drift calculations and 64-bit doubles for the picture-steadying calculations.

Integrator          Suggested data type    Beta
Anti-drift          float                  0.125
Picture steadying   double                 0.035

I can't quite tell how the rotation quaternions are combined yet; one [untested] idea of mine is interpolating between the two rotation quaternions produced by the two Madgwick filters at the midpoint between the two, like how you can interpolate a line from two points and then choose the midpoint to find a point in the center of the two.

@dylanmckay
Owner

Sorry to continue to add to this

I'm the only one getting emails, but I find this really interesting! I struggled to get good motion tracking in my program; your solution sounds like a very novel and easy way to get it working well.

@m-7761
Author

m-7761 commented Nov 2, 2018

I fixed a few (strange) typos in my earlier post. Right now my PSVR is being serviced under warranty for debris that is loose behind the lenses; maybe some adhesive broke free inside the housing. Otherwise I might be working with cameras/PSVRTracker.

I thought I left it on a value even less than 0.035, but maybe I Ctrl+Z'd that away accidentally. Truth is these values can be pushed higher/lower as much as is comfortable, as long as the experience doesn't seem to get worse. I know at some point too high a value (more than 0.1) will stop working altogether. I've pushed it to 0.125 in the hope that more aggressive is even better; there may be something magical about 0.1, but 0.125 is not bad. (EDITED: 0.125 is exactly representable in binary floating point (it's 1/8), which might be more stable. It's just hocus pocus, but that's why I chose it.)

What this setup buys you is a lot more freedom to explore values for the beta factors, since it decouples what you see from the drift fighting aspect of the fusion algorithm. I think 0.035 and 0.1 are good starting points.

The reason for float is that using double (and feeling out constants to all significant digits) is very smooth, but I worry that somehow the very small rounding errors slowly creep into the system, whereas with float the values are all truncated and the system is inherently chaotic (it's always bouncing around all over the place), and maybe this actually improves things... I say "maybe" only because I've only tried double briefly, and it seemed to drift a little more, but it could just be my imagination, whereas I've spent a lot of time with float and know what to expect from it... double needs more testing/I cannot endorse it, IOW. double certainly looks better, and that is why it's used for the picture.

I can't quite tell how the rotation quaternions are combined yet; one [untested] idea of mine is interpolating between the two rotation quaternions produced by the two Madgwick filters at the midpoint between the two, like how you can interpolate a line from two points and then choose the midpoint to find a point in the center of the two.

A basic description is: you measure the angle between the two quaternions and do a "slerp" that pulls A (0.035) toward B (0.125) by a degree, "t," that is larger when the angle is larger. After that, it's just a matter of deciding how to arrive at "t." I think the example code uses an exponential function and some maximum angle to arrive at "t." So the basic idea is very simple, and the devil is in how to parameterize "t."

This process is interpolation, as you say, except it's moving the points in rotation space, interpolating rotations, and if you did as you suggest, you would set "t" to 0.5 under all circumstances. That should give you an idea of what is going on, but it would not be a good approach, because either the picture would always be pulled closer, so that 0.5 keeps moving ever closer to the chaotic center of this binary system (and so the picture becomes chaotic), or it is not pulled, and the picture drifts away from the orbit of its chaotic star, so to speak, until they are a world apart and your picture is between them (in rotational terms, since we are speaking about orientation).
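
To make that concrete, here is a minimal C++ sketch of an angle-dependent slerp blend of the two estimates. It is an illustration only: the toy Quat type, the maxAngle constant, and the quadratic curve for "t" are assumptions for the example, not the actual som.mocap.cpp code.

#include <cmath>

struct Quat { float w, x, y, z; };

//Shortest-arc spherical interpolation from a toward b by t in [0,1].
static Quat slerp(Quat a, Quat b, float t)
{
	float d = a.w*b.w + a.x*b.x + a.y*b.y + a.z*b.z;
	if(d<0){ b.w = -b.w; b.x = -b.x; b.y = -b.y; b.z = -b.z; d = -d; }
	if(d>0.9995f) //nearly identical: fall back to a normalized lerp
	{
		Quat q = { a.w+t*(b.w-a.w), a.x+t*(b.x-a.x), a.y+t*(b.y-a.y), a.z+t*(b.z-a.z) };
		float n = std::sqrt(q.w*q.w+q.x*q.x+q.y*q.y+q.z*q.z);
		q.w/=n; q.x/=n; q.y/=n; q.z/=n; return q;
	}
	float theta = std::acos(d);
	float sa = std::sin((1-t)*theta)/std::sin(theta);
	float sb = std::sin(t*theta)/std::sin(theta);
	return { sa*a.w+sb*b.w, sa*a.x+sb*b.x, sa*a.y+sb*b.y, sa*a.z+sb*b.z };
}

//Pull the steady (low-beta) estimate toward the anti-drift (high-beta) estimate.
//"t" grows with the angular separation, so the picture is left alone while the
//head is still and reeled back in once it strays too far from the other estimate.
Quat blend(Quat steadycam, Quat antidrift)
{
	float d = std::fabs(steadycam.w*antidrift.w + steadycam.x*antidrift.x
	                  + steadycam.y*antidrift.y + steadycam.z*antidrift.z);
	float angle = 2*std::acos(d>1?1:d); //angle between the two orientations
	const float maxAngle = 0.05f;       //assumed tuning constant (radians)
	float t = angle>=maxAngle ? 1.0f : (angle/maxAngle)*(angle/maxAngle);
	return slerp(steadycam, antidrift, t);
}

Setting "t" to a constant 0.5 is then just the special case discussed above, where the pull ignores the angle.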

@dylanmckay
Owner

Well put, I think that answers all my questions around it.

Next time I start working on my VR project again I'll implement it and report back.

dylanmckay added a commit to dylanmckay/hmdee that referenced this issue Nov 22, 2018
This implements a new sensor fusion algorithm, proposed by @mick-p1982,
that combines two independent Madgwick filters into one by a Quaternion
SLERP.

The algorithm is described here:
dylanmckay/psvr-protocol#14

One of the filters acts as a rotation stabilizer. The other filter has a
beta value optimized for anti-drift.

Combining the two leads to better head tracking.
@dylanmckay
Owner

Here's my implementation

dylanmckay/hmdee@30613bc

I've left t=0.5 for now but will calculate it properly in the future.

@m-7761
Author

m-7761 commented Nov 23, 2018

Hey, I finally got my set back last night (it took 3 weeks in all, because I left the cables in town, since Sony didn't want them) and just FYI it feels like it's more drifty than I remember, especially at the beginning, so I need to look into it. I think it probably should be scaled back to 0.1. I think the unit/box I got back is not the same one I sent in for servicing.

It's also much colder now, but I am still unconvinced temperature is a major factor for the sensors.

P.S. It seems to me like the resting values have a little bias in them. But it's hard to tell if it's imaginary or not. It's hard to explain what I mean: it's like the roll doesn't drift away, but it sometimes feels like it's stuck where up is a little bit off, but it's really hard to tell with the headset on whether your head is upright or not, because you have this weight on it, and maybe your neck muscles are just asymmetrical. I find what helps is to move my head more often and in a more exaggerated way, so that the sense of being a tad bit crooked cannot take root. In any event, I'm contemplating adding a static offset/calibration to see if it can be made more comfortable.

BTW: In a few days I'm going to be updating that demo. But while it will be a nice update, the VR elements are not going to change. I've been eager to look at camera work. But I think a more interesting area to pursue is color correction, which could use a more methodical approach than I've so far afforded it. The color is quite good in the updated demo, but mainly because I've figured out how to make the game appear identical to the original (there is a pretty complex color transformation function, and I've since figured out the 3-point+ambient lighting through observation. It took hours over many evenings to work it all out by eye!)

@m-7761
Author

m-7761 commented Nov 25, 2018

Hey again, it looks like sticking to 0.1 is best.

Right now I'm concerned about some code in the PSVRFramework that I don't understand (https://github.com/gusmanb/PSVRFramework/blob/master/PSVRFramework/BMI055Integrator.cs) and worry is unsound. The gravityVector part I can't understand or justify. I looked at your hmdee code to see if it did calibration this way, and could not find anything.

I looked at other code, and could not find it there either. Two or three weeks ago the TrinusPSVR person asked me to take on a job for them to implement this feature, which I turned down, offering to answer questions instead. But I wrote them again just now, because I realized their code is not open-source, and so maybe it would be worth it to help them in order to be able to look into its code.

The value for accelOffset in that code, after calibration, is generally small. I can't see a difference if I set it to 0,0,0. So I think it may just not be registering as a problem, since the Madgwick algorithm seems to do well with or without it. The logical thing to do (to my mind) would be to remove the influence of the gravity-based acceleration by subtracting it from the readings. But that degrades tracking considerably. Adding (which seems strange, but is what this code is effectively doing, although I think the value tends to be negative) looks fine, until you start looking up/down, at which point it drags back to the middle.
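
For readers, here's a minimal C++ sketch of what this kind of gravity/bias calibration appears to be trying to do, as I read it; the Vec3 type and the exact arithmetic are illustrative assumptions, not the PSVRFramework code:

#include <cmath>

struct Vec3 { float x, y, z; };

//Illustration only: estimate an accelerometer bias from n samples taken while the
//set rests perfectly still and level. The readings are in g's (1.0 == 1G), so the
//normalized mean is taken as the gravity direction and whatever is left over is
//treated as bias (the accelOffset-style value).
Vec3 calibrateAccelBias(const Vec3 *samples, int n)
{
	Vec3 mean = {0,0,0};
	for(int i=0;i<n;i++)
	{
		mean.x += samples[i].x; mean.y += samples[i].y; mean.z += samples[i].z;
	}
	mean.x/=n; mean.y/=n; mean.z/=n;

	float len = std::sqrt(mean.x*mean.x + mean.y*mean.y + mean.z*mean.z);
	Vec3 gravity = { mean.x/len, mean.y/len, mean.z/len }; //assumed 1G direction

	//bias = what the sensor reports minus what gravity alone should read
	return { mean.x - gravity.x, mean.y - gravity.y, mean.z - gravity.z };
}

The catch, discussed below, is that this only yields a sensible bias if the set really is still and level during the sampling window.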

@m-7761
Author

m-7761 commented Nov 26, 2018

One last follow up!

I think I realized the gravityVector code uses the normalized vector because the units are such that 1 equals 1G of gravity. I think that is the kind of thing that should be commented on in code :)

Still, the more I thought about it, the more I realized it is not really a good thing, because the set must be upright, and nearly perfectly level, to get a good calibration, i.e. to find out whether the acceleration reports are biased. I think it's probably safer to assume they are not biased, or to do the calibration in a controlled way, offline.

I don't know if the gyroscope calibration is worthwhile or not. I guess maybe it is independent of orientation, but I don't actually know for certain. What I do know is it seems unnecessary, and it is more likely to add false correction as a result of something moving during the sampling period. Since I like to just grab the thing and throw it on, and recenter it after it's on (it's practically impossible to put it on first and do anything!) I've just disabled this stuff in my code.

Normalizing to get the gravity vector doesn't even make sense, since 1G can depend on elevation, and maybe even latitude/longitude, and it's not safe to assume that the main force is gravity, if the whole purpose is to eliminate bias, since bias will contribute to the direction of the main force. So I'm very skeptical of this business.

ON ANOTHER TOPIC: I was able to do the test with the manual realignment idea. There seems to be something to it, even though I've no clue why it might be necessary. I came up with these values from my INI file:

[Stereo]
head = 0 -0.055 0.0128

This is a tricky subject, because you'd be tempted to center the picture (reset "drift" to 0,0,0) and go by what you see. But that's not what this is. This is about how drift accumulates, and where it seems to settle. If it seems to settle in a way that is asymmetric, then that is when these values are a help...

Fortunately, I've found that it's pretty simple to determine the values: by centering, and then looking pretty forcefully left-and-right, or for roll, rolling your head left-and-right. If these values are well tuned, then the result of all of these forceful movements should be that the picture returns to the middle when looking forward, or is upright when not rolling your head.

It's hocus pocus, but it seems to work. These are not minor adjustments; they are quite significant. Combined with this, it helps to move your head more if you are playing a game. Something about movement cancels out the drift. It reels it back to an approximate value. It's kind of interesting that it doesn't just drift away forever. If you are seeing drift, shaking your head can shake it out. It helps to have a static overlay to compare 3D things to.

Fortunately, I have a menu system to work with that does not yet move in response to the VR set. It probably should move, because it doesn't really fit into the view finder, but other than that, I think it works fine to have a static element on the screen. It would totally work if the resolution were higher.

EDITED: Those 3 values are radians around the X, Y, and Z axes. X is pitch, Y is yaw, for the record. I apply them with 3 successive quaternions around each axis. BTW, the upright gravity vector is approximately 1,0,0, meaning the X part of the accelerometer report is up/down. I guess positive is down. (I'm not 100% certain it's correct to apply the corrections in succession like that, but it's fine when looking forward, i.e. the only time you notice drift irregularity. Pitch seems to work best with 0. It may not even drift at all.)
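
For illustration, here's a small C++ sketch of applying those three INI values as successive single-axis rotations; the quaternion helpers and the multiplication order are assumptions for the example, not the actual SomEx code:

#include <cmath>

struct Quat { float w, x, y, z; };

static Quat axisAngle(float ax, float ay, float az, float radians)
{
	float h = radians*0.5f, s = std::sin(h);
	return { std::cos(h), ax*s, ay*s, az*s };
}
static Quat mul(Quat a, Quat b) //Hamilton product a*b
{
	return { a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z,
	         a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y,
	         a.w*b.y - a.x*b.z + a.y*b.w + a.z*b.x,
	         a.w*b.z + a.x*b.y - a.y*b.x + a.z*b.w };
}

//[Stereo] head = 0 -0.055 0.0128 -> radians about X (pitch), Y (yaw), Z (roll),
//applied as three successive single-axis corrections on top of the fused pose.
Quat applyHeadOffset(Quat fused, float rx, float ry, float rz)
{
	Quat q = mul(axisAngle(1,0,0,rx), fused);
	q = mul(axisAngle(0,1,0,ry), q);
	q = mul(axisAngle(0,0,1,rz), q);
	return q;
}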

@dylanmckay
Owner

Someone's bringing a USB logic analyzer into work tomorrow, gonna see if I can capture some PSVR packets. Hopefully I can get some raw bytes out of the waveforms, I am a newcomer to the gadget.

@m-7761
Author

m-7761 commented Dec 19, 2018

Since I got a replacement/repair from Sony, it does a thing in home-theater mode where if I look too far toward the edge of the screen (turn my head far) it recenters the screen. It's kind of annoying. It's either configurable, or maybe the firmware is upgraded/different. (EDITED: I think it's likely the box doesn't have memory, and if it's configurable, then it's a new feature that treats "0" as enabling the feature, and maybe there is a way to disable it if so.)

I recently looked into skewing the matrix, or anything I can figure out about calibrating for sense of depth. It really wasn't a fruitful conversation:

https://www.reddit.com/r/learnVRdev/comments/a4153r/skew_matrix_beneficial_or_detrimental_projection/
https://www.reddit.com/r/vrdev/comments/a418sy/skew_matrix_beneficial_or_detrimental_projection/
(I think the responder here is someone who maintains freeglut.)

My takeaway is that it's probably appropriate to skew if the pupil is off-center, but I have no notion of how much skew to use if so. I felt like I exhausted my possibilities. Even though looking at the ground looks like I'm crawling on it virtually, if I close one eye and then the other, the parallax difference in each eye looks like less than in real life, which suggests to me that if anything the ground should appear farther away instead of closer. So either it may be the "accommodation" effect, which cannot be solved, or it might just help if there were a body, with feet and legs for example, to help gauge distance... but while that might help, I don't see why a more-or-less flat plane serving as a floor would appear so much closer than it ought. I also get a sense that walking and jumping/falling is foreshortened, as explained in the link.
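
For readers wondering what "skewing the matrix" refers to here: an off-axis (asymmetric) frustum shifts the projection center without rotating the view. Below is a minimal C++ sketch; the column-major OpenGL-style layout and the pupilOffsetX parameter are illustrative assumptions, and how to derive that offset for PSVR is exactly the open question.

#include <cmath>
#include <cstring>

//Build a column-major, OpenGL-style perspective matrix whose frustum is shifted
//horizontally by pupilOffsetX (in near-plane units). The out[8]/out[9] terms are
//the "skew" that an off-center pupil would call for.
void offAxisProjection(float out[16], float fovY, float aspect,
                       float zNear, float zFar, float pupilOffsetX)
{
	float top = zNear*std::tan(fovY*0.5f), bottom = -top;
	float right = top*aspect - pupilOffsetX;
	float left = -top*aspect - pupilOffsetX;

	std::memset(out, 0, sizeof(float)*16);
	out[0]  = 2*zNear/(right-left);
	out[5]  = 2*zNear/(top-bottom);
	out[8]  = (right+left)/(right-left);
	out[9]  = (top+bottom)/(top-bottom);
	out[10] = -(zFar+zNear)/(zFar-zNear);
	out[11] = -1;
	out[14] = -2*zFar*zNear/(zFar-zNear);
}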

Something else explained in the link is that the PC products advertise factory tolerance measurements in their HID data. It seems to me that the PSVR would do this via USB also. So that's something to look out for in the USB data.

P.S. I'm trying to claw my way to working with cameras. Right now I'm trying to complete a feature to add proper XML namespace support to my COLLADA-DOM work on SourceForge. It's a total overhaul of the original COLLADA library developed at Sony, then by Khronos. XML is really complicated, which is why I think it didn't get very far as a standard. XML Schema in C++ is really something else. Each element has its own class generated from the schema that acts like a heterogeneous container. It's actually a futuristic approach to data, I think. Data is normally just part of code, but that is actually a big waste of time, because it doesn't really belong there. Data is "what"; code should be "how", ideally.

@m-7761
Author

m-7761 commented Mar 3, 2019

Someone's bringing a USB logic analyzer into work tomorrow, gonna see if I can capture some PSVR packets. Hopefully I can get some raw bytes out of the waveforms, I am a newcomer to the gadget.

How did this go? I'm trying to get into camera work right now. I feel like the PSVRTracker project is intentionally avoiding me. I don't know. https://github.com/HipsterSloth/PSVRTracker/issues

(I got tied up with that "COLLADA-DOM" feature work for about a month and half longer than I would have liked. I'm scrambling now to make good on things I had to set aside.)

P.S. I happened across this (https://www.reddit.com/r/PSVRHack/comments/ajkciz/trinus_psvr_update_nolo_fix_and_more/) a little while ago. There is a simple SDK for a product called Nolo that works with PSVR. It looks like one way to add a tracker and hand controllers to PSVR. The big VR companies don't have straightforward SDKs as near as I can tell. So it seems like it might make more sense to wait for OpenXR to sort that mess out.

EDITED: Nolo is probably Windows only: https://www.nolovr.com/hardwareDevice

@dylanmckay
Owner

Someone's bringing a USB logic analyzer into work tomorrow, gonna see if I can capture some PSVR packets. Hopefully I can get some raw bytes out of the waveforms, I am a newcomer to the gadget.

How did this go? I'm trying to get into camera work right now. I feel like the PSVRTracker project is intentionally avoiding me. I don't know. https://github.com/HipsterSloth/PSVRTracker/issues

Not well - it wasn't a USB logic analyzer but instead an oscilloscope that connects via USB.

@m-7761
Author

m-7761 commented Apr 1, 2019

Just following up on "I feel like the PSVRTracker..." (I think I already said so in private mail): we are actually collaborating a lot now, and they are very cool. Speaking of Nolo, I now have a Nolo set through them that arrived just yesterday, which happened to be my birthday. I'm confident we will figure out the tracking problem.

Lately I've been developing a UI framework (https://sourceforge.net/projects/widgets-95/) for various projects to use, including the COLLADA-DOM reference implementation of COLLADA I've involved myself with. The PSVRTracker project is more of a test bed, but I'm not sure; I think it may end up being its own thing, separate from PSMoveService. There's also a Steam variant here (https://github.com/HipsterSloth/PSMoveSteamVRBridge) that includes Move controllers and is more oriented to the controllers.

I got to look at the sensor parts of Trinus PSVR, which are very similar to PSVRToolbox. I thought they would share more. They made the same mistake I did of thinking there were two sensors instead of two samples per report. I haven't heard back from them since I informed them about this. More resources are important. Also, preliminary OpenXR materials were published last week, or the week prior.

I've kind of made a fool of myself driving the conversation, I'm afraid. I was so busy these past weeks that I didn't take the time to read the documents: https://community.khronos.org/t/feedback-thread-khronos-releases-openxr-0-90-provisional-specification/103756

Today there's a suggestion that we could develop an OpenXR "runtime" for the PSVR. It's somewhat different from traditional drivers, since a lot of VR components are just USB devices. I may very well end up doing that with my PSVRTracker colleague, if they get involved and if Sony doesn't mind or cannot intercede.
