Projection matrix model of the HTC Vive


Hello everyone,

 

I am implementing support for the HTC Vive in an application that was designed to render stereo images for a CAVE environment (i.e. anaglyph or with stereo glasses).

 

To render with the Vive in OpenGL, it's very easy to get the projection matrix for each eye. However, the app I am working on requires me to use different parameters, namely the FoV, the image ratio/size, and a parameter called convergence distance (or sometimes focal length), which is usually the distance from the viewer to the wall in a CAVE. This is a standard model for stereo rendering, and more details can be found here, for example:

 

http://paulbourke.net/stereographics/stereorender/

 

See the sections about the "Off-axis" model.
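
For context, this is roughly how I build the off-axis frustum on the CAVE side from those parameters (a sketch only; the names, the eye-separation handling and the glFrustum-style output are my own illustration, not taken verbatim from the article):

#include <cmath>

struct Frustum { double left, right, bottom, top, zNear, zFar; };

// Off-axis frustum for one eye, following the model described in the
// article above. 'eyeOffset' is -interocular/2 for the left eye and
// +interocular/2 for the right eye; 'convergence' is the distance to the
// zero-parallax plane (the CAVE wall / "focal length").
Frustum OffAxisFrustum(double fovYRadians, double aspect,
                       double eyeOffset, double convergence,
                       double zNear, double zFar)
{
    const double top       = zNear * std::tan(fovYRadians * 0.5);
    const double halfWidth = top * aspect;
    // Shift the frustum horizontally so both eyes agree at the convergence plane.
    const double shift     = eyeOffset * zNear / convergence;

    // The camera itself is translated by 'eyeOffset' along its right vector;
    // the values below then go into glFrustum(left, right, bottom, top, near, far).
    return { -halfWidth - shift, halfWidth - shift, -top, top, zNear, zFar };
}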

 

I have been trying to match the Vive to such a model, but so far without success.

Do you know which model is used to compute the projection matrix of the Vive?

In particular, if you look at the matrix returned by IVRSystem::GetProjectionMatrix, you can see that it is not symmetric:

 

left projection matrix:

[  0.756570876   0.000000000  -0.0577721484    0.000000000 ]
[  0.000000000   0.680800676  -0.00646502757   0.000000000 ]
[  0.000000000   0.000000000  -1.01010108     -0.101010107 ]
[  0.000000000   0.000000000  -1.00000000      0.000000000 ]

 

This can also be seen with IVRSystem::GetProjectionRaw, which, for the left eye, gives:

 

left      -1.39811385
right      1.24539280
bottom     1.45936251
top       -1.47835493
 

What puzzles me here is that, in absolute value, left is bigger than right, which indicates that the frustum for the left eye is oriented towards the left, and not towards the center of sight (to the right of the left eye), as one would expect from the model described by Paul Bourke.
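
Incidentally, the raw values do reproduce the non-zero entries of the matrix above, assuming a standard off-center projection (the formulas below are my assumption, not taken from the OpenVR docs):

#include <cstdio>

int main()
{
    // Half-tangents from IVRSystem::GetProjectionRaw for the left eye (values above).
    // Note: top is returned negative and bottom positive here, i.e. the sign
    // convention differs from the usual glFrustum one.
    const float l = -1.39811385f, r = 1.24539280f;
    const float t = -1.47835493f, b = 1.45936251f;

    const float m00 = 2.0f / (r - l);       // ~ 0.75657, row 0, column 0
    const float m02 = (r + l) / (r - l);    // ~ -0.05777, the horizontal off-center shift
    const float m11 = 2.0f / (b - t);       // ~ 0.68080
    const float m12 = (b + t) / (b - t);    // ~ -0.00647, the vertical off-center shift

    // The depth-related third and fourth rows come from the near/far planes
    // passed to GetProjectionMatrix and are not derived here.
    std::printf("%f %f %f %f\n", m00, m02, m11, m12);
    return 0;
}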

 

Any help or shared experience would be greatly appreciated.

Thanks

Edited by olive


I asked a colleague of mine who works in the VR group here at Valve and have attached his reply below. Note that it is probably better to ask these questions on the official forums.

 

 

GetProjectionRaw returns the half-tangents of the angles from the center. The lenses in the Vive are off-center, and the projection matrix reflects this. A larger left value here means that the angle from the center to the left edge of the frustum is larger than the angle from the center to the right edge. This means the center is shifted closer to the right edge, or more generally, the centers of the lenses are closer to the inside, i.e. toward each other. This provides more peripheral fov while still giving sufficient stereo overlap.
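
To make that concrete with the left-eye numbers quoted earlier (just a quick check, using my own conversion):

#include <cmath>
#include <cstdio>

int main()
{
    const double pi = 3.14159265358979323846;
    const double left = -1.39811385, right = 1.24539280;   // half-tangents, left eye

    // atan(1.398) ~ 54.4 degrees to the left edge vs atan(1.245) ~ 51.2 degrees
    // to the right edge: the frustum center is shifted toward the inside (nose).
    std::printf("left: %.1f deg, right: %.1f deg\n",
                std::atan(std::fabs(left)) * 180.0 / pi,
                std::atan(right) * 180.0 / pi);
    return 0;
}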

 

I'm not sure how CAVE systems use focal length in their projection matrix. I could see maybe using it for orienting the cameras so that they converge at that distance. We always render our cameras parallel, so they converge at infinity (i.e. they don't converge).

 

If you want an example of how to convert the raw projection values to fov and aspect, look here:

https://github.com/ValveSoftware/openvr/blob/master/unity_package/Assets/SteamVR/Scripts/SteamVR.cs#L283

 

This creates a symmetric frustum that is the maximum needed by either eye. It also calculates the sub-rectangle to crop each of those resulting images to, in order to fit them back to the original off-center projection matrices (used when submitting the rendered images to be displayed in the headset).
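
For anyone who doesn't want to dig through the C#, the idea is roughly the following (my own C++ transliteration of the approach, not the actual SteamVR.cs code; names and the exact crop convention are mine):

#include <algorithm>
#include <cmath>

struct RawProj  { float left, right, top, bottom; };   // half-tangents per eye
struct CropRect { float uMin, uMax, vMin, vMax; };      // in [0,1] texture coords

struct SymmetricSetup
{
    float tanHalfX, tanHalfY;   // symmetric half-tangents covering both eyes
    float fovYDegrees, aspect;
    CropRect crop[2];
};

SymmetricSetup BuildSymmetric(const RawProj eyes[2])
{
    SymmetricSetup s{};
    for (int i = 0; i < 2; ++i) {
        s.tanHalfX = std::max({s.tanHalfX, std::fabs(eyes[i].left), std::fabs(eyes[i].right)});
        s.tanHalfY = std::max({s.tanHalfY, std::fabs(eyes[i].top),  std::fabs(eyes[i].bottom)});
    }
    s.fovYDegrees = 2.0f * std::atan(s.tanHalfY) * 180.0f / 3.14159265f;
    s.aspect      = s.tanHalfX / s.tanHalfY;

    // Sub-rectangle of the symmetric image corresponding to each eye's
    // off-center frustum (assuming the raw values are signed as in the dump above).
    for (int i = 0; i < 2; ++i) {
        s.crop[i].uMin = 0.5f * (1.0f + eyes[i].left   / s.tanHalfX);
        s.crop[i].uMax = 0.5f * (1.0f + eyes[i].right  / s.tanHalfX);
        s.crop[i].vMin = 0.5f * (1.0f + eyes[i].top    / s.tanHalfY);
        s.crop[i].vMax = 0.5f * (1.0f + eyes[i].bottom / s.tanHalfY);
    }
    return s;
}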

Edited by Dirk Gregorius


Hi Dirk,

 

Thanks a lot!

 

This makes a lot of sense. In the meantime, I found a blog post detailing this for the Oculus Rift: https://rifty-business.blogspot.de/2013/10/understanding-matrix-transformations.html?m=1

It's not the Vive, but the principle is the same; I'm posting it here for reference, as the diagrams are useful for understanding.

 

As for CAVEs, they also use parallel view directions, but the frustums are asymmetric in the other direction. There is more frustum space between the eyes, and this is required to correctly project (with a projector) both images onto the same wall, so that they create the perceived parallax when observed from the tracked head's viewpoint.
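
To illustrate what I mean, this is the kind of frustum a CAVE wall needs (a sketch under my own naming; the wall is assumed planar, with its own coordinate frame):

#include <array>

struct WallFrustum { double left, right, bottom, top; };

// eyePos is the tracked eye position in the wall's coordinate frame:
// x right, y up, and eyePos[2] the distance from the eye to the wall plane (> 0).
// Rebuilt every frame for each eye; the view direction stays perpendicular to the wall.
WallFrustum FrustumForWall(const std::array<double, 3>& eyePos,
                           double wallHalfWidth, double wallHalfHeight,
                           double zNear)
{
    const double scale = zNear / eyePos[2];   // project the wall edges onto the near plane

    // For the left eye (shifted left) the frustum ends up skewed toward the right,
    // and vice versa: asymmetric "in the other direction" compared to the Vive's
    // lens-centered frustums.
    return {
        (-wallHalfWidth  - eyePos[0]) * scale,
        ( wallHalfWidth  - eyePos[0]) * scale,
        (-wallHalfHeight - eyePos[1]) * scale,
        ( wallHalfHeight - eyePos[1]) * scale
    };
}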

 

This means that the legacy code for CAVE stereo is not useful to me, but that's another story. :/

