
# View and Projection Matrices for VR Window using Head Tracking

Old topic!

Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.

10 replies to this topic

### #1Hyunkel  Members

Posted 13 March 2011 - 03:34 PM

To illustrate what I am trying to do:

Johnny Chung Lee (who uploaded this video) used a Wiimote for head tracking; I want to do the same thing with a Kinect and OpenNI.
However, my problem isn't related to the Kinect or head tracking...

My problem is that I'm having a hard time wrapping my head around setting up correct View and Projection matrices to look into a tunnel as shown in the YouTube video.
Here's an example:

Right now I just want to get this step working, so I'm using keyboard input to move my camera around.
It doesn't make much sense to do any head tracking before I've got this working.

My problem is figuring out the correct View & Projection matrices for an arbitrary camera position in front of the TV screen, so the camera actually looks "through the screen".
Here is what I tried:

//Position = Camera Position
View = Matrix.CreateLookAt(Position, new Vector3(Position.X, Position.Y, 0), Vector3.Up);

//Create a perspective off center
//The tunnel model is 16f wide and 9f high (my TV is 16:9, resolution is 1920x1080)
//near and far planes are 0.05f & 500f
float left   = nearPlane * (-16f + Position.X) / Position.Z;
float right  = nearPlane * ( 16f + Position.X) / Position.Z;
float bottom = nearPlane * ( -9f - Position.Y) / Position.Z;
float top    = nearPlane * (  9f - Position.Y) / Position.Z;
Projection = Matrix.CreatePerspectiveOffCenter(left, right, bottom, top, nearPlane, farPlane);

Oddly enough this seems to work on the Y axis, but not on the X axis.
If I move the camera up or down I get the desired effect.
However, if I move it left or right it just scrolls past the edge of the tunnel (the perspective does not change). :/
I think I'm missing something here, but I can't figure out what it is.

Any ideas / suggestions what I could try?

Cheers,
Hyu

### #2Jason Z  Members

Posted 13 March 2011 - 11:54 PM

Can you post some screenshots of what you are seeing? It is hard to visualize what the problem is...

Jason Zink :: DirectX MVP

Direct3D 11 engine on CodePlex: Hieroglyph 3

Games: Lunar Rift

### #3Hyunkel  Members

Posted 14 March 2011 - 08:11 AM

This should help visualize the problem:

The center of the virtual TV screen is at (0, 0, 0) and it measures 16 by 9 units.
The camera coordinate is displayed at the top left of each screenshot:

Centered Camera:

Camera offset to bottom: (This is the correct effect, it properly changes the projection)

Camera offset to left: (This behaves incorrectly. It does not seem to change the projection, so the camera no longer looks through the virtual screen)

Cheers,
Hyu

### #4Hyunkel  Members

Posted 14 March 2011 - 08:32 AM

I actually figured it out...
The near-plane bounds should be calculated with:

float left   = nearPlane * (-16f - Position.X) / Position.Z;
float right  = nearPlane * ( 16f - Position.X) / Position.Z;
float bottom = nearPlane * ( -9f - Position.Y) / Position.Z;
float top    = nearPlane * (  9f - Position.Y) / Position.Z;

and not

float left   = nearPlane * (-16f + Position.X) / Position.Z;
float right  = nearPlane * ( 16f + Position.X) / Position.Z;
float bottom = nearPlane * ( -9f - Position.Y) / Position.Z;
float top    = nearPlane * (  9f - Position.Y) / Position.Z;

I got confused with the different coordinates.
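A quick, language-agnostic sketch (in Python, with a hypothetical helper name) of why the sign matters, using the thread's ±16 × ±9 screen bounds: with `-Position.X`, moving the camera to the right skews both near-plane bounds to the left, so the frustum keeps passing through the fixed screen rectangle, and it is symmetric only when the camera is centred.

```python
def off_center_bounds(pos_x, pos_y, pos_z,
                      near=0.05, half_w=16.0, half_h=9.0):
    """Near-plane frustum bounds for a camera at (pos_x, pos_y, pos_z)
    looking straight at a screen rectangle centred at the origin
    spanning x in [-half_w, half_w], y in [-half_h, half_h]."""
    left   = near * (-half_w - pos_x) / pos_z
    right  = near * ( half_w - pos_x) / pos_z
    bottom = near * (-half_h - pos_y) / pos_z
    top    = near * ( half_h - pos_y) / pos_z
    return left, right, bottom, top

# Centred camera: the frustum is symmetric (left == -right, bottom == -top).
# Camera moved right (x = +8): both left and right shrink, skewing the
# frustum back toward the fixed screen, which is the head-tracking effect.
```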

### #5stownend  Members

Posted 24 January 2012 - 10:45 AM

Hi Hyu,

I have been struggling with the issue of head tracking and the associated projection matrix for a few days now. Your posting has really helped. I am using Microsoft's own Kinect SDK but, like you, am using the keyboard to simulate head movement until I can get the viewing correct.

Here are some basic facts about my box/room:-
• I am using units of 1 m, as that's what the Kinect reports
• My box/room is 0.38 × 0.3, as that is my screen size in metres

I am having difficulty deciding on a few things:-
• Where to place the centre of the open end of the box/room in the world. I have tried it at (0, 0, 0)
• Assuming the above, where to place the camera - I have it at 0.34
This is what I see when the game first loads:-

As you can see it does not fill the screen. If I then move the camera to the right, the perspective seems to change correctly and the open face of the room stays put:-

Moving the camera in Z merely alters the perceived depth of the room, it does not change its position:-

If I move the model so that it fills more of the screen (eg. Z=0.08) then move the camera in any direction, the image perspective is ok but the open face of the room has moved to the left as it appears to be rotating the room around world (0,0,0):-

Have you any idea where I am going wrong? It feels so close.
• I would like to fill the screen with the open face of the room
• Once in position, I want camera movements to affect the perspective of the view but leave the open face in place
• I would expect Z movement of the camera would affect perspective but would not zoom the image. Does that make sense and how did you achieve it?
• Would your camera position directly correlate with the head position as reported by Kinect or would you be doing some manipulation to keep the distance from head to screen center constant (some kind of orbital camera movement)?
Here is my draw code in case I have something wrong there:-

matView = Matrix.CreateLookAt(cameraPosition, new Vector3(cameraPosition.X, cameraPosition.Y, 0.0f), Vector3.Up);

//Create a perspective off center
//near and far planes are 0.05f & 500f
float nearPlane = 0.05f;
float farPlane = 500f;
float left   = nearPlane * (-.38f - cameraPosition.X) / cameraPosition.Z;
float right  = nearPlane * ( .38f - cameraPosition.X) / cameraPosition.Z;
float bottom = nearPlane * ( -.3f - cameraPosition.Y) / cameraPosition.Z;
float top    = nearPlane * (  .3f - cameraPosition.Y) / cameraPosition.Z;

matProjection = Matrix.CreatePerspectiveOffCenter(left, right, bottom, top, nearPlane, farPlane);

// Draw the model. A model can have multiple meshes, so loop.
foreach (ModelMesh mesh in myModel.Meshes)
{
    // This is where the mesh orientation is set, as well as our camera and projection.
    foreach (BasicEffect effect in mesh.Effects)
    {
        effect.EnableDefaultLighting();
        effect.World = transforms[mesh.ParentBone.Index] * Matrix.CreateRotationY(0.0f)
            * Matrix.CreateTranslation(modelPosition);
        effect.View = matView;
        effect.Projection = matProjection;
    }
    // Draw the mesh, using the effects set above.
    mesh.Draw();
}


I would be very interested to see the results that you achieved or the source code. I will gladly share mine once I am done.

Regards,
Steve

### #6kinect_dev  Members

Posted 30 January 2012 - 01:57 AM

I've been working through the head-tracking projection problem recently as well. One thing I noticed is that some of the online samples use DirectX's LookAtLH and PerspectiveOffCenterLH. These are left-handed versions of the common way to create a View and Projection matrix. When using XNA, however, there are no left-handed methods available; CreateLookAt and CreatePerspectiveOffCenter are right-handed versions of those methods. I'm not totally sure how to convert from one to the other; it's possible that multiplying the z-coordinate by -1 might do the trick.

### #7Hyunkel  Members

Posted 30 January 2012 - 06:42 AM

Unfortunately I can't seem to find the code I wrote back when I posted this.

However, I seem to recall most of the problems you're having.

> As you can see it does not fill the screen. If I then move the camera to the right, the perspective seems to change correctly and the open face of the room stays put:-

If you want the box to fill the screen you have to align the projection matrix with the size of your model.
For my tunnel I used a model that was 16.0f by 9.0f (my TV is 16:9).
So in order for it to fill the screen I had to do a few things:

- Make sure the model origin (0, 0, 0) of the "tunnel in VR world" model is at the center of the TV screen
- Position the "tunnel in VR world" model at (0, 0, 0)
- Make sure the left, right, bottom and top planes match the size of the model

If your model is .38f by .3f then your calculations should be correct.

> Moving the camera in Z merely alters the perceived depth of the room, it does not change its position:-

If your goal is to create some sort of VR window, then this behavior is correct. I know that it looks weird, especially if all you have on display is an empty box. If you add more objects to the scene, or control camera movement with the Kinect, you will notice that the effect is actually correct.

> If I move the model so that it fills more of the screen (eg. Z=0.08) then move the camera in any direction, the image perspective is ok but the open face of the room has moved to the left as it appears to be rotating the room around world (0,0,0):-

You should not move the model at all! In order to get the desired behavior you have to line up your projection matrix with the size of your model. The model always stays at (0, 0, 0); the only thing that changes is your camera position.

> Would your camera position directly correlate with the head position as reported by Kinect or would you be doing some manipulation to keep the distance from head to screen center constant (some kind of orbital camera movement)?

Well, here's where things get tricky. The camera position does directly correlate with the head position, yes (no orbital camera movement or anything like that). You will get decent results doing this, but you will notice that something is off. This is because the Kinect camera cannot be positioned at the center of your screen. You either have to put it below or above the screen, which causes the coordinate systems to no longer match up. You can fix this by applying an offset to the camera position equal to the distance between the Kinect camera and the center of your screen.

However, this is still not 100% correct (but very close!). The last issue is that your Kinect camera is most likely angled upwards or downwards. To get perfect results you have to take that into account as well.
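A minimal sketch of those two corrections, in Python for illustration. The function name and the sign conventions (positive tilt = sensor pitched upward, offset = sensor position relative to the screen centre) are assumptions; the actual axes depend on your SDK and setup. The idea is: undo the sensor's pitch first, then translate into screen-centred coordinates.

```python
import math

def kinect_to_screen_space(head, sensor_offset, tilt_deg):
    """Map a head position from sensor space into screen-centred space.

    head          -- (x, y, z) reported by the sensor, in metres
    sensor_offset -- sensor position relative to the screen centre, in metres
    tilt_deg      -- sensor pitch; positive = tilted upward (assumed convention)
    """
    x, y, z = head
    t = math.radians(tilt_deg)
    # Undo the sensor's pitch by rotating about the x axis.
    y_level = y * math.cos(t) - z * math.sin(t)
    z_level = y * math.sin(t) + z * math.cos(t)
    # Translate into screen-centred coordinates.
    ox, oy, oz = sensor_offset
    return (x + ox, y_level + oy, z_level + oz)
```

For example, with the sensor mounted 0.3 m below the screen centre and no tilt, a head 0.3 m above the sensor axis ends up level with the screen centre, which is what the off-center projection expects.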

Here's a video I took while I was working on it:

As you can see the effect is not entirely correct, but that is because I couldn't put the camera in front of my head. When I did, the Kinect would no longer track my head properly, so I had to put it in front of my chest.

If you can't seem to get it to work properly I can have a look at your VS project if you want me to.

### #8kinect_dev  Members

Posted 31 January 2012 - 11:45 AM

These are awesome points. Thank you for posting these. I am not the original poster but I'm trying to do a similar thing and your answers have been very helpful.

### #9stownend  Members

Posted 31 January 2012 - 07:12 PM

Hi Hyu,

Success! I had already been working through most of the issues that you mentioned (e.g. the Kinect not being at screen centre). I actually place my Kinect across the room, on my right-hand side. That allows it to track me right up to the screen. I obviously had to swap around the x, y, z values and incorporate the various offsets, but the end result was good.

Regarding my main issue of the model appearing approximately half as big as it should, I originally got around this by scaling my model by a factor of 2. This seemed ok until I got to the fine-tuning stage. Then I noticed that I had to move my head twice as far in x or y in order to get a specific view on the screen. I noticed this when trying to look straight down the edge of the room - I should have had my head in line with the edge of the screen for this, but had to have it half as far away again. This was clearly related to the scaling that I had done to the model.

After a lot of head scratching and scribbled calculations I was struggling until I looked very closely at the following document (http://www.cs.ubc.ca/labs/imager/PROCAMS2011/0008.pdf). In section 5.2 they present a formula for the projection matrix that is very similar to the one that you presented above. The key difference is that they use half of the screen/model width and half of the screen/model height. When I modified your projection matrix accordingly, I found that I no longer needed to scale my model by 2 and the viewing angles were now correct when trying to look down the edge of the room/screen.

So, using my screen of 0.38M wide by 0.3M high with a model of the same size, my projection matrix becomes:-
left   = nearPlane * ((-.38f / 2) - cameraPosition.X) / cameraPosition.Z;
right  = nearPlane * (( .38f / 2) - cameraPosition.X) / cameraPosition.Z;
bottom = nearPlane * (( -.3f / 2) - cameraPosition.Y) / cameraPosition.Z;
top    = nearPlane * ((  .3f / 2) - cameraPosition.Y) / cameraPosition.Z;
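A quick sanity check of the half-extent version (a language-agnostic sketch in Python; the helper name is hypothetical): with half extents, putting the camera exactly level with the screen's right edge drives the `right` bound to zero, meaning you are looking straight down that edge, which is exactly the fine-tuning test described above.

```python
def off_center_bounds(cam_x, cam_y, cam_z,
                      near=0.05, width=0.38, height=0.30):
    """Near-plane frustum bounds for a screen-sized window centred at the
    origin, using HALF the screen extents as in the PROCAMS formula."""
    half_w, half_h = width / 2.0, height / 2.0
    left   = near * (-half_w - cam_x) / cam_z
    right  = near * ( half_w - cam_x) / cam_z
    bottom = near * (-half_h - cam_y) / cam_z
    top    = near * ( half_h - cam_y) / cam_z
    return left, right, bottom, top

# Camera level with the right edge of the 0.38 m screen (x = 0.19):
# the right bound collapses to 0, so the view boundary runs along the edge.
```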


Thanks for all your help. All I need to do now is do some decent models with textures and my demo will be complete. I will try to remember to post a video when it is finished.

Steve

### #10Hyunkel  Members

Posted 01 February 2012 - 04:44 AM

Glad to hear you got it working!

Positioning the kinect to your side is a pretty cool idea when doing head tracking.

### #11rakeshkr590  Members

Posted 09 July 2012 - 07:54 AM

Hi,
I am having the same projection problems. I am building an aircraft simulator that needs a three-monitor setup using projectors, so I have to render my scene with three different projections to simulate the view of the airport. I haven't quite been able to do it and have been struggling with it for the past two days. Could you please help me out with this?
Thank you. My problem, and how I want my projections to work, is explained in the picture below.

Edited by rakeshkr590, 09 July 2012 - 07:55 AM.
