Projectors and Perspective Matrices
Members - Reputation: 100
Posted 13 March 2012 - 04:17 PM
I've been working on a game over the last week which uses a projector to project little cars onto a table. Then it uses the Kinect to sense stuff on the table, which it then turns into terrain for the game. The effect is really simple, and pretty cool! Here are some pics:
However, there is a visual problem with the projection perspective maths which I would really love to solve. If I understand correctly, the maths should be pretty simple for someone who knows how. :-)
In the pic below you can see that as the height of a virtual object in the scene increases (the vertex positions are labelled y), it becomes more "compressed" due to the shrinking frustum of the projector.
What I would like to do is calculate a perspective matrix that takes account of this and ensures that vertices with a higher "y" value are projected in the right place given the frustum of the projector. Unfortunately, I lack the maths skills needed to ask properly, so I hope my picture will do instead!
Here is an annotated pic which shows the problem in the real world:
Any help you can give me on how to construct that perspective matrix would be very much appreciated!
Moderators - Reputation: 10550
Posted 17 March 2012 - 07:37 PM
Really cool project, btw!
Crossbones+ - Reputation: 12397
Posted 17 March 2012 - 08:17 PM
Crossbones+ - Reputation: 3015
Posted 19 March 2012 - 02:41 AM
For multiple flat surfaces, marking the edges might work, but the white paper sheet in the background... I'm fairly scared. And that requires all the marks to be visible.
Members - Reputation: 100
Posted 19 March 2012 - 02:02 PM
Yup! The problem is that light from a projector spreads out from the lens, widening until it reaches the projection surface. On a big flat surface like a wall this is fine. However, when something is raised up on the table the light doesn't have a chance to spread out fully. This means that the wrong part of the image is shown on the raised bit.
I am not sure I understand what the problem is. Presumably your virtual scene contains the same ramps as you have in the real scene. You render from the point of view of the projector and then project. What part of this doesn't work?
I'm not so worried about the arbitrary surface stuff, because, after a bit of processing, the Kinect sensor gives me (more or less) the topology of the table as a mesh.
The reason I thought of using a perspective matrix was that, in a vertex shader, the vertices of this mesh can be transformed into screenspace (out.Pos = in.Pos * W * V * P) based on where they intersect the frustum of the projector.
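To make that concrete, here's a tiny sketch of that transform in Python (row-vector convention, matching the out.Pos = in.Pos * W * V * P line above; W and V are taken as identity, and the FOV, aspect, near and far values are placeholders, not a real projector's calibration):

```python
import math

def perspective_fov_lh(fov_y, aspect, z_near, z_far):
    """D3D-style left-handed perspective matrix for row vectors
    (applied as pos * P). All parameter values are placeholders."""
    h = 1.0 / math.tan(fov_y / 2.0)  # vertical scale: cot(fov/2)
    w = h / aspect                   # horizontal scale
    q = z_far / (z_far - z_near)     # depth remap to [0, 1]
    return [
        [w, 0, 0, 0],
        [0, h, 0, 0],
        [0, 0, q, 1],
        [0, 0, -q * z_near, 0],
    ]

def vec_mat(v, m):
    """Row vector (x, y, z, w) times a 4x4 matrix."""
    return [sum(v[i] * m[i][j] for i in range(4)) for j in range(4)]

def project(pos, p):
    """Transform a 3D point to clip space, then perspective-divide to NDC."""
    x, y, z, w = vec_mat(pos + [1.0], p)
    return (x / w, y / w, z / w)

# A vertex straight ahead on the lens axis lands at NDC (0, 0),
# and its depth falls inside the [0, 1] range.
p = perspective_fov_lh(math.radians(45.0), 4.0 / 3.0, 0.1, 100.0)
print(project([0.0, 0.0, 10.0], p))
```

The divide by w (which equals view-space depth here) is what makes points farther from the lens shrink towards the centre, i.e. the spreading-light behaviour described earlier in the thread.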
The trick would just be modifying P (the perspective matrix) to apply a scaling based on the 'y' value of each vertex (probably with some other linear coefficient + offset) to move it in x,z (assuming y is looking into the screen).
So it's not designed to be perfect, but at least it should allow me to manually correct for the throw of the projector, which is a fixed source of error. :-) I guess I just need to get deep into the anatomy of a perspective matrix to see what to change.
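For reference, the anatomy to dig into is probably the off-centre form of the perspective matrix, since most projectors have an offset lens and their frustum isn't symmetric about the axis. A sketch, assuming the D3D-style row-vector convention again (the l/r/b/t/near/far values below are made up, not measured from any projector):

```python
def perspective_off_center_lh(l, r, b, t, zn, zf):
    """D3D-style off-centre left-handed perspective matrix (row vectors,
    pos * P). l/r/b/t are the frustum edges on the near plane;
    the values used below are placeholders, not a real calibration."""
    return [
        [2 * zn / (r - l), 0, 0, 0],
        [0, 2 * zn / (t - b), 0, 0],
        # The off-centre terms live in the z row: after the divide by
        # w (= view-space z) they become a constant skew of the frustum.
        [(l + r) / (l - r), (t + b) / (b - t), zf / (zf - zn), 1],
        [0, 0, zn * zf / (zn - zf), 0],
    ]

def project(pos, m):
    """pos * m, then perspective divide to NDC."""
    v = [sum(pos[i] * m[i][j] for i in range(4)) for j in range(4)]
    return (v[0] / v[3], v[1] / v[3], v[2] / v[3])

# A frustum entirely above the lens axis (b = 0), like a table-top
# projector with an upward throw: a point on the axis maps to the
# bottom edge of the image (NDC y = -1), not the centre.
p = perspective_off_center_lh(-0.1, 0.1, 0.0, 0.15, 0.1, 100.0)
print(project([0.0, 0.0, 10.0, 1.0], p))
```

The symmetric matrix is just the special case l = -r, b = -t, so tweaking those four edges is the "linear coefficient + offset" knob mentioned above.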
This is because the surface you're projecting to isn't flat, and thus you need to account for the varying height at each different pixel that you render.
Good point. I was hoping that by using a vertex shader rather than a pixel shader, the rasteriser would correct for that when it interpolates the vertices before sending them to the pixel shader? That would save me trying to write a pixel shader which "compressed" or "expanded" the UV coordinates based on height (and possibly encountering some awkward z-ordering issues?)
I think that because the Kinect gives me the surface topology, that's half the battle, and I shouldn't need any markers, since the transform between what the Kinect sees and how the game renders it is affine (i.e. straight lines remain straight, just in a different location before and after the transform - for those who don't want to wiki ;-) ). I like the idea of computing the matrix using the markers though. Because the projector params don't change and the surface topology is known, it would only ever need to be done once per projector (assuming nobody fiddles with the zoom!). Hmm... lol, it would be looking to give me a 4x4 non-affine homography matrix or something like that?
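In case I go down the marker route, a sketch of how that matrix could be estimated: four marker correspondences (Kinect coordinates vs projector pixels - the point values below are made up for illustration) pin down a 3x3 planar homography via a plain linear solve:

```python
def solve(a, b):
    """Gaussian elimination with partial pivoting for a small system."""
    n = len(a)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

def homography(src, dst):
    """3x3 planar homography from 4 point pairs (DLT, bottom-right
    entry fixed to 1). src/dst are lists of (x, y) tuples."""
    a, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        a.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        a.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = solve(a, b) + [1.0]
    return [h[0:3], h[3:6], h[6:9]]

def warp(h, x, y):
    """Apply the homography, including the projective divide."""
    w = h[2][0] * x + h[2][1] * y + h[2][2]
    return ((h[0][0] * x + h[0][1] * y + h[0][2]) / w,
            (h[1][0] * x + h[1][1] * y + h[1][2]) / w)

# Hypothetical marker positions: Kinect-space corners -> projector pixels.
src = [(0, 0), (1, 0), (1, 1), (0, 1)]
dst = [(10, 20), (300, 25), (310, 240), (5, 230)]
H = homography(src, dst)
```

This only handles a single flat plane (the bare table top); the raised geometry would still come from the Kinect mesh plus the projector's perspective matrix, as discussed above.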
Maybe using OpenAR to recognize a special mark would allow obtaining a transform matrix, but as MJP noticed, doing this for arbitrary surfaces is going to be quite a difficult problem.