Projectors and Perspective Matrices

Hi all,

I've been working on a game over the last week which uses a projector to project little cars onto a table. Then it uses the Kinect to sense stuff on the table, which it then turns into terrain for the game. The effect is really simple, and pretty cool! Here are some pics:

http://goo.gl/nsy1w

However, there is a visual problem with the projection perspective maths which I would really love to solve. If I understand correctly, the maths should be pretty simple for someone who knows how. :-)

In the pic below you can see that as the height of a virtual object in the scene increases (the vertex positions are written as y), it becomes more "compressed" due to the shrinking frustum of the projector.

[Image: ry1mZl.jpg]

What I would like to do is calculate a perspective matrix that takes account of this and ensures that vertices with a higher "y" value are projected in the right place given the frustum of the projector. Unfortunately, I lack the maths skills needed to ask properly, so I hope my picture will do instead!

Here is an annotated pic which shows the problem in the real world:

[Image: heiaql.jpg]



Any help you can give me on how to construct that perspective matrix would be very much appreciated!

Cheers!


John
I have no idea but I have to comment on how awesome that is!
Haha cheers! :)
Any ideas folks?

John
I don't think this is something you can fix just by choosing a proper perspective projection. This is because the surface you're projecting onto isn't flat, and thus you need to account for the varying height at each different pixel that you render. I don't think it would be too hard to do with a fullscreen pass that calculates new UVs with an approach similar to parallax mapping, but I'll have to think about it some more to come up with an exact solution.

Really cool project, btw!
I am not sure I understand what the problem is. Presumably your virtual scene contains the same ramps as you have in the real scene. You render from the point of view of the projector and then project. What part of this doesn't work?
Maybe using OpenAR to recognize a special marker would allow you to obtain a transform matrix, but as MJP noticed, doing this for arbitrary surfaces is going to be quite a difficult problem.

For multiple flat surfaces, marking the edges might work, but the white paper sheet in the background... I'm fairly scared. And that requires all the marks to be visible.


Thanks for the feedback. :-)

[quote]
I am not sure I understand what the problem is. Presumably your virtual scene contains the same ramps as you have in the real scene. You render from the point of view of the projector and then project. What part of this doesn't work?
[/quote]
Yup! The problem is that light from a projector spreads out from the lens, widening until it reaches the projection surface. On a big flat surface like a wall this is fine. :) However, when something is raised up on the table the light doesn't have a chance to spread out fully. This means that the wrong part of the image is shown on the raised bit.
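To put a rough number on the effect (my own back-of-envelope model, assuming the projector points straight down at the table from height $H$ - mine is actually at an angle, but the idea is the same): if a pixel would land on the bare table at horizontal distance $d$ from the point directly under the lens, then a surface raised by $h$ intercepts the ray early and the pixel lands at

$$d' = d\,\frac{H - h}{H}$$

i.e. everything on the raised bit gets squashed towards the lens axis by a factor of $(H - h)/H$, which is exactly the compression in the photo.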

I'm not so worried about the arbitrary surface stuff, because, after a bit of processing, the Kinect sensor gives me (more or less) the topology of the table as a mesh.

The reason I thought of using a perspective matrix was that, in a vertex shader, the vertices of this mesh can then be transformed into screen space (out.Pos = in.Pos * W * V * P) based on where they intersect the frustum of the projector.
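Something like this minimal sketch is what I have in mind (the cbuffer layout, struct names and matrix names are just my guesses at a typical D3D10/11-style setup, not working code from the game):

[code]
// Sketch: transform each Kinect mesh vertex by the *projector's* view and
// projection matrices, so the rasteriser puts it where the projector's
// frustum actually hits it.
cbuffer PerFrame
{
    float4x4 World;          // Kinect space -> world space
    float4x4 ProjectorView;  // world space -> projector eye space
    float4x4 ProjectorProj;  // the projector's real perspective (fov/throw)
};

struct VSIn  { float3 Pos : POSITION;    float2 Tex : TEXCOORD0; };
struct VSOut { float4 Pos : SV_POSITION; float2 Tex : TEXCOORD0; };

VSOut VS(VSIn input)
{
    VSOut output;
    float4 worldPos = mul(float4(input.Pos, 1.0f), World);
    float4 eyePos   = mul(worldPos, ProjectorView);
    output.Pos      = mul(eyePos, ProjectorProj); // depth ends up in .w;
                                                  // the divide happens after
    output.Tex      = input.Tex;
    return output;
}
[/code]

The point being that ProjectorView and ProjectorProj describe the physical projector rather than a notional game camera.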

The trick would just be modifying P (the perspective matrix) to apply a scaling based on the 'y' value of each vertex (probably with some other linear coefficient + offset) to move it in x,z (assuming y is looking into the screen).

So it's not designed to be perfect, but at least it would allow me to manually correct for the throw of the projector, which is a fixed source of error. :-) I guess I just need to get deep into the anatomy of a perspective matrix to see what to change.

[Image: mg8zql.jpg]
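For reference (copied from the D3D docs as best I can, so treat with suspicion), the standard left-handed perspective matrix for a row-vector convention like the one above is

$$
P = \begin{pmatrix}
\cot(\theta_x/2) & 0 & 0 & 0 \\
0 & \cot(\theta_y/2) & 0 & 0 \\
0 & 0 & \frac{z_f}{z_f - z_n} & 1 \\
0 & 0 & \frac{-z_n z_f}{z_f - z_n} & 0
\end{pmatrix}
$$

where $\theta_x, \theta_y$ are the full horizontal/vertical throw angles and $z_n, z_f$ are the near/far planes. The 1 in the third row copies view-space depth into $w$, and the hardware's divide by $w$ is exactly the "scale based on distance along the projector axis" I described above - so if the view matrix matches the projector's real position and orientation, and $\theta_x, \theta_y$ match its real throw, the foreshortening should come out right without any extra hacking. One wrinkle: most projectors throw the image upwards from the lens (a vertical lens offset), so the off-centre variant (e.g. D3DXMatrixPerspectiveOffCenterLH) with an asymmetric top/bottom is probably the one to start from.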




[quote]
This is because the surface you're projecting onto isn't flat, and thus you need to account for the varying height at each different pixel that you render.
[/quote]


Good point. I was hoping that by using a vertex shader rather than a pixel shader, the rasteriser would correct for that when it interpolates the vertices before sending them to the pixel shader? That would save me trying to write a pixel shader which "compressed" or "expanded" the UV coordinates based on height (and possibly encountering some awkward z-ordering issues?)




[font="helvetica, arial, verdana, tahoma, sans-serif"][size="2"][color="#282828"]Maybe using OpenAR to recognize a special mark would possibly allow to obtain a transform matrix but as MJP noticed, doing this for arbitrary surfaces is going to be a quite a difficult problem.[/font]



[/quote]


I think that because the Kinect gives me the surface topology, that's half the battle, and I shouldn't need any markers, since the transform between what the Kinect sees and how the game renders it is affine (i.e. straight lines remain straight, just in a different location before and after the transform - for those who don't want to wiki ;-) ). I like the idea of computing the matrix using the markers, though. Because the projector params don't change and the surface topology is known, it would only ever need to be done once per projector (assuming nobody fiddles with the zoom!). Hmm... lol, I guess it would be looking to give me a 4x4 non-affine homography matrix, or something like that?
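For my own notes, the textbook pinhole model I'd be fitting (standard computer-vision symbols, nothing from my actual code) is

$$
s\begin{pmatrix} u \\ v \\ 1 \end{pmatrix}
= K\,[\,R \mid t\,]\begin{pmatrix} X \\ Y \\ Z \\ 1 \end{pmatrix},
\qquad
K = \begin{pmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{pmatrix}
$$

The divide by $s$ is the non-affine part (straight lines stay straight, but parallels don't). If I've got the counting right, six or more known table-point to projector-pixel correspondences would be enough to solve for the whole projection matrix with a DLT, and as above it would only need doing once per projector.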

