
Projectors and Perspective Matrices


Old topic!
Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.

7 replies to this topic

#1 _JohnHardy   Members   -  Reputation: 100


Posted 13 March 2012 - 04:17 PM

Hi all,

I've been working on a game over the last week which uses a projector to project little cars onto a table. Then it uses the Kinect to sense stuff on the table, which it then turns into terrain for the game. The effect is really simple, and pretty cool! Here are some pics:

http://goo.gl/nsy1w

However, there is a visual problem with the projection perspective maths which I would really love to solve. If I understand correctly, the maths should be pretty simple for someone who knows how. :-)

In the pic below you can see that as the height of a virtual object in the scene increases (the vertex positions are labelled y), it becomes more "compressed" due to the shrinking frustum of the projector.

[image]

What I would like to do is calculate a perspective matrix that takes account of this and ensures that vertices with a higher "y" value are projected in the right place given the frustum of the projector. Unfortunately, I lack the maths skills needed to ask properly, so I hope my picture will do instead!

Here is an annotated pic which shows the problem in the real world:

[image]



Any help you can give me on how to construct that perspective matrix would be very much appreciated!

Cheers!


John


#2 Purebe   Members   -  Reputation: 100


Posted 13 March 2012 - 05:35 PM

I have no idea but I have to comment on how awesome that is!
[image]

#3 _JohnHardy   Members   -  Reputation: 100


Posted 13 March 2012 - 06:32 PM

Haha cheers! :)

#4 _JohnHardy   Members   -  Reputation: 100


Posted 17 March 2012 - 06:58 PM

Any ideas folks?

John

#5 MJP   Moderators   -  Reputation: 11380


Posted 17 March 2012 - 07:37 PM

I don't think this is something you can fix just by choosing a proper perspective projection. This is because the surface you're projecting to isn't flat, and thus you need to account for the varying height at each different pixel that you render. I don't think it would be too hard to do with a fullscreen pass that calculates new UVs with an approach similar to parallax mapping, but I'll have to think about it some more to come up with an exact solution.

Really cool project, btw!

#6 Álvaro   Crossbones+   -  Reputation: 13326


Posted 17 March 2012 - 08:17 PM

I am not sure I understand what the problem is. Presumably your virtual scene contains the same ramps as you have in the real scene. You render from the point of view of the projector and then project. What part of this doesn't work?

#7 Krohm   Crossbones+   -  Reputation: 3119


Posted 19 March 2012 - 02:41 AM

Maybe using OpenAR to recognize a special marker would allow you to obtain a transform matrix, but as MJP noted, doing this for arbitrary surfaces is going to be quite a difficult problem.

For multiple flat surfaces, marking the edges might work, but the white paper sheet in the background... I'm fairly scared. And that requires all the marks to be visible.

#8 _JohnHardy   Members   -  Reputation: 100


Posted 19 March 2012 - 02:02 PM

Thanks for the feedback. :-)

I am not sure I understand what the problem is. Presumably your virtual scene contains the same ramps as you have in the real scene. You render from the point of view of the projector and then project. What part of this doesn't work?


Yup! The problem is that light from a projector spreads out from the lens, widening until it reaches the projection surface. On a big flat surface like a wall this is fine. :) However, when something is raised up on the table the light doesn't have a chance to spread out fully. This means that the wrong part of the image is shown on the raised bit.
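To put a number on the effect, here's a tiny similar-triangles sketch (Python; the projector height and coordinates are made-up example values, not measurements from my setup):

```python
def landing_x(aim_x, h, proj_h):
    # A ray from a projector proj_h metres above the table, aimed at table
    # coordinate aim_x (horizontal distance from the point directly below
    # the lens), gets intercepted early by a surface raised h metres above
    # the table. By similar triangles it lands at a smaller horizontal offset.
    return aim_x * (proj_h - h) / proj_h

# e.g. aiming at x = 0.5 m with the projector 2 m above the table:
# on the flat table (h = 0) the pixel lands at 0.5 m,
# but on a 0.5 m-high object it lands at only 0.375 m.
```

So the error grows linearly with the height of the object, which is why only the raised bits look wrong.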

I'm not so worried about the arbitrary surface stuff, because, after a bit of processing, the Kinect sensor gives me (more or less) the topology of the table as a mesh.

The reason I thought of using a perspective matrix was that, in a vertex shader, the vertices of this mesh can be transformed into screenspace (out.Pos = in.Pos * W * V * P) based on where they intersect the frustum of the projector.

The trick would just be modifying P (the perspective matrix) to apply a scaling based on the 'y' value of each vertex (probably with some other linear coefficient + offset) to move it in x,z (assuming y is looking into the screen).

So it's not designed to be perfect, but it would at least allow me to manually correct for the throw of the projector, which is a fixed source of error. :-) I guess I just need to get deep into the anatomy of a perspective matrix to see what to change.
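For reference, the anatomy isn't too bad. Here's a minimal sketch (Python; column-vector OpenGL-style convention rather than the row-vector D3D one in the shader snippet above, and the fov/near/far numbers are assumed example values): the -1 in the bottom row copies view-space depth into w, and the divide by w is exactly the distance-dependent scaling the projector's spreading light cone produces.

```python
import math

def perspective(fov_y_deg, aspect, near, far):
    # Standard right-handed perspective projection matrix.
    f = 1.0 / math.tan(math.radians(fov_y_deg) / 2.0)
    return [[f / aspect, 0.0, 0.0, 0.0],
            [0.0, f, 0.0, 0.0],
            [0.0, 0.0, (far + near) / (near - far), (2.0 * far * near) / (near - far)],
            [0.0, 0.0, -1.0, 0.0]]

def project(m, point):
    # Transform a point (x, y, z, 1), then do the perspective divide by w.
    x, y, z = point
    v = [m[r][0] * x + m[r][1] * y + m[r][2] * z + m[r][3] for r in range(4)]
    return [v[0] / v[3], v[1] / v[3], v[2] / v[3]]
```

If the virtual camera sits at the projector's lens with fov matched to its throw angle, that divide does the per-vertex correction automatically, which I think is what Álvaro is getting at.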

[image]



This is because the surface you're projecting to isn't flat, and thus you need to account for the varying height at each different pixel that you render.


Good point. I was hoping that by using a vertex shader rather than a pixel shader, the rasteriser would correct for that when it interpolates the vertices before sending them to the pixel shader? That would save me trying to write a pixel shader which "compressed" or "expanded" the UV coordinates based on height (and possibly encountering some awkward z-ordering issues?)


Maybe using OpenAR to recognize a special mark would possibly allow to obtain a transform matrix but as MJP noticed, doing this for arbitrary surfaces is going to be a quite a difficult problem.


I think that because the Kinect gives me the surface topology, that's half the battle, and I shouldn't need any markers, since the transform between what the Kinect sees and how the game renders it is affine (i.e. straight lines remain straight, just in a different location before and after the transform - for those who don't want to wiki ;-) ). I like the idea of computing the matrix using the markers though. Because the projector params don't change and the surface topology is known, it would only ever need to be done once per projector (assuming nobody fiddles with the zoom!). Hmm... I suppose it would give me a non-affine homography matrix or something like that?
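For a single flat plane that once-per-setup calibration is a 3x3 planar homography, and it can be fitted from just four point correspondences. A sketch of the idea (Python; the corner coordinates in the usage comment are hypothetical measured values, not from my rig):

```python
def homography_from_4pts(src, dst):
    # Direct Linear Transform: solve for the 3x3 homography H mapping
    # src[i] -> dst[i] from 4 correspondences, with H[2][2] fixed to 1.
    # Each correspondence (x, y) -> (u, v) gives two linear equations
    # in the 8 unknown entries of H.
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1.0, 0.0, 0.0, 0.0, -u * x, -u * y]); b.append(u)
        A.append([0.0, 0.0, 0.0, x, y, 1.0, -v * x, -v * y]); b.append(v)
    # Solve the 8x8 system by Gaussian elimination with partial pivoting.
    n = 8
    M = [row[:] + [rhs] for row, rhs in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    h = [0.0] * n
    for r in range(n - 1, -1, -1):
        h[r] = (M[r][n] - sum(M[r][c] * h[c] for c in range(r + 1, n))) / M[r][r]
    return [[h[0], h[1], h[2]], [h[3], h[4], h[5]], [h[6], h[7], 1.0]]

def apply_h(H, p):
    # Apply the homography to a 2D point (with the projective divide).
    x, y = p
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

# Usage: feed in e.g. the four projected corner markers and where the
# Kinect saw them land, then warp with apply_h.
```

For the non-flat parts this only holds per plane, so the mesh-based per-vertex approach above would still be needed on the ramps.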



