_JohnHardy

Member Since 13 Mar 2012
Offline Last Active Mar 20 2012 03:14 PM

Posts I've Made

In Topic: Projectors and Perspective Matrices

19 March 2012 - 02:02 PM

Thanks for the feedback. :-)

I am not sure I understand what the problem is. Presumably your virtual scene contains the same ramps as you have in the real scene. You render from the point of view of the projector and then project. What part of this doesn't work?


Yup! The problem is that light from a projector spreads out from the lens, widening until it reaches the projection surface. On a big flat surface like a wall this is fine. :) However, when something is raised up on the table, the light hasn't had a chance to spread out fully by the time it lands, so the wrong part of the image ends up on the raised bit.
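To put a number on it: with the projector lens a distance d above the table and a point raised by a height h, similar triangles (pinhole model; I'm ignoring lens-offset details here) say the beam at that point has only spread to a fraction

    s(h) = (d - h) / d = 1 - h / d

of its full width, so e.g. d = 2 m and h = 20 cm already means a 10% misregistration at that spot.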

I'm not so worried about the arbitrary-surface stuff, because, after a bit of processing, the Kinect sensor gives me (more or less) the topology of the table as a mesh.

The reason I thought of using a perspective matrix was that, in a vertex shader, the vertices of this mesh can be transformed into screen space (out.Pos = in.Pos * W * V * P) based on where they intersect the frustum of the projector.
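In HLSL terms I'm picturing something like this (just a sketch of the idea - the cbuffer, struct and function names are made up for illustration):

    cbuffer Transforms
    {
        float4x4 World;      // Kinect mesh -> world space
        float4x4 View;       // world space -> projector's viewpoint
        float4x4 Projection; // the projector's perspective matrix
    };

    struct VSInput  { float4 Pos : POSITION;    float2 UV : TEXCOORD0; };
    struct VSOutput { float4 Pos : SV_POSITION; float2 UV : TEXCOORD0; };

    VSOutput VSMain(VSInput input)
    {
        VSOutput output;
        // The usual chain: out.Pos = in.Pos * W * V * P (row-vector convention).
        output.Pos = mul(mul(mul(input.Pos, World), View), Projection);
        output.UV = input.UV;
        return output;
    }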

The trick would just be modifying P (the perspective matrix) to apply a scaling based on the 'y' value of each vertex (probably with some other linear coefficient + offset) to move it in x,z (assuming y is looking into the screen).
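In shader terms that might look roughly like this (again only a sketch - throwScale and throwOffset are hypothetical constants I'd tune by eye; from the similar-triangles argument above they'd come out near -1/d and 1 respectively):

    // Hypothetical tuning knobs, calibrated by hand per projector.
    float throwScale;
    float throwOffset;

    VSOutput VSMain(VSInput input)
    {
        VSOutput output;
        float4 worldPos = mul(input.Pos, World);
        // Scale x,z by a linear function of height, so raised geometry
        // meets the not-yet-fully-spread beam in the right place.
        float s = throwScale * worldPos.y + throwOffset; // ~1.0 at table level
        worldPos.xz *= s;
        output.Pos = mul(mul(worldPos, View), Projection);
        output.UV = input.UV;
        return output;
    }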

So it's not designed to be perfect, but it should at least allow me to manually correct for the throw of the projector, which is a fixed source of error. :-) I guess I just need to get deep into the anatomy of a perspective matrix to see what to change.




This is because the surface you're projecting to isn't flat, and thus you need to account for the varying height at each different pixel that you render.


Good point. I was hoping that by using a vertex shader rather than a pixel shader, the rasteriser would correct for that when it interpolates between the vertices before sending them to the pixel shader? That would save me trying to write a pixel shader which "compressed" or "expanded" the UV coordinates based on height (and possibly running into some awkward z-ordering issues?)
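i.e. the hope is that the pixel shader can stay completely dumb, something like this (sketch, with placeholder resource names):

    Texture2D sceneTex;
    SamplerState linearSampler;

    float4 PSMain(VSOutput input) : SV_Target
    {
        // Just sample with whatever UVs the rasteriser interpolated.
        return sceneTex.Sample(linearSampler, input.UV);
    }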


Maybe using OpenAR to recognize a special marker would allow you to obtain a transform matrix, but as MJP noted, doing this for arbitrary surfaces is going to be quite a difficult problem.


I think that because the Kinect gives me the surface topology, that's half the battle, and I shouldn't need any markers: the transform between what the Kinect sees and how the game renders it is affine (i.e. straight lines stay straight, they just end up somewhere different - for those who don't want to wiki ;-) ). I do like the idea of computing the matrix from markers, though. Since the projector params don't change and the surface topology is known, it would only ever need to be done once per projector (assuming nobody fiddles with the zoom!). Hmm... I guess it would be looking to give me a 4x4 non-affine homography matrix, or something like that?
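If I've read the textbook treatment right (treating the projector as an inverse camera - this is just the standard calibration result, not something I've verified on my rig), each marker would give a Kinect-space 3D point X and the projector pixel x it should light up, related up to scale by

    x ≃ P X,   with P a 3x4 projection matrix

and since P has 11 degrees of freedom up to scale, about six point correspondences would pin it down linearly (DLT).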

In Topic: Projectors and Perspective Matrices

17 March 2012 - 06:58 PM

Any ideas, folks?

John

In Topic: Projectors and Perspective Matrices

13 March 2012 - 06:32 PM

Haha cheers! :)
