Picking in DX11


Hello,

I'm trying to get picking working by following this tutorial: http://www.rastertek.com/dx11tut47.html

I'm not sure if it's me or if the author confuses spaces at the end of this tutorial; can someone have a fresh look at this? Namely, he states that by multiplying a vector by the inverse view matrix we get the result in view space. Shouldn't it be in world space? And then we go from world into object space and do the final test there? His ray intersection doesn't take the sphere's position into account, so the final test looks like it's in object space, but he also says it's in world space... So yeah, thoughts?


As far as I understand things, a typical view matrix is already "inverted", in the sense that if the camera matrix describes the camera in world space, then the matrix required to transform things from world space into the camera's space is the "inverse camera matrix", also called the view matrix (or, confusingly, the inverse view matrix in this case). It's just a case of confusing naming. I use "camera matrix" for the camera's position and orientation in world space, and the view matrix is then the inverse of that camera matrix.

You can confirm this in many code samples where the view matrix is constructed. The code just rarely calls a general matrix inversion, since for a camera transform (rotation plus translation) the inverse can be computed cheaply.
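Here's a rough sketch of what I mean (untested, D3DX types; the eye/target/up values are just made up for the example):

D3DXVECTOR3 eye(0.0f, 2.0f, 5.0f), at(0.0f, 0.0f, 0.0f), up(0.0f, 1.0f, 0.0f);

// "Camera matrix": the camera's transform in world space, as if it were any other object.
D3DXMATRIX view, camera;
D3DXMatrixLookAtRH(&view, &eye, &at, &up);   // world -> view
D3DXMatrixInverse(&camera, NULL, &view);     // view -> world, i.e. the camera's world transform

// "camera" now holds the camera's position/orientation in world space,
// and "view" is its inverse -- the matrix that takes world-space points into view space.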

Yes, in the code the ray is transformed into the local/object space by the inverse world matrix of the sphere. The beauty of it is that in local space the sphere sits at the origin (0,0,0), so translation doesn't have to be accounted for in the ray-sphere intersection test.

The advantage of this technique is that it also supports things like scaling / non-uniform scaling in the world matrix. The ray-sphere test itself always stays the same, since it's only the ray's origin and direction that change.
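Roughly what that looks like (just a sketch, not the tutorial's exact code; worldMatrix, worldRayOrigin, worldRayDir and radius stand in for whatever you have in your own code):

// Transform the world-space ray into the sphere's object space.
D3DXMATRIX invWorld;
D3DXMatrixInverse(&invWorld, NULL, &worldMatrix);                 // worldMatrix = sphere's world transform

D3DXVECTOR3 rayOrigin, rayDir;
D3DXVec3TransformCoord(&rayOrigin, &worldRayOrigin, &invWorld);   // point: translation applies
D3DXVec3TransformNormal(&rayDir, &worldRayDir, &invWorld);        // direction: translation ignored
D3DXVec3Normalize(&rayDir, &rayDir);

// Ray vs. sphere centered at the origin with radius r: solve |o + t*d|^2 = r^2.
float b = 2.0f * D3DXVec3Dot(&rayOrigin, &rayDir);
float c = D3DXVec3Dot(&rayOrigin, &rayOrigin) - radius * radius;
bool hit = (b * b - 4.0f * c) >= 0.0f;                            // discriminant test (a == 1, d normalized)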

Cheers!

Well... shouldn't it be as simple as this (I sketched a quick sanity check in code right below the list):

object space ----[world a.k.a. model matrix]----> world space

world space ----[view a.k.a. camera matrix]----> view space

view space ----[projection matrix]----> clip space

object space <----[inverse world a.k.a. model matrix]---- world space

world space <----[inverse view a.k.a. camera matrix]---- view space

view space <----[inverse projection matrix]---- clip space

?
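Something like this is what I'd expect to hold (a quick, untested sanity check with D3DX; worldMatrix and viewMatrix are arbitrary, and I left the projection step out of the sketch):

// Forward: object -> world -> view.
D3DXVECTOR3 pObject(1.0f, 2.0f, 3.0f), pWorld, pView, pBack;
D3DXVec3TransformCoord(&pWorld, &pObject, &worldMatrix);
D3DXVec3TransformCoord(&pView,  &pWorld,  &viewMatrix);

// Backward: the inverse of each matrix undoes exactly one step.
D3DXMATRIX invWorld, invView;
D3DXMatrixInverse(&invWorld, NULL, &worldMatrix);
D3DXMatrixInverse(&invView,  NULL, &viewMatrix);

D3DXVec3TransformCoord(&pBack, &pView, &invView);    // view  -> world
D3DXVec3TransformCoord(&pBack, &pBack, &invWorld);   // world -> object: pBack should equal pObject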

Anyway, this is how it *seems right* to me, but I'm no guru here. Maybe I'm being picky ;) about naming, and that wasn't the point of this topic, but I wanted to clear the naming up before asking my question(s) and causing more confusion.

So, the reason I'm posting is that (obviously) I have a problem with picking. The issue is that my renderer uses a right-handed coordinate system, like OpenGL (for the sake of compatibility: I have an OpenGL renderer in this app too, and I don't want to negate every affected value to get the same result, as that would only invite future errors).

So I construct my projection matrix with D3DXMatrixPerspectiveFovRH() and my view matrix with D3DXMatrixLookAtRH(). Before sending them to HLSL I transpose them (I have to, otherwise I get incorrect results; D3DX stores matrices in row-major order, but HLSL expects constant-buffer matrices in column-major order by default, right?). All is sweet and dandy until picking comes into play. I'm pretty sure I'm doing something wrong, because this is my first attempt at renderer-independent picking: I follow what's in the tutorial, but the intersection test gives incorrect results. For simplicity my sphere is at (0,0,0), so I don't have to care about the world and invWorld matrices. I'm guessing something is wrong with my matrices, but it's hard to track down.
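For reference, my setup is roughly this (a sketch; the fov/aspect/near/far and eye/target values are placeholders):

D3DXMATRIX proj, view, projT, viewT;
D3DXVECTOR3 eye(0.0f, 0.0f, 5.0f), at(0.0f, 0.0f, 0.0f), up(0.0f, 1.0f, 0.0f);

D3DXMatrixPerspectiveFovRH(&proj, D3DX_PI / 4.0f, 16.0f / 9.0f, 0.1f, 1000.0f);
D3DXMatrixLookAtRH(&view, &eye, &at, &up);

// D3DX matrices are row-major; HLSL packs cbuffer matrices column-major by default,
// so transpose before uploading (or declare them row_major in the shader instead).
D3DXMatrixTranspose(&projT, &proj);
D3DXMatrixTranspose(&viewT, &view);
// ...copy projT / viewT into the constant buffer...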

Also I'm not sure what's going on here (tutorial):


// Adjust the points using the projection matrix to account for the aspect ratio of the viewport.
m_D3D->GetProjectionMatrix(projectionMatrix);
pointX = pointX / projectionMatrix._11;
pointY = pointY / projectionMatrix._22;
 

and how exactly the unprojecting part works. I mean, I have mouse coordinates that I rescale into the [-1, 1] range, but how do I get from a vec2 to a vec3? Where does the 3rd component come from?
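As far as I can tell, the idea is something like this (my own sketch of the usual unprojection, not the tutorial's exact code; ndcX/ndcY are the mouse coordinates already rescaled to [-1, 1], and for a right-handed projection the view direction is -Z, while the tutorial's left-handed one uses +Z):

// Dividing by _11 / _22 undoes the projection's X/Y scaling, giving the view-space
// ray direction at unit depth -- that fixed depth is where the 3rd component comes from.
D3DXVECTOR3 dirView(ndcX / proj._11, ndcY / proj._22, -1.0f);   // -1 for RH, +1 for LH

D3DXMATRIX invView;
D3DXMatrixInverse(&invView, NULL, &view);

D3DXVECTOR3 rayDirWorld, rayOriginWorld;
D3DXVec3TransformNormal(&rayDirWorld, &dirView, &invView);      // rotate direction into world space
D3DXVec3Normalize(&rayDirWorld, &rayDirWorld);
rayOriginWorld = D3DXVECTOR3(invView._41, invView._42, invView._43);  // camera position in world space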

Solved.

Looks like all my math was OK, but I forgot one thing: my rendering WinAPI control has an offset in x,y (because I have a sidebar and other stuff next to it), and I forgot to take that into account when reading the mouse position over the viewport. For instance, I got [0,0] at the origin of the window, not of the rendering control. Now everything works. Thanks for looking.
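In case anyone else runs into this, the fix amounts to converting the cursor position into the rendering control's client coordinates before rescaling to [-1, 1] (a sketch; hwndRenderControl stands in for whatever window handle your viewport lives in):

POINT cursor;
GetCursorPos(&cursor);                       // screen coordinates
ScreenToClient(hwndRenderControl, &cursor);  // now relative to the rendering control, not the main window

RECT rc;
GetClientRect(hwndRenderControl, &rc);
float ndcX =  (2.0f * cursor.x) / (rc.right  - rc.left) - 1.0f;
float ndcY = -((2.0f * cursor.y) / (rc.bottom - rc.top) - 1.0f);   // flip Y: screen Y grows downward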

