
### #Actual scniton

Posted 04 July 2012 - 08:04 AM

Does anyone have a link to an article, or some sample code?
Doesn't matter which language.

While my personal code is kind of tailored to my application, MathGeoLib's frustum(.h|.cpp) code is very clean (though I personally prefer the "radar" approach to intersection testing).

An example of what I described:
If your view transform is something like this:
Translate1 * Scale * RotateZ * RotateX * RotateY * Translate2

then its inverse (which takes you from view space back to world space) is the product of the individual inverses in reverse order:
Translate2' * RotateY' * RotateX' * RotateZ' * Scale' * Translate1'

Now you can extract the position of the camera from the resulting matrix, like in the image on this page.
Note: You'll likely want the left / up / forward vectors to be normalized. As an optimization, you can instead compute:
Translate2' * RotateY' * RotateX' * RotateZ' * Translate1' * Scale' (i.e. apply the inverse scaling only to the translation part of the matrix, not to the rotation columns).
That way the columns in the rotation part of the matrix are already unit length.

This gives you the camera information.
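To make the above concrete, here's a minimal numpy sketch (the matrix helpers, angles, and translation values are my own illustration, not from any particular engine) of inverting the composed view transform factor by factor in reverse order, then reading the camera's position and basis vectors back out of the result:

```python
import numpy as np

def translate(x, y, z):
    m = np.eye(4)
    m[:3, 3] = [x, y, z]
    return m

def scale(s):
    m = np.eye(4)
    m[0, 0] = m[1, 1] = m[2, 2] = s
    return m

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    m = np.eye(4)
    m[1:3, 1:3] = [[c, -s], [s, c]]
    return m

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    m = np.eye(4)
    m[0, 0], m[0, 2], m[2, 0], m[2, 2] = c, s, -s, c
    return m

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    m = np.eye(4)
    m[0:2, 0:2] = [[c, -s], [s, c]]
    return m

# View transform: Translate1 * Scale * RotateZ * RotateX * RotateY * Translate2
view = (translate(1, 2, 3) @ scale(2.0) @ rot_z(0.3) @ rot_x(0.5)
        @ rot_y(0.7) @ translate(-4, 0, 1))

# Inverse: reverse the order and invert each individual factor.
inv = (translate(4, 0, -1) @ rot_y(-0.7) @ rot_x(-0.5) @ rot_z(-0.3)
       @ scale(0.5) @ translate(-1, -2, -3))

# Sanity check: the two products cancel out.
assert np.allclose(view @ inv, np.eye(4))

# Camera position is the translation column of the inverse;
# left/up/forward are the rotation columns (normalized here because
# the view transform contains a scale).
cam_pos = inv[:3, 3]
left    = inv[:3, 0] / np.linalg.norm(inv[:3, 0])
up      = inv[:3, 1] / np.linalg.norm(inv[:3, 1])
forward = inv[:3, 2] / np.linalg.norm(inv[:3, 2])
```

Which columns correspond to left/up/forward (and their signs) depends on your handedness convention; the indices above are just one common layout.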

Take the coordinates of the tap and convert them from window coordinates to normalized device coordinates; then your ray's direction is simply:
Forward * near-plane distance + Left * x (in NDC) + Up * y (in NDC)
where forward/left/up are extracted from the camera matrix as described above, and the ray starts from your camera's position (the translation part extracted from the matrix).
Note: you might have to flip some of the signs in the direction equation depending on the convention you use for the window-coordinate origin.
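The window-to-NDC conversion and the direction formula above can be sketched like this (a hedged example: the function name, the window size, and the top-left-origin y flip are my own assumptions, and as noted you may need to flip signs for your conventions):

```python
import numpy as np

def tap_to_ray_dir(tap_x, tap_y, window_w, window_h, near,
                   left, up, forward):
    """Build a picking-ray direction from a tap in window coordinates."""
    # Window -> NDC: map [0, w] x [0, h] onto [-1, 1] x [-1, 1].
    ndc_x = 2.0 * tap_x / window_w - 1.0
    ndc_y = 1.0 - 2.0 * tap_y / window_h  # flip: window y grows downward

    # Forward * near-plane distance + Left * x (NDC) + Up * y (NDC)
    d = forward * near + left * ndc_x + up * ndc_y
    return d / np.linalg.norm(d)

# e.g. with an axis-aligned camera basis, a tap at the window centre
# maps to NDC (0, 0), so the ray points straight along forward.
left    = np.array([1.0, 0.0, 0.0])
up      = np.array([0.0, 1.0, 0.0])
forward = np.array([0.0, 0.0, -1.0])
d = tap_to_ray_dir(400, 300, 800, 600, 0.1, left, up, forward)
```

The ray itself is then `cam_pos + t * d` for `t >= 0`, with `cam_pos` taken from the translation part of the inverted view matrix.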

Disclaimer: I don't remember why there's a conversion to NDC nor do I have the time to verify it right now.
Also, I had written in my notes, that this assumes a symmetric frustum, but I can't remember why either.
