# raycasting: how to properly apply a projection matrix?


## Recommended Posts

Hi, I am currently working on some raycasting in GLSL, which works fine. Anyway, I want to go from orthographic projection to perspective projection now, but I am not sure how to properly do so. Are there any good links on how to use a projection matrix with raycasting? I am not even sure what I have to apply the matrix to (probably to the ray direction somehow, or to the actual positions of the objects?). Right now I do it like this (pseudocode):
vec3 rayDir = vec3(0.0, 0.0, -1.0); // parallel rays down the negative z axis

but now I would like to use a projection matrix that works like the gluPerspective function, so that I can simply define an aspect ratio, fov, and near and far planes. So basically, can anybody provide me a chunk of code to set up a projection matrix similar to what gluPerspective does? And secondly, tell me whether it is correct to multiply it with the ray direction? Thanks!

##### Share on other sites
You don't have to apply projection explicitly.

If you shoot the rays in the proper way, then it will produce the perspective effect.
Think about a real eye/camera: there is no explicit projection step there; the fact that the rays are shot from one point instead of along parallel lines is enough.

I can't say more about FOV and aspect ratio; work it out with a piece of paper and a pen. All you have to do is shoot the rays from one point (the eye) through the pixels of your monitor, virtually.

You will find out it's very straightforward.

##### Share on other sites
Well, I want to be able to apply the same perspective settings to some fixed geometry drawn with OpenGL (bounding spheres for 3D metaballs) as to the raycasted scene; that's why I am asking :)

##### Share on other sites
The same way. If you know the distance from the eye to the screen and the ratio of the screen's width and height, you know every parameter of the frustum.

If that's not enough, then I suggest you think about it a bit. If I could do it without any help, then you can do it as well.

##### Share on other sites
Well, I understand all that; I just don't know how to practically apply it to the raycasted scene.
If I know the data of the frustum, what do I do next?

Is the distance from the eye to the screen plane the near-plane value?

You were saying that if I shoot the rays in the proper way, then it will produce the proper effect, but I don't know how to shoot them that way, so that is the question :)

Right now I shoot them similar to this:

```
vec3 pos;
float stepSize = 0.0;
for (int i = 0; i < 50; i++) {
    pos = rayOrigin + rayDir * stepSize;
    stepSize += someGoodValue;
}
```

so my thoughts are that I have to change something about the rayOrigin and rayDir.
Right now the rayOrigin is simply the pixel position on screen, in GLSL like this:

vec2 rayOrigin = -1.0 + 2.0 * gl_FragCoord.xy / resolution.xy;

I am self-taught, so I am sorry if I can't solve this myself.

edit:

I just found this useful old-school link, which should help me out. I will report if it works out, thanks so far!

[Edited by - mokaschitta on March 1, 2010 5:18:09 AM]

##### Share on other sites
Sorry, I was a bit rude, but I have a feeling that you didn't try paper and pen, as I suggested. And I haven't studied computer graphics either.

You see, the frustums are the same, it's easy to calculate one of them from the other.

Note that the blue squares and the sizes of the frustum aren't real-world sizes (centimeters), but the sizes of the frustum in OpenGL.
The center is (0,0,0), the distance of the virtual screen is the near-plane distance, and its size is given by the width/height parameters of the OpenGL frustum. You see, everything is the same, thus very straightforward.

The orientation of the frustum is up to you; mostly it is oriented along the negative z axis (OpenGL does that, so you should match it).

But you have freedom: you have the possibility to transform the rays instead of the scene, so you can do real camera transformations. That may be too confusing yet, though, and it would conflict with the OpenGL way.

##### Share on other sites
okay, thank you!

So I will try to convert what you said to another simple GLSL snippet:

```
vec3 screenPos = vec3(-1.0 + 2.0 * gl_FragCoord.xy / resolution.xy, 0.0);
vec3 camPos = vec3(0.0, 0.0, nearPlane); // +nearPlane since I want to look down negative z
vec3 rayDir = normalize(screenPos - camPos);
float stepSize = 0.0;
for (int i = 0; i < 50; i++) {
    vec3 pos = screenPos + rayDir * stepSize;
    stepSize += someGoodValue;
}
```

I will try that now; maybe it's nonsense, but at least that's how I understand what you are saying.

thanks.

##### Share on other sites
My ray-caster was a full CPU stuff for DOS, so .....

##### Share on other sites
Maybe I am wrong, because the shader stuff makes the situation totally different, so others should help you.

##### Share on other sites
Okay, I got it. It was actually as simple as you said in the beginning.

This is the OpenGL camera setup for my bounding-sphere pass:

```
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(60.0, (float)ofGetWidth() / (float)ofGetHeight(), 1.0, 100.0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(0.0, 0.0, 4.0, 0.0, 0.0, 0.0, 0, 1, 0);
```

and then in the shader I simply scale each fragment position to the size of the near plane, like this:

```
uniform vec3 camPos;    // camera position
uniform vec3 nearPlane; // width and height of the near plane, plus its distance from the camera

// screen position scaled to the size of the near plane
vec3 rayOrigin = vec3(-nearPlane.xy / 2.0 + (gl_FragCoord.xy / resolution.xy) * nearPlane.xy,
                      camPos.z - nearPlane.z);
vec3 rayDir = normalize(rayOrigin - camPos);
...
```

This is probably not the best solution, but for a camera simply looking down the negative z axis it works perfectly fine and is good enough.

I tested it for now by simply raycasting a sphere and comparing it to an identical glutSolidSphere, which seems to be just right.
