#### Archived

This topic is now archived and is closed to further replies.

# Eliminating fisheye distortion in raytracing


## Recommended Posts

I've been experimenting with raytracing and DirectDraw, and it's been surprisingly fast so far. The problem is this: I have the well-known fisheye distortion problem. Currently, I do the following:
    // I changed this; all are now positive
    ray.vector.x = (float)(SCREEN_WIDTH/2) - x;
    ray.vector.y = (float)(SCREEN_HEIGHT/2) - y;
    ray.vector.z = 256.0f;
I am aware that, as z increases, my rays become closer and closer to parallel, and the distortion therefore disappears. However, I don't want a 2° view angle! I know there is a way to eliminate this distortion. Here is what I had thought: the problem is that the rays are all being cast from the origin, whereas they should be cast from their individual pixels. That would simply mean replacing each "sphere.x" with "(sphere.x - ray.origin.x)" in my intersection tests, and of course doing the same for the other axes. The ray origins would have to be:
    ray.origin.x = ray.vector.x * ray.vector.z;
    ray.origin.y = ray.vector.y * ray.vector.z;
    ray.origin.z = ray.vector.z;
I tried this, and the sphere I was drawing on the screen became very small. I thought that simply changing the scaling of the sphere or decreasing the view angle (increasing |ray.vector.z|) would resolve the problem. It didn't. After a lot of fiddling with the different parameters, I gave up on that and removed the additional subtractions from my ray-sphere intersection tests. The question now is: how can I eliminate this distortion? Was my theory correct? Is there a better/faster way to eliminate the distortions? Edited by - TerranFury on November 3, 2001 4:28:25 PM

##### Share on other sites
Rays are defined by a base and a vector. All the rays begin at a single point (the viewer's position) and move off in their defined direction (passing through the pixel they represent).

I believe (I am not certain) that a way to eliminate distortion would be to define the projected screen as a section of a sphere. The angle between any two neighboring rays should be a constant.

So basically, let's say we are scanning rays left to right with a 45 degree horizontal field of view, and we have 90 pixels left to right. The leftmost ray is 22.5 degrees left of straight ahead, the next ray is 22 degrees left, and so on; keep incrementing the ray angle all the way across.
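The constant-angular-step idea above might be sketched like this, using the post's numbers (45-degree horizontal FOV across 90 columns). The exact endpoint convention (leftmost ray at exactly -FOV/2) is my assumption.

```c
/* Each column gets a fixed angular step from its neighbor, rather than a
 * fixed screen-space step. Names and endpoint convention are illustrative. */
#define N_COLS 90
#define H_FOV  45.0f   /* degrees */

float column_angle_deg(int col)
{
    float step = H_FOV / (float)(N_COLS - 1);   /* roughly 0.5 degrees */
    return -0.5f * H_FOV + (float)col * step;   /* leftmost column: -22.5 */
}
```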

That is my idea; I have never tried it though.

##### Share on other sites
If I remember correctly, you need to multiply the distance by the inverse cosine of the angle to correct the fish-eye effect.

##### Share on other sites
OMG, what is all this tricky stuff??

I'm ABSOLUTELY SURE that you have to NORMALIZE THE RAYS!!!
Haven't you been told that raytracing makes massive use of normalized rays?

You have to (or at least should) normalize the rays for three reasons:

- it keeps the same distance ratio for every ray
- it keeps the same angle difference between every ray and its neighbors
- it makes many intersection computations much easier

Thus your code should look like this:

    ray.vector.x = x - (float)(SCREEN_WIDTH/2);
    ray.vector.y = y - (float)(SCREEN_HEIGHT/2);
    ray.vector.z = focalDistance;
    // reciprocal of the vector's length
    float invLength = 1.0f / sqrtf( ray.vector.x * ray.vector.x
                                  + ray.vector.y * ray.vector.y
                                  + ray.vector.z * ray.vector.z );
    ray.vector.x *= invLength;
    ray.vector.y *= invLength;
    ray.vector.z *= invLength;

Regards,
Mathieu "POÏ" HENRI

##### Share on other sites
I thought I had tried normalizing, but I tried it again anyway; it did nothing to correct the distortions. So thanks for your help, poi, but it seems that isn't the solution.

##### Share on other sites
I wrote a ray tracer in college and I had no such distortions with camera view angles less than 90 degrees.

Here's what I did.

You have a camera with a 3D position and an orientation matrix, and you have a bunch of objects in the world.

The first set of transformations is for ease of coding and bug tracking. It's a lot easier to imagine a camera at the origin looking down the z axis at your scene than a camera at some weird angle looking at some point in your scene, and it makes building your "screen" and rays a little easier.

First, transform all the objects in the world to be relative to the camera rather than the origin: subtract the camera position out of everything so that the camera is at (0, 0, 0) looking down the z axis. Then build your "screen" at your near clip distance. For example, with a near clip distance of 10 and a camera view angle of 90 degrees, you'll have a screen with corners at (10, 10, 10), (10, -10, 10), (-10, -10, 10), and (-10, 10, 10). You then build rays FROM THE ORIGIN (or the camera position, if you don't do the above transformations), NOT FROM THE PIXEL, through each of the virtual pixels. Make sure to NORMALIZE YOUR RAYS to simplify your calculations, thus speeding things up; a normalized ray also makes for easy angle calculations for reflection and translucence.

You could even do some easy anti-aliasing by breaking each pixel into a 4 by 4 grid, sending out a single "random" ray through each grid square, and averaging the RGB results into the single RGB value for that pixel. This technique (stratified sampling, or something like that; I forget what it's called) gives an even dispersion of rays and thus yields the best anti-aliasing results. I've seen ray tracers that break each pixel into an 8 by 8 grid, but I didn't really notice any better image quality. If you have a lot of small objects with a lot of reflective and translucent surfaces, though, it might be necessary to not miss anything: when you bounce a ray off an object like a sphere, small changes in where the ray intersects the sphere yield large changes in where the ray shoots off to, potentially missing something in your scene.
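The grid-supersampling idea described above might look something like this sketch; `shade` is a stand-in for a real tracer (here it just draws a white disc), and all names and constants are illustrative:

```c
#include <stdlib.h>

/* Jittered-grid anti-aliasing sketch: split each pixel into a GRID x GRID
 * grid, fire one randomly offset ray per cell, and average the results. */
#define GRID 4

typedef struct { float r, g, b; } Color;

/* stand-in tracer: white inside a disc of radius 100 centered on the origin */
static Color shade(float sx, float sy)
{
    Color c = {0.0f, 0.0f, 0.0f};
    if (sx * sx + sy * sy < 100.0f * 100.0f)
        c.r = c.g = c.b = 1.0f;
    return c;
}

Color sample_pixel(int px, int py)
{
    Color sum = {0.0f, 0.0f, 0.0f};
    for (int i = 0; i < GRID; i++)
        for (int j = 0; j < GRID; j++)
        {
            /* random offset inside this grid cell */
            float jx = ((float)i + (float)rand() / (float)RAND_MAX) / GRID;
            float jy = ((float)j + (float)rand() / (float)RAND_MAX) / GRID;
            Color c = shade((float)px + jx, (float)py + jy);
            sum.r += c.r;
            sum.g += c.g;
            sum.b += c.b;
        }
    float n = (float)(GRID * GRID);
    sum.r /= n;
    sum.g /= n;
    sum.b /= n;
    return sum;
}
```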

##### Share on other sites
This is from Tricks of the Game Programming Gurus (one of my older references).

(with an FOV of 60)
"We must multiply each ray's scale, from -30 to +30, by the cos⁻¹ of the same angle -30 to +30. This cancels out the distortion."

That should work.

-------------------------------------------------
Don't take life too seriously; you'll never get out of it alive. -Bugs Bunny

##### Share on other sites
I tried to follow the suggestions of Authusian and Jim Adams. The following code produces results which are no different from my previous ones; the distortion is still there...

    Ray ray;
    ray.vector.x = (float)(SCREEN_WIDTH/2) - x;
    ray.vector.y = (float)(SCREEN_HEIGHT/2) - y;
    ray.vector.z = 128.0f;
    normalize(&ray.vector);

    Vector middleScreen;
    middleScreen.x = (float)(SCREEN_WIDTH/2);
    middleScreen.y = (float)(SCREEN_HEIGHT/2);
    middleScreen.z = 128.0f;
    normalize(&middleScreen);

    float rayAngle = dotProduct(ray.vector, middleScreen);
    float scale = acosf(rayAngle);
    ray.vector.x *= scale;
    ray.vector.y *= scale;
    ray.vector.z *= scale;

##### Share on other sites
The reason why these suggestions aren't working is because:

1) Normalizing a ray does not change its direction.
2) Scaling a ray does not change its direction.

Thus, you haven't changed anything.

Try this: (like I said originally)

    scanRays (int nHoriz, int nVert, float hFov)
    {
        float vFov;
        float hDegStep, vDegStep;
        float hDeg, vDeg;
        int h, v;
        Ray ray, vRay;

        hDegStep = hFov / ( nHoriz - 1 );
        vFov = hFov * ( (float) nVert / (float) nHoriz );  // keep the angular steps square
        vDegStep = vFov / ( nVert - 1 );
        hFov *= -0.5f;
        vFov *= -0.5f;

        // scan the rays left to right, top to bottom
        for (v = 0; v < nVert; v++)
        {
            vDeg = vFov + v * vDegStep;

            // make a ray which goes right down the center
            vRay.vector.x = 0.0f;
            vRay.vector.y = 0.0f;
            vRay.vector.z = -1.0f;

            // rotate the ray up or down
            rotX (&vRay, vDeg);

            for (h = 0; h < nHoriz; h++)
            {
                // copy vRay to ray
                copyRay (&vRay, &ray);

                hDeg = hFov + h * hDegStep;

                // rotate the ray left or right
                rotY (&ray, hDeg);

                // do our ray firing here with 'ray'
                // happily, the ray is already normalized
            }
        }
    }

EDIT: I am not saying this will remove your distortion, but it will definitely change things. Unlike scaling or normalizing a ray, which does NOTHING with regard to the angle between rays, this will create the same angle between rays.


___________________________________

Edited by - bishop_pass on November 3, 2001 11:48:58 AM

##### Share on other sites
Let me say a couple of things about the above code.

It will accept any field of view; how about 360 degrees, for example? The resulting ray is already normalized, even though we never incur the cost of normalizing it, and the code is reasonably efficient in the sense that only the rotY function is called for each ray.

Here is some of the support code. You'll have to modify it, as my code uses arrays and your rays use structs. Also, these functions assume radians, not degrees.

    void rotX (VERTEX3 V, float r)
    {
        float y = V[1];
        float z = V[2];
        float cosr = (float) cos (r);
        float sinr = (float) sin (r);
        V[1] = y * cosr - z * sinr;
        V[2] = y * sinr + z * cosr;
    }

    void rotY (VERTEX3 V, float r)
    {
        float x = V[0];
        float z = V[2];
        float cosr = (float) cos (r);
        float sinr = (float) sin (r);
        V[0] = x * cosr + z * sinr;
        V[2] = -x * sinr + z * cosr;
    }

You can make the code even more efficient if after immediately entering the scanRays function, you precompute two tables:

    float *rotYCosR = malloc (sizeof (float) * nHoriz);
    float *rotYSinR = malloc (sizeof (float) * nHoriz);

Initialize the above tables with the appropriate cos and sin values, and use them instead of calling cos() and sin() in the rotY function. It isn't necessary to do this for the rotX function, because it is only called once for each angle.

___________________________________