
# Eliminating fisheye distortion in raytracing

## Recommended Posts

I've been experimenting with raytracing and DirectDraw, and it's been surprisingly fast so far. The problem: I have the well-known fisheye distortion. Currently, I do the following:

```c
// I changed this; all are now positive
ray.vector.x = (float)(SCREEN_WIDTH/2) - x;
ray.vector.y = (float)(SCREEN_HEIGHT/2) - y;
ray.vector.z = 256.0f;
```
I am aware that, as z increases, my rays become closer and closer to parallel, and the distortion disappears. However, I don't want a 2° view angle! I know there is a way to eliminate this distortion. Here is what I had thought: the problem is that the rays are all being cast from the origin, whereas they should be cast from their individual pixels. That would simply mean replacing each "sphere.x" with "(sphere.x - ray.origin.x)" in my intersection tests, and of course doing the same for the other axes. The ray origins would have to be:

```c
ray.origin.x = ray.vector.x * ray.vector.z;
ray.origin.y = ray.vector.y * ray.vector.z;
ray.origin.z = ray.vector.z;
```
I tried this, and the sphere I was drawing on the screen became very small. I thought that simply changing the scaling of the sphere or decreasing the view angle (increasing |ray.vector.z|) could resolve the problem. It didn't. After a lot of fiddling with the different parameters, I gave up on that and removed the additional subtractions from my ray-sphere intersection tests. The question now is: how can I eliminate this distortion? Was my theory correct? Is there a better/faster way to eliminate the distortions?

Edited by - TerranFury on November 3, 2001 4:28:25 PM

##### Share on other sites
Rays are defined by a base and a vector. All the rays begin at a single point (the viewer's position) and move off in their defined direction (passing through the pixel they represent).

I believe (I am not certain) that a way to eliminate distortion would be to define the projected screen as a section of a sphere. The angle between any two neighboring rays should be a constant.

So basically, let's say we are scanning rays left to right with a 45 degree horizontal field of view. Let's say we have 90 pixels left to right. The leftmost ray is 22.5 degrees left of looking straight forward. The next ray is 22 degrees left of straight forward, and so on. Keep incrementing the ray angle all the way across.

That is my idea; I have never tried it though.

##### Share on other sites
You need to multiply the distance by the inverse cosine of the angle, if I remember correctly, to correct the fisheye effect.

##### Share on other sites
OMG, what is all this tricky stuff??

I'm ABSOLUTELY SURE that you have to NORMALIZE THE RAYS!!!
Haven't you been told that raytracing makes massive use of normalized rays?

you have to/should normalize the rays for 3 reasons:

- it keeps the same distance ratio for every ray
- it keeps the same angle difference between every ray and its neighbors
- it makes many intersection computations much easier

thus your code should look like this:

```c
ray.vector.x = x - (float)(SCREEN_WIDTH/2);
ray.vector.y = y - (float)(SCREEN_HEIGHT/2);
ray.vector.z = focalDistance;
ray.vector.length = 1.0 / sqrt( ray.vector.x * ray.vector.x + ray.vector.y * ray.vector.y + ray.vector.z * ray.vector.z );
ray.vector.x *= ray.vector.length;
ray.vector.y *= ray.vector.length;
ray.vector.z *= ray.vector.length;
```

Regards,
Mathieu "POÏ" HENRI

##### Share on other sites
I thought I had tried normalizing, but I tried it again anyway; it did nothing to correct the distortions. So thanks for your help, poi, but it seems that isn't the solution.

##### Share on other sites
I wrote a ray tracer in college and I had no such distortions with camera view angles less than 90 degrees.

Heres what I did.

You have a camera with a 3D position and an orientation matrix, and you have a bunch of objects in the world.

This first set of transformations is for ease of coding and bug tracking: it's a lot easier to imagine a camera at the origin looking down the Z axis at your scene than at some weird angle looking at some point in your scene. Plus, it makes building your "screen" and rays a little easier.

First, transform all the objects in the world to be in relation to the camera rather than the origin: subtract the camera position out of everything, so that the camera is at (0,0,0) looking at everything down the z axis. Then build your "screen" at your near clip distance. For example, with a near clip distance of 10 and a camera view angle of 90 degrees, you'll have a screen with its corners at (10,10,10), (10,-10,10), (-10,-10,10), and (-10,10,10). You then build rays FROM THE ORIGIN (or the camera position, if you don't do the above transformations), NOT FROM THE PIXEL, through each of the virtual pixels. Make sure to NORMALIZE YOUR RAYS to simplify your calculations, thus speeding things up. Plus, a normalized ray makes for easy angle calculations for reflections and translucence.

You could even do some easy antialiasing by breaking each pixel into a 4 by 4 grid, sending out a single "random" ray through each grid square, and averaging the RGB results from all of them into the single RGB value for that pixel. This technique (stratified sampling, I think it's called) gives an even dispersion of rays and thus yields the best anti-aliasing results. I've seen ray tracers that break each pixel into an 8 by 8 grid, but I didn't really notice any better image quality; still, if you had a lot of small objects with a lot of reflective and translucent surfaces, this might be necessary to avoid missing anything. (When you bounce a ray off an object like a sphere, small changes in the placement of where the ray intersects the sphere yield large changes in where the ray shoots off to, thus potentially missing something in your scene.)
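
The grid-based antialiasing described above could be sketched like this; `antialiasPixel`, `traceRay`, and the stand-in `flatScene` are illustrative names of my own, not code from the thread:

```c
#include <stdlib.h>
#include <math.h>

/* Stratified ("jittered grid") supersampling for one pixel: split the
   pixel into n x n cells, fire one randomly offset ray per cell, and
   average the results. traceRay is a hypothetical callback returning
   a grayscale intensity for a sub-pixel position. */
float antialiasPixel(int px, int py, int n,
                     float (*traceRay)(float sx, float sy))
{
    float sum = 0.0f;
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < n; j++) {
            /* random offset within cell (i, j) of the pixel */
            float jx = (i + (float)rand() / RAND_MAX) / n;
            float jy = (j + (float)rand() / RAND_MAX) / n;
            sum += traceRay((float)px + jx, (float)py + jy);
        }
    }
    return sum / (n * n);  /* average of all sub-samples */
}

/* trivial stand-in scene: uniform brightness everywhere */
float flatScene(float sx, float sy) { (void)sx; (void)sy; return 0.5f; }
```

The per-cell jitter is what gives the even dispersion: every cell contributes exactly one sample, but never from exactly the same spot each frame.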

##### Share on other sites
this is from Tricks of the Game Programming Gurus (one of my older references).

(with an fov of 60)
"We must multiply each ray's scale, from -30 to +30, by the cos-1 of the same angle -30 to +30. This cancels out the distortion."

that should work

-------------------------------------------------
Don't take life too seriously; you'll never get out of it alive. -Bugs Bunny

##### Share on other sites
I tried to follow the suggestions of Authusian and Jim Adams. The following code produces results which are no different from my previous ones; the distortion is still there...

```c
Ray ray;
ray.vector.x = (float)(SCREEN_WIDTH/2) - x;
ray.vector.y = (float)(SCREEN_HEIGHT/2) - y;
ray.vector.z = 128.0f;
normalize(&ray.vector);

Vector middleScreen;
middleScreen.x = (float)(SCREEN_WIDTH/2);
middleScreen.y = (float)(SCREEN_WIDTH/2);
middleScreen.z = 128.0f;
normalize(&middleScreen);

float rayAngle = dotProduct(ray.vector, middleScreen);
float scale = acosf(rayAngle);

ray.vector.x *= scale;
ray.vector.y *= scale;
ray.vector.z *= scale;
```

##### Share on other sites
The reason why these suggestions aren't working is because:

1) Normalizing a ray does not change its direction.
2) Scaling a ray does not change its direction.

Thus, you haven't changed anything.

Try this: (like I said originally)

```c
scanRays (int nHoriz, int nVert, float hFov)
{
    float vFov;
    float hDegStep, vDegStep;
    float hDeg, vDeg;
    int h, v;
    Ray ray, vRay;

    hDegStep = hFov / ( nHoriz - 1 );
    vFov = hFov * ( (float) nVert / nHoriz );  // scale by aspect ratio; the cast avoids integer division
    vDegStep = vFov / ( nVert - 1 );
    hFov *= -0.5f;
    vFov *= -0.5f;

    // scan the rays left to right; top to bottom
    for (v = 0; v < nVert; v++)
    {
        vDeg = vFov + v * vDegStep;

        // make a ray which goes right down the center
        vRay.vector.x = 0.0f;
        vRay.vector.y = 0.0f;
        vRay.vector.z = -1.0f;

        // rotate the ray up or down
        rotX (&vRay, vDeg);

        for (h = 0; h < nHoriz; h++)
        {
            // copy vRay to ray
            copyRay (&vRay, &ray);

            hDeg = hFov + h * hDegStep;

            // rotate the ray left or right
            rotY (&ray, hDeg);

            // do our ray firing here with 'ray'
            // happily, the ray is already normalized.
        }
    }
}
```

EDIT: I am not saying this will remove your distortion, but it will definitely change things. Unlike scaling or normalizing a ray, which does NOTHING with regard to the angle between rays, this will create the same angle between rays.


___________________________________

Edited by - bishop_pass on November 3, 2001 11:48:58 AM

##### Share on other sites
Let me say a couple of things about the above code.

It will accept any field of view. For example, how about 360 degrees? The resulting ray is already normalized, despite the fact that we do not incur the cost of normalizing the ray. It is reasonably efficient in the sense that only the rotY function is called for each ray.

Here is some of the support code. You'll have to modify it, as my code uses arrays and your rays use structs. Also, these functions assume radians, not degrees.

```c
void rotX (VERTEX3 V, float r)
{
    float y = V[1];
    float z = V[2];
    float cosr = (float) cos (r);
    float sinr = (float) sin (r);
    V[1] = y * cosr - z * sinr;
    V[2] = y * sinr + z * cosr;
}

void rotY (VERTEX3 V, float r)
{
    float x = V[0];
    float z = V[2];
    float cosr = (float) cos (r);
    float sinr = (float) sin (r);
    V[0] = x * cosr + z * sinr;
    V[2] = -x * sinr + z * cosr;
}
```

You can make the code even more efficient if after immediately entering the scanRays function, you precompute two tables:

```c
float *rotYCosR = malloc (sizeof (float) * nHoriz);
float *rotYSinR = malloc (sizeof (float) * nHoriz);
```

Initialize the above tables with the appropriate cos and sin values and use them instead of calling cos() and sin() in the rotY function. It isn't necessary to do this for the rotX function, because it is only called once for each angle.

___________________________________

##### Share on other sites
TerranFury, go edit your first post. Put a line break in that overly long comment in your code and we can all view this thread without having to scroll sideways.

##### Share on other sites
Oops! I believe the concept is sound, but the rotation on the Y axis has got to be different. Instead of rotating on the Y axis, we have to rotate on a special axis, call it A. To derive A, we create a vector coincident with the Y axis, and then rotate it along with the ray on the X axis.

Modified code follows:

```c
scanRays (int nHoriz, int nVert, float hFov)
{
    float vFov;
    float hDegStep, vDegStep;
    float hDeg, vDeg;
    int h, v;
    Ray ray, vRay;
    Vector A;

    hDegStep = hFov / ( nHoriz - 1 );
    vFov = hFov * ( (float) nVert / nHoriz );  // scale by aspect ratio; the cast avoids integer division
    vDegStep = vFov / ( nVert - 1 );
    hFov *= -0.5f;
    vFov *= -0.5f;

    // scan the rays left to right; top to bottom
    for (v = 0; v < nVert; v++)
    {
        vDeg = vFov + v * vDegStep;

        // make a ray which goes right down the center
        vRay.vector.x = 0.0f;
        vRay.vector.y = 0.0f;
        vRay.vector.z = -1.0f;

        // create an axis coincident with the Y axis
        A.x = 0.0f;
        A.y = 1.0f;
        A.z = 0.0f;

        // rotate the ray up or down
        rotX (&vRay, vDeg);

        // rotate the axis
        rotX (&A, vDeg);

        for (h = 0; h < nHoriz; h++)
        {
            // copy vRay to ray
            copyRay (&vRay, &ray);

            hDeg = hFov + h * hDegStep;

            // rotate the ray on our arbitrary axis A
            rotArbAxis (&ray, &A, hDeg);

            // do our ray firing here with 'ray'
            // happily, the ray is already normalized.
        }
    }
}
```

So, I don't have the code handy at the moment for rotating on an arbitrary axis, but I'll try to get it later. Unfortunately, this means we have to do more work in the inner loop. However, I think we then get the really cool ability to use any field of view we want, including 360 degrees.

___________________________________

Edited by - bishop_pass on November 3, 2001 2:07:18 PM

##### Share on other sites
The idea behind scaling the vector should correct the problem because if the rays farthest from the center move farther per step they get there sooner. Therefore, the rays form an inverted fisheye of their own. Think about it, when you add one curve to the same curve inverted, you get a straight line.

edit: change some spelling and structure to make it more readable

##### Share on other sites
Scaling a vector makes a vector shorter or longer. That will have no effect on what the ray intersects in the scene and thus will have no effect on what is visible in a pixel and thus will have no effect on distortion.

___________________________________

##### Share on other sites
I'm going to have to look a little more closely at that code. For now, at least, it seems that with a 45° FOV there is no distortion (that I can see) anyway! So, in the interest of speed (both at runtime and in coding), I will likely ignore this issue for now (since this is a feeble RTRT attempt). If I decide to code a high-quality raytracer, then I will almost certainly use these suggestions, so thanks, everyone.

##### Share on other sites
The narrower your field of view, the less distortion you will have, even using conventional methods. I am now leaning towards my original code with rotation on the Y axis. I think it would work really well except for extremely large vertical fields of view. It should work just fine for any horizontal field of view.

___________________________________

##### Share on other sites
OK, once again I will try to explain this, but without graphics it is hard to convey.

When the rays are cast from a point, they expand in a radial manner, producing an arc.

This is because they are all traveling the same distance in the same amount of time, but at slightly different angles.

In order to compensate, you make the rays that are farthest from the center ray longer.

The function is the inverted cos of the angle * the angle. This will have to be calculated for every ray you cast, as they will all be at different angles from the center.

in effect, this is what bishop_pass said
"I believe (I am not certain) that a way to eliminate distortion would be to define the projected screen as a section of a sphere."

If this is still hard to understand, I guess I'll have to start uploading pictures.

##### Share on other sites
I understand what you're saying, Anon, but Bishop is right. Think of it this way: what you're doing is scaling your vectors. This does nothing to change the direction those vectors point, so they still point at the same thing.

Maybe an image will help. Look at this:

```
    xxx
  xxxxxxx
 xxxxxxxxx
 xxxxxxxxx
  xxxxxxx
    xOx
        _
       /\
         \
          \
           \
            \
             \
              x
```

And now look at this:

```
    xxx
  xxxxxxx
 xxxxxxxxx
 xxxxxxxxx
  xxxxxxx
    xOx
        _
       /\
         \
          \
           x
```

The "O"s on the sphere are the intersection points.

Even though the vector in the second image is much shorter, and therefore the value of T (or the distance) will be greater for its intersection with the sphere than in the first one, it still intersects the sphere at the same point. Because each ray corresponds to one pixel, changing the lengths of your vectors may change the calculated intersection parameters, and thereby affect normals and the like, but it will not change where that ray intersects the sphere; it will not affect what part of the sphere is on what pixel. It doesn't eliminate the distortion.
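
A tiny numeric check of this point: for a ray from the origin, scaling the direction vector rescales the parameter t but leaves the hit point identical. A plane-intersection sketch (illustrative names):

```c
#include <math.h>

typedef struct { float x, y, z; } Vec3;

/* Intersect the ray p(t) = t * d with the plane z = zp.
   Whatever positive scale is applied to d, t rescales to compensate
   and the returned hit point is unchanged. */
Vec3 hitPlaneZ(Vec3 d, float zp)
{
    float t = zp / d.z;                 /* solve t * d.z == zp */
    Vec3 p = { t * d.x, t * d.y, zp };
    return p;
}
```

The same holds for sphere intersections: scaling d changes t, not the point on the sphere, so the pixel sees the same thing.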

##### Share on other sites
It's not a matter of where the ray hits the object, but a matter of when: if the outside ray takes longer than the center ray to collide with an object, then the raytracer will think that the object is farther away and scale it accordingly.

```
______________
  \   |   /
    \ | /
     \|/
      *
```

You see, if the vectors farthest from the center are scaled, they all hit the object at the same time; otherwise they hit the wall later and make the raytracer think that the object is farther away, hence the spherical distortion.

##### Share on other sites
Anon, I see what you mean; I had thought you meant an additional "inverse fisheye" warping. Actually, what you describe now is what I'm doing anyway, since I'm not normalizing my primary rays. I still say it doesn't affect fisheye distortion, but since I have effectively eliminated that problem, it doesn't matter anyway.

[Edited to remove question I answered myself]

Edited by - TerranFury on November 4, 2001 2:50:15 PM

##### Share on other sites
Well... I do not want to say something stupid, but the fisheye effect is something that is quite normal. Every picture and every film has a fisheye effect, but it is quite low, as the FOV is very small. Go take a camera, make a picture, and calculate the FOV; it won't be much more than 20 degrees, I think.

Many people use FOVs of 90°. If you suppose your monitor has a width of 40cm, then you will have to view your picture from a distance of 20cm to get correct results. Move that close to your monitor, and your fisheye effect will disappear. What I want to say is: use a more realistic FOV, like 30 degrees, for a raytraced picture. If you do something like rotating about some axis, you might reduce the fisheye effect, but your picture is no longer correct.

By the way, your very first algorithm is correct (to my knowledge), but with a screen resolution of 640x480 you will have a FOV of about 100 degrees. This is _very_ unrealistic and results in the fisheye distortion (unless you move very close to your monitor).

##### Share on other sites
I'm currently using an FOV of 55 degrees (a big change from my previous one). Why 55? That's what POV-Ray uses, because it is supposedly the FOV of the "average human." I'm doing this:

```c
const unsigned int SCREEN_WIDTH = 640;
const unsigned int SCREEN_HEIGHT = 480;
const float PI = (float)atan(1) * 4.0f;
const float RTOD = 180/PI;
const float DTOR = PI/180;
const float FOV = 55.0f; // degrees < 180; 55 is average human FOV
const float FOCAL_LENGTH = (float)(SCREEN_WIDTH/2)/tanf(FOV*0.5f*DTOR);
```

##### Share on other sites
quote:
Original post by Chock
Go take a camera, make a picture and calculate the FOV. It won't be much more than 20 degrees I think

Unfortunately, Chock is chock full of misinformation. I frequently shoot pictures with lenses which have a field of view of about 84 degrees; this corresponds to a 24mm lens when shooting in the 35mm format. I shot this image with a Nikkor 24-50mm lens set at 24mm on a 35mm Nikon camera. 24mm lenses are a popular focal length for 35mm wide-angle work among photographers, and these lenses are designed for minimal distortion. Other popular lenses with minimal distortion include the Nikkor 20mm, which has a field of view of 94 degrees. Here is another image shot with my 24-50mm lens set at 24mm.

I shot this image with a Caltar 90mm lens using a 4x5 camera. When shooting in the larger format of 4x5, a 90mm lens translates to a field of view of about 68 degrees.

___________________________________

##### Share on other sites
So I was completely wrong here. Sorry. But I still believe what I said. You said that those lenses are made for minimal distortion, and this is probably the point. I suppose that those lenses create an inverse fisheye effect, and in this way simulate a smaller FOV with the viewer farther away. I suppose (though I do not know) that you would get a similar result if you used a "standard" lens but took the picture from farther away (if there were no wall behind you).
But anyway, this is not what you would see if you put a window of that size there and looked through it. If you want to emulate this inverse fisheye effect in your engine, you will have to look at a specification of your lens and see what is mathematically behind it. I did not know that this is what you wanted to do.
