Ray tracer distortion

9 comments, last by cignox1 16 years, 9 months ago
Hello. I'm at the very early stages of writing a ray tracer, and I'm getting what seems like incorrect behavior. Suppose I start with a sphere positioned at the center of the field of view, and then move it (for example) to the left, parallel to the front clipping plane. What I get is that the closer the sphere gets to the edge of the screen, the more it gets squeezed along the x axis, becoming increasingly elliptical (it gets smaller in the y direction too, but to a much smaller degree). Is this normal behavior for a ray tracer, or am I doing something wrong? edit3: Got a different host for the screenshots: center: left: [Edited by - technobot2 on June 25, 2007 7:30:54 PM]
Hey mate, maybe you should try making your field of view a bit narrower! If your problem is that objects get stretched as they move away from the center, then that's probably the cause. You get the same effect in games like Quake if you set the field of view too wide.
My field of view is currently 90 degrees vertically. I know this is rather high and would cause a similar squeezing effect in conventional projection/rasterization engines, but I'm not sure about the expected behavior of a ray tracer. In any case, the behavior I would expect to see, regardless of the field of view, is this: as the sphere moves sideways, it gets a bit smaller due to increased distance from the camera, but remains circular in appearance, rather than becoming elliptical...

P.S.: And if it does become elliptic, shouldn't it be stretched instead of squeezed, if one considers how the rays are mapped to the screen?
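For what it's worth, the direction of the distortion can be checked numerically. Here is a small Python sketch (not the tracer's own code; the sphere positions are made-up) that projects a sphere through a pinhole at the origin onto a flat image plane and compares the silhouette's width and height:

```python
import math

def projected_bbox(cx, cy, cz, r, n=200):
    """Project surface points of a sphere (center (cx,cy,cz), radius r)
    through a pinhole at the origin onto the plane z = 1; return the
    (width, height) of the projected bounding box."""
    xs, ys = [], []
    for i in range(n):
        for j in range(n):
            theta = math.pi * i / (n - 1)     # polar angle
            phi = 2 * math.pi * j / n         # azimuth
            x = cx + r * math.sin(theta) * math.cos(phi)
            y = cy + r * math.sin(theta) * math.sin(phi)
            z = cz + r * math.cos(theta)
            xs.append(x / z)                  # perspective divide
            ys.append(y / z)
    return max(xs) - min(xs), max(ys) - min(ys)

# Sphere on the optical axis: the projection is (nearly) circular.
w0, h0 = projected_bbox(0, 0, 5, 1)
# Same sphere moved to the side: the projection is wider than tall.
w1, h1 = projected_bbox(4, 0, 5, 1)
print(w0 / h0)  # ~1.0
print(w1 / h1)  # > 1: stretched along x, never squeezed
```

So with a correct planar mapping, the off-axis silhouette is indeed stretched radially (wider in x), which is why the squeezing suggested a bug.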
Well, you will get distortion at the edges, but that looks a bit extreme (even for a high FOV like 90), and it should also be distorting the other way
(perhaps you've flipped the x with the y's somewhere)
I've just rewritten the ray casting code from scratch, using a slightly different approach, and it seems to behave more correctly now (same FOV):

center: left:

Now the question is: can this stretching be completely eliminated somehow, perhaps by mapping the rays a bit differently? Because if I take a ball in my hand and move it sideways, it still looks circular in reality (as it should)...
Quote:Original post by technobot2
My field of view is currently 90 degrees vertically. I know this is rather high, and would cause a similar squeezing effect in conventional projection/rasterization engines, but I'm not sure about the expected behavior of a ray tracer..


A FOV is a FOV, it doesn't matter if it's rasterization or raytracing.

Quote:Original post by technobot2
or am I doing something wrong?


Probably. What math/code are you using ?
Quote:Original post by technobot2
Now the question is - can this stretching be completely eliminated somehow, perhaps by mapping the rays a bit differently? Because if I take a ball in my hand and move it sideways, it still looks circular in reality (as it should)..


The screen is like a 2D window into a 3D world. If you move your head closer to the screen, the amount of space the screen takes up in your own FOV becomes larger. If you move it back, it becomes smaller.

If you match the FOV you use in your computation to the approximate FOV the screen subtends from your head position (and don't move it), and your code is correct, then there should be no apparent distortion. But as soon as you move your head without correcting for it in the program, the illusion is broken.
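The matching FOV LeGreg describes is just the angle the screen subtends at the eye. A quick Python sketch (the screen size and viewing distance below are hypothetical numbers, not anything from the thread):

```python
import math

def matching_vertical_fov(screen_height_cm, eye_distance_cm):
    """Vertical FOV (degrees) that makes the rendered perspective match
    what the physical screen subtends at the viewer's eye."""
    return math.degrees(2 * math.atan(0.5 * screen_height_cm / eye_distance_cm))

# A 30 cm tall monitor viewed from 60 cm away:
print(matching_vertical_fov(30, 60))   # ~28 degrees -- far less than 90
```

This suggests why a 90-degree vertical FOV looks distorted on a desktop monitor: to match it, your eye would have to be only half the screen's height away from it.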

The other problem is that you have two eyes, but you can only show one sphere position on the screen (unless you have polarized glasses, LCD shutter glasses, colored glasses, etc.).

LeGreg
Quote:Original post by LeGreg
What math/code are you using ?

The new code, in Pascal, is as follows. I removed the irrelevant parts:

//
// NOTE: x grows to the right, y grows up, z grows into the screen
//
constructor TCamera.Create(const Position: T3DVector; VertFOV: Single);
begin
  ...
  // find top-left corner of the rasterized rectangle (for now without X).
  //  Y gets the sin, Z gets the cos:
  FTopLeft.X := 0;
  SinCos(DegToRad(0.5 * FVerticalFOV), FTopLeft.Y, FTopLeft.Z);
end; { TCamera.Create }
//------------------------------------------------------------------------------
procedure TCamera.SetupRasterizer(const Width, Height: Word);
begin
  // Adjust the X coord of the top-left corner of the rasterized rectangle.
  //  Note that X is negative here:
  FTopLeft.X := -FTopLeft.Y * Width / Height;
  // Find the X and Y step sizes:
  FXStep := -2 * FTopLeft.X / (Width - 1);
  FYStep := -2 * FTopLeft.Y / (Height - 1);
  // Set the last ray vector to point to the last pixel (bottom-right corner
  //  of the rasterized rectangle). Then when we call TCamera.NextRay for the
  //  first time, it will go to the first pixel:
  FLastRayVector.X := -FTopLeft.X - 0.01 * FXStep;
  FLastRayVector.Y := -FTopLeft.Y + 0.01 * FYStep;
  FLastRayVector.Z :=  FTopLeft.Z;
end; { TCamera.SetupRasterizer }
//------------------------------------------------------------------------------
procedure TCamera.NextRay(var Ray: TRay);
begin
  // Calculate the next ray vector (reminder: TopLeft.X < 0, TopLeft.Y > 0):
  FLastRayVector.X := FLastRayVector.X + FXStep;
  if (FLastRayVector.X > -FTopLeft.X) then
  begin
    FLastRayVector.X := FTopLeft.X - 0.01 * FXStep;
    FLastRayVector.Y := FLastRayVector.Y + FYStep;
    if (FLastRayVector.Y < -FTopLeft.Y) then
      FLastRayVector.Y := FTopLeft.Y + 0.01 * FYStep;
  end;
  // Use that to construct the ray:
  Ray.Origin := GlobalPos; // global position of camera
  Ray.Direction := FLastRayVector;
  NormalizeVector(Ray.Direction);
end; { TCamera.NextRay }
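As a cross-check of the stateful NextRay, the same planar mapping can be written statelessly per pixel. A Python sketch (not the Pascal code above; it uses tan of the half-FOV with the image plane at z = 1, which gives the same directions as the sin/cos pair after normalization, and it assumes pixel (0, 0) is the top-left corner):

```python
import math

def ray_direction(px, py, width, height, vert_fov_deg):
    """Unit direction through pixel (px, py) for a planar image plane at
    z = 1; x grows right, y grows up, z grows into the screen."""
    half_h = math.tan(math.radians(vert_fov_deg) / 2)   # half-height of plane
    half_w = half_h * width / height                    # half-width (aspect)
    x = -half_w + 2 * half_w * px / (width - 1)         # left -> right
    y = half_h - 2 * half_h * py / (height - 1)         # top -> bottom
    n = math.sqrt(x * x + y * y + 1)
    return (x / n, y / n, 1 / n)

# The center pixel should look straight down the optical axis:
print(ray_direction(319.5, 239.5, 640, 480, 90))  # ~(0, 0, 1)
```

Comparing a few pixels from both versions is an easy way to catch an off-by-one or flipped axis in the incremental code.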


Quote:Original post by LeGreg
The screen is like a 2D window into a 3D world. If you move your head closer to the screen, the amount of space the screen takes up in your own FOV becomes larger. If you move it back, it becomes smaller.

If you match the FOV you use in your computation to the approximate FOV the screen subtends from your head position (and don't move it), and your code is correct, then there should be no apparent distortion. But as soon as you move your head without correcting for it in the program, the illusion is broken.


Ok. I agree with that. But still, if you look from some point (say, the camera position) towards a sphere, it should look circular in any direction. The reason why it starts looking elliptical near the edge of the screen (as I understand it) is that there the screen is no longer perpendicular to the eye-sphere vector (it's like cutting a cone diagonally). So I'm thinking maybe one could map the rays differently, such that the screen is perpendicular to the ray in any direction (essentially you'd get a spherical screen). But I guess then I'll get other kinds of distortion (probably a fish-eye effect)?
Quote:Original post by technobot2
So I'm thinking maybe one could map the rays differently, such that the screen is perpendicular to the ray in any direction (essentially you'd get a spherical screen).


Sure, you could have a spherical screen (if you can afford one, like the IMAX dome or the Géode in France), but more realistically, how do you project that spherical screen back onto your regular planar screen?

Nonetheless, the spherical screen will allow you a large FOV (sometimes as high as 360 degrees horizontally), BUT distortion will still be present as long as you're not positioned at the optical center.

The way to avoid distortion in all situations would be, for example, to have a portable projection device attached to your head. That way your head = optical center (assuming the image you project has the right parameters).

LeGreg
Quote:Original post by LeGreg
Sure you could have a spherical screen (if you can afford one like the imax dome, or geode in France), but more realistically how do you project that spherical screen back onto your regular planar screen ?

I meant that the spherical screen is virtual: basically, the idea was to cast the rays as if you had a spherical screen centered on the camera (instead of the flat one). But each ray still corresponds to a pixel on the flat screen (so the spherical-to-flat mapping results automatically, and I suspect it would produce a fish-eye effect, as I stated earlier).
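The virtual-spherical-screen idea can be sketched concretely. Below is one possible mapping in Python (my own illustration, not code from the thread): pixel position maps linearly to *angle* rather than to a position on a flat plane, which is the equidistant fisheye projection. Every ray is a unit vector, so a sphere subtends the same angular size in any direction (it stays round in angular terms), but straight lines in the scene bow outward, which is exactly the fish-eye effect anticipated above.

```python
import math

def fisheye_ray(px, py, width, height, vert_fov_deg):
    """Ray direction using equal *angular* steps per pixel (a virtual
    spherical screen) instead of equal steps on a flat image plane."""
    # Angles grow linearly with pixel position (equidistant fisheye):
    yaw = math.radians(vert_fov_deg) * (width / height) * (px / (width - 1) - 0.5)
    pitch = math.radians(vert_fov_deg) * (0.5 - py / (height - 1))
    # Spherical-to-Cartesian; the result is already unit length:
    x = math.sin(yaw) * math.cos(pitch)
    y = math.sin(pitch)
    z = math.cos(yaw) * math.cos(pitch)
    return (x, y, z)

# The center pixel still looks straight down the optical axis:
print(fisheye_ray(319.5, 239.5, 640, 480, 90))  # ~(0, 0, 1)
```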

Anyway, for now it seems the new code is working more or less correctly (right?), so I'll leave it at that. All that extra fiddling with how the rays are mapped will have to wait until I implement support for more complex scenes - so that the effects of the modifications can be more thoroughly tested.

This topic is closed to new replies.
