
Ray tracer - perspective distortion



#1 Suen   Members   -  Reputation: 160


Posted 15 December 2012 - 07:38 PM

I'm writing a ray tracer and, while I have my simple scene up and running, I've encountered what seems to be a common problem. My scene suffers from perspective distortion, which is of course most noticeable on spheres. Basically, when I create a sphere whose center is not on the z-axis of my camera, the sphere gets elongated in a radial manner. I've attached an image of my scene to show what I mean; the green and blue spheres show the distortion quite well.

I do understand that this effect is bound to occur due to the nature of perspective projection. Having a flat, rectangular projection plane will create this distortion, and after seeing a helpful image on the topic I at least have a basic grasp of why it occurs. I believe I even saw the same effect in a real photo when I was searching for more information about it. The thing is that I don't have the slightest clue how to lessen this distortion.

This is the code which renders my scene:

#include <iostream>
#include <limits>
#include <glm/glm.hpp>

// Object, Light, Ray and Image are my own classes (not shown here).

const int imageWidth = 600;
const int imageHeight = 600;

float aspectR = imageWidth/(float)imageHeight;
float fieldOfView = 60.0f; // Half of the full field of view (120 degrees).

Image testImage("image1.ppm", imageWidth, imageHeight);

int main()
{
	 /*
	   building scene here for now...
	 */

	 std::cout << "Rendering scene..." << std::endl;

	 renderScene(sceneObjects, 5);

	 std::cout << "Scene rendered" << std::endl;

	 return 0;
}

//Render the scene
void renderScene(Object* objects[], int nrOfObjects)
{
	 //Create light and set light properties
	 Light light1(glm::vec3(5.0f, 10.0f, 10.0f));
	 light1.setAmbient(glm::vec3(0.2f));
	 light1.setDiffuse(glm::vec3(1.0f));

	 //Create a ray with an origin and direction. Origin will act as the
	 //CoP (Center of Projection)
	 Ray r(glm::vec3(0.0f, 0.0f, 0.0f), glm::vec3(0.0f));

	 //Will hold ray (x,y)-direction in world space
	 float dirX, dirY;

	 //Will hold the intensity of reflected ambient and diffuse light
	 glm::vec3 ambient, diffuse;

	 //Loop through each pixel...
	 for(int y=0; y<imageHeight; y++)
	 {
	 	 for(int x=0; x<imageWidth; x++)
	 	 {
	 	 	 //Normalized pixel coordinates, remapped to range between [-1, 1].
	 	 	 //Formula for dirY differs because we want to swap the y-axis so
	 	 	 //that positive y-values of coordinates are on the upper half of
	 	 	 //the image plane with respect to the center of the plane.
	 	 	 dirX = (2.0f*(x+0.5f)/imageWidth)-1.0f;
	 	 	 dirY = 1.0f-(2.0f*(y+0.5f)/imageHeight);

	 	 	 //Account for aspect ratio and field of view
	 	 	 dirX = dirX*aspectR*glm::tan(glm::radians(fieldOfView));
	 	 	 dirY = dirY*glm::tan(glm::radians(fieldOfView));

	 	 	 //Set the ray direction in world space. We can calculate the distance
	 	 	 //from the camera to the image plane by using the FoV and half height
	 	 	 //of the image plane, tan(fov/2) = a/d => d = a/tan(fov/2)

	 	 	 //r.setDirection(glm::vec3(dirX, dirY, -1.0f)-r.origin());
	 	 	 r.setDirection(glm::vec3(dirX, dirY, -1.0f/glm::tan(glm::radians(fieldOfView)))-r.origin());
  
	 	 	 //Will hold object with closest intersection
	 	 	 Object* currentObject = NULL;

	 	 	 //Will hold solution of ray-object intersection
	 	 	 float closestHit = std::numeric_limits<float>::infinity(), newHit = std::numeric_limits<float>::infinity();

	 	 	 //For each object...
	 	 	 for(int i=0; i<nrOfObjects; i++)
	 	 	 {
	 	 	 	 //If ray intersect object...
	 	 	 	 if(objects[i]->intersection(r, newHit))
	 	 	 	 {
	 	 	 	 	 //If intersection is closer then previous intersection
	 	 	 	 	 if(newHit<closestHit)
	 	 	 	 	 {
	 	 	 	 	 	 //Update closest intersection and corresponding object
	 	 	 	 	 	 closestHit = newHit;
	 	 	 	 	 	 currentObject = objects[i];
	 	 	 	 	 }
	 	 	 	 }
	 	 	 }

	 	 	 //If an object has been intersected...
	 	 	 if(currentObject != NULL)
	 	 	 {
	 	 	 	 //Get intersection point
	 	 	 	 glm::vec3 intersectionPoint = r.origin()+closestHit*r.direction();

	 	 	 	 //Get light direction and normal
	 	 	 	 glm::vec3 lightDirection = glm::normalize(light1.position()-intersectionPoint);
	 	 	 	 glm::vec3 normal = glm::normalize(currentObject->normal(intersectionPoint));
  
	 	 	 	 //Factor affecting reflected diffuse light
	 	 	 	 float LdotN = glm::clamp(glm::dot(lightDirection, normal), 0.0f, 1.0f);

	 	 	 	 //Get diffuse and ambient color of object
	 	 	 	 ambient = currentObject->diffuse()*light1.ambient();
	 	 	 	 diffuse =  currentObject->diffuse()*LdotN*light1.diffuse();
  
	 	 	 	 //Final color value of pixel
	 	 	 	 glm::vec3 RGB = ambient+diffuse;
  
	 	 	 	 //Make sure color values are clamped between 0-255 to avoid artifacts
	 	 	 	 RGB = glm::clamp(RGB*255.0f, 0.0f, 255.0f);

	 	 	 	 //Set color value to pixel
	 	 	 	 testImage.setPixel(x, y, RGB);
	 	 	 }
	 	 	 else
	 	 	 {
	 	 	 	 //No intersection, set black color to pixel.
	 	 	 	 testImage.setPixel(x, y, glm::vec3(0.0f));
	 	 	 }
	 	 }
	 }

	 //Write out the image
	 testImage.writeImage();
}
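
For reference, here is a minimal sketch of the more conventional setup, where the full vertical field of view is passed in and halved inside a single helper (the helper and its parameter names are made up for illustration; it assumes the same glm conventions as the code above):

#include <glm/glm.hpp>

// Build a camera-space ray direction for the pixel at (x, y).
// fovY is the FULL vertical field of view in degrees; the image plane
// sits at z = -1, so its half-height is tan(fovY / 2).
glm::vec3 makePrimaryRayDir(int x, int y, int width, int height, float fovY)
{
	float aspect = width / (float)height;
	float halfHeight = glm::tan(glm::radians(fovY * 0.5f));

	// Map pixel centers to [-1, 1], with +y pointing up.
	float ndcX = (2.0f * (x + 0.5f) / width) - 1.0f;
	float ndcY = 1.0f - (2.0f * (y + 0.5f) / height);

	return glm::normalize(glm::vec3(ndcX * aspect * halfHeight,
	                                ndcY * halfHeight,
	                                -1.0f));
}

Written this way there is only one place that decides whether the angle is the full or the half field of view, which makes it harder to end up with an effective FOV that is much larger than intended.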

I do want to mention that I have been playing around a bit with the parameters involved in my code. I tried putting the position of the ray origin/camera (CoP) further away from the image plane. As expected the scene zooms in, since the field of view effectively becomes smaller. This obviously did not solve anything, so I increased the field of view parameter (to neutralize the zoomed-in effect). Doing so removed the distortion (see the second image), but this was just a temporary solution: when I moved the blue sphere further out from the center of the scene it was stretched again with the new settings (see the third image).

These are the settings used for the three images:

Image 1:
float fieldOfView = 60.0f
Ray r(glm::vec3(0.0f, 0.0f, 0.0f), glm::vec3(0.0f));

Image 2:
float fieldOfView = 85.0f
Ray r(glm::vec3(0.0f, 0.0f, 20.0f), glm::vec3(0.0f));

Image 3 (same settings as image 2, with the blue sphere moved further from the center):
float fieldOfView = 85.0f
Ray r(glm::vec3(0.0f, 0.0f, 20.0f), glm::vec3(0.0f));

Again, I am totally lost as to how to solve this problem. I don't even understand why the new settings I tried removed the distortion (temporarily). I realize some distortion will happen regardless, but this seems extreme. Furthermore, the fact that I had to increase my FOV to such a high value gives me a bad feeling and makes me suspect my code might be at fault somewhere. Any help and explanation will be appreciated.

Attached Thumbnails

  • image1.png
  • image2.png
  • image3.png



#2 Álvaro   Crossbones+   -  Reputation: 13912


Posted 15 December 2012 - 08:35 PM

The effect is a mismatch between the FOV you used to generate the image and the conditions under which you are looking at the image. If you move your eye close enough to the screen, the projection will seem correct. The way I am sitting in front of my laptop right now, the horizontal span of the screen covers an angle of about 24 degrees from where I stand. If your FOV is much larger than that, I will see the distortion.
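
As a rough illustration (the numbers here are just an assumption): a screen 50 cm wide viewed from 60 cm away spans about 2 * atan(25 / 60) ≈ 45 degrees horizontally, so an image rendered with a 90- or 120-degree FOV will look noticeably stretched towards its edges under those viewing conditions.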


Exactly what FOV to use is for you to decide. If you are trying to create photo-realistic images, perhaps you should use FOVs that are typical for photography.

#3 Suen   Members   -  Reputation: 160


Posted 15 December 2012 - 10:18 PM

The effect is a mismatch between the FOV you used to generate the image and the conditions under which you are looking at the image. If you move your eye close enough to the screen, the projection will seem correct. The way I am sitting in front of my laptop right now, the horizontal span of the screen covers an angle of about 24 degrees from where I stand. If your FOV is much larger than that, I will see the distortion.


Thanks for the explanation, but I don't quite understand how to connect everything. Basically, you mean to say that this distortion is pretty much dependent on my viewing conditions? Wouldn't that mean, if I understood it correctly, that the distance between me and the PC screen and the angle it occupies in my field of view affect whether I see the image as distorted or not? And a stupid question on top of that: how would I account for it in my code?

Exactly what FOV to use is for you to decide. If you are trying to create photo-realistic images, perhaps you should use FOVs that are typical for photography.


What are some common FOVs for photography? And assuming I would use such FOVs, wouldn't the image still possibly appear distorted to me on my PC screen, since that depends on my viewing conditions?

#4 Álvaro   Crossbones+   -  Reputation: 13912


Posted 16 December 2012 - 06:48 AM


The effect is a mismatch between the FOV you used to generate the image and the conditions under which you are looking at the image. If you move your eye close enough to the screen, the projection will seem correct. The way I am sitting in front of my laptop right now, the horizontal span of the screen covers an angle of about 24 degrees from where I stand. If your FOV is much larger than that, I will see the distortion.


Thanks for the explanation, but I don't quite understand how to connect everything. Basically, you mean to say that this distortion is pretty much dependent on my viewing conditions?

Yes, that's correct.

Wouldn't that mean, if I understood it correctly, that the distance between me and the PC screen and the angle it occupies in my field of view affect whether I see the image as distorted or not?

You got it again.

And a stupid question on top of that: how would I account for it in my code?

You can't. In some sense, it's not your code's problem.


Exactly what FOV to use is for you to decide. If you are trying to create photo-realistic images, perhaps you should use FOVs that are typical for photography.


What are some common FOVs for photography?

This Wikipedia page can give you some idea.

And assuming I would use such FOVs, wouldn't the image still possibly appear distorted to me on my PC screen, since that depends on my viewing conditions?

Yes, and that's a problem shared with photography or movies. But we are so used to looking at photographs and movies that I don't think anyone would complain.
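
To put a rough (assumed) number on it: a classic 50 mm lens on a 36 mm wide full-frame sensor has a horizontal FOV of 2 * atan(36 / (2 * 50)) ≈ 40 degrees, and most "normal" lenses land somewhere around 40-55 degrees, which is far narrower than what ray tracers are often set up with.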

#5 Suen   Members   -  Reputation: 160


Posted 16 December 2012 - 08:34 AM





So my conclusion is that if I want to look at the rendered image with minimal distortion on my PC(!) I will need to tweak the parameters in my code until I find some kind of sweet spot (for example, the settings of the second image, where the distortion is not that noticeable under my viewing conditions)?

That does kind of explain why I can't account for it in my code, since viewing conditions vary from one PC setup to another and accounting for all of them would be ridiculous.

But how does this work in the game industry then? Whether you are playing on a game console or on your PC, do developers follow a certain standard to determine the best FOV to use? By standard I mean that it might be (random example here) assumed that you sit two meters away from a TV of size X, which then covers Y degrees of your field of view.

#6 Álvaro   Crossbones+   -  Reputation: 13912


Posted 16 December 2012 - 12:35 PM

But how does this work in the game industry then?


The FOV used in a game depends primarily on the genre. FPS games need large FOVs to be playable, and distortion is expected. The first thing I found on Google describes how Valve does it. If you are making an RTS with an overhead camera (like Warcraft III), you can probably use a smaller FOV. Isometric projection can be seen as the limit of perspective projection as the FOV goes to 0, and even that is acceptable.

There is a Wikipedia page on FOV in video games, but it doesn't seem very informative.

#7 Bacterius   Crossbones+   -  Reputation: 9282


Posted 16 December 2012 - 02:09 PM

Remember that as the field of view tends to zero, you lose all sense of depth (as Alvaro said, the limit is isometric projection, which has no notion of depth). First person shooters need very good depth perception for aiming, so this takes precedence over edge distortions and a high field of view is thus preferred.



#8 Waterlimon   Crossbones+   -  Reputation: 2638


Posted 16 December 2012 - 02:55 PM

You could also treat the screen as something other than just a flat rectangle, I guess. If you treated it as a curved surface (imagine some five-monitor horizontal setup) it might produce an undistorted sphere even at the edge of your view (not sure if that works in practice).

Kind of like those panorama pics you can make with cameras and some phones.



#9 CryZe   Members   -  Reputation: 768


Posted 17 December 2012 - 02:13 AM

You could also treat the screen as something other than just a flat rectangle, I guess. If you treated it as a curved surface (imagine some five-monitor horizontal setup) it might produce an undistorted sphere even at the edge of your view (not sure if that works in practice).

Yes, that works in practice. It's kind of hard with a rasterizer, as you need pretty well tessellated geometry, but for a ray tracer it should be pretty easy to set up.
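
For what it's worth, here is a minimal sketch of what a curved (cylindrical, "panoramic") projection could look like in a ray tracer; the function name is made up and it assumes the same glm setup as the code earlier in the thread:

#include <glm/glm.hpp>

// Cylindrical ("panoramic") primary ray: the horizontal pixel coordinate is
// mapped to an angle around the vertical axis instead of to a point on a
// flat plane, so horizontal stretching no longer grows towards the edges.
glm::vec3 makeCylindricalRayDir(int x, int y, int width, int height,
                                float horizontalFovDeg)
{
	float halfFov = glm::radians(horizontalFovDeg * 0.5f);

	// Angle of this pixel column; -halfFov at the left edge, +halfFov at the right.
	float theta = ((2.0f * (x + 0.5f) / width) - 1.0f) * halfFov;

	// The vertical coordinate is still mapped linearly; the half-height is
	// chosen so that pixels stay roughly square on a unit-radius cylinder.
	float ndcY = 1.0f - (2.0f * (y + 0.5f) / height);
	float halfHeight = halfFov * (height / (float)width);

	return glm::normalize(glm::vec3(glm::sin(theta),
	                                ndcY * halfHeight,
	                                -glm::cos(theta)));
}

With this mapping, spheres near the left and right edges can stay much rounder, at the cost of straight lines appearing curved, which is the same trade-off panorama photos show.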

#10 Suen   Members   -  Reputation: 160


Posted 17 December 2012 - 03:26 PM




The FOV used in a game depends primarily on the genre. FPS games need large FOVs to be playable, and distortion is expected. The first thing I found on Google describes how Valve does it. If you are making an RTS with an overhead camera (like Warcraft III), you can probably use a smaller FOV. Isometric projection can be seen as the limit of perspective projection as the FOV goes to 0, and even that is acceptable. There is a Wikipedia page on FOV in video games, but it doesn't seem very informative.

Remember that as the field of view tends to zero, you lose all sense of depth (as Alvaro said, the limit is isometric projection, which has no notion of depth). First person shooters need very good depth perception for aiming, so this takes precedence over edge distortions and a high field of view is thus preferred.

This sounds interesting and is nothing I had really thought about. I always thought that the only thing the field of view does is act as a scaling factor (a common example would be the perspective projection matrix used in OpenGL etc.). A smaller FOV, from what I understand, would mean that my objects get scaled up, which is similar to zooming in with a camera. How does this affect the depth perception of the scene?

You could also treat the screen as something other than just a flat rectangle, I guess. If you treated it as a curved surface (imagine some five-monitor horizontal setup) it might produce an undistorted sphere even at the edge of your view (not sure if that works in practice). Kind of like those panorama pics you can make with cameras and some phones.

Yes, that works in practice. It's kind of hard with a rasterizer, as you need pretty well tessellated geometry, but for a ray tracer it should be pretty easy to set up.

This is a proposal I've seen in a few places which could solve the problem. I believe I read about it in another thread here on GameDev while searching for information about my problem, where the same idea was suggested, but according to the people in that thread a curved surface would result in other artifacts in the scene instead. Still, implementing a curved surface just to see the difference between it and my current setup would be interesting, except I have no idea where to even start, let alone how I would implement one. I will have to search more about it; relevant links would be appreciated.

I've managed to minimize the effect of the distortion somewhat (well, it's really only minimized for specific settings). The biggest problem, which I had failed to notice, was that the camera position was way too close to the scene objects. This caused the rays to spread at a really wide angle for objects close to the edge, which just increased the distortion the closer the camera got to them. Moving the camera and the view plane further away from the objects lessened the distortion significantly. Of course, the problem then was that I was "zooming out" and my objects became smaller, so I used the field of view to correct that by decreasing it. I'm not sure if this is the correct approach, but it did give me less distortion (see image 1).
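
(As a sanity check on that trade-off: for an object centered on the camera axis, its apparent size depends on d * tan(halfFov), the visible half-width of the view at the object's distance, so moving the camera from distance d1 to d2 keeps the object the same apparent size when d1 * tan(halfFov1) = d2 * tan(halfFov2). Take that as a rough rule of thumb rather than something from my code.)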

I think it's fine to ask this here instead of creating a new topic. If you look at the image rendered by my ray tracer, you can see that I've only done diffuse shading. What bothers me is how bright the spheres become in the area where they receive the most light. The change is quite noticeable; basically it looks like I've added one big white spot to each sphere, and this effect seems to get more visible the smaller the sphere is (which I suppose is expected). I realize much of this happens because I've added an ambient component to my calculations, but it still seems somewhat severe. I was thinking that accounting for light attenuation might change this, but I'm not entirely sure, although I plan to implement it regardless. Interestingly enough, I compared these rendered spheres with a sphere I had rendered before using OpenGL and GLSL (see image 2), again with diffuse shading plus an ambient component. The result from OpenGL is what I would expect, as the color smooths out much better there. Of course, the rendering process differs between the two, but the results caught my attention. Either way, it made me wonder how the same shading method could produce such different results. - Fixed, see below

Edited by Suen, 29 December 2012 - 10:29 AM.


#11 Suen   Members   -  Reputation: 160


Posted 28 December 2012 - 09:13 PM

The above was fixed by tweaking some of the values (for now, at least). I've hit another problem (unrelated to the topic), and I'm not sure a whole new thread is needed since I've already created this one. Either way, I'll give it a try:

I'm trying to render triangles. For the moment I've just tried rendering one triangle using a geometric solution (inefficient, but it will have to do for now as I'm just experimenting). I create a triangle consisting of three vertices, then perform a ray-triangle intersection by first checking whether the ray intersects the plane in which the triangle lies, and then whether the intersection point lies within the triangle or not. The code for this is below, along with the code that renders my scene.

The problem I have is that the z-values seem to be flipped when I translate my triangle. If I transform the triangle from its model space to world space with a simple translation (no rotation involved), I expect +20 on the z-axis to bring the triangle closer to the camera, yet the result is that the triangle is placed further away from the camera. Using -20 brings the triangle closer. I thought my matrix transformations might be wrong here, so I specified the coordinates of the triangle's vertices in its local (model) space with different z-values to see if I would get the same result, and I did.

 

Update: After going through my code again I noticed I was calculating the constant D of the plane equation incorrectly in my triangle intersection code. It should be D = -(N · V), where N is the normal of the plane and V is any of the triangle's vertices. I had forgotten the minus sign. It works as it should now.
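
For anyone running into the same sign issue, here is a minimal sketch of the plane part of the geometric ray-triangle test (the function and variable names are made up; glm as in the earlier code). The sign of D is exactly the detail mentioned above:

#include <glm/glm.hpp>

// Intersect a ray (origin O, direction d) with the plane of the triangle
// (v0, v1, v2). Returns false if the ray is (nearly) parallel to the plane
// or the hit lies behind the origin; otherwise writes the ray parameter to t.
bool intersectTrianglePlane(const glm::vec3& O, const glm::vec3& d,
                            const glm::vec3& v0, const glm::vec3& v1,
                            const glm::vec3& v2, float& t)
{
	glm::vec3 N = glm::normalize(glm::cross(v1 - v0, v2 - v0));

	float NdotD = glm::dot(N, d);
	if (glm::abs(NdotD) < 1e-6f)
		return false; // ray (nearly) parallel to the plane

	// Plane equation: N.P + D = 0, so D = -(N . v0). Note the minus sign.
	float D = -glm::dot(N, v0);

	// Substitute P = O + t*d into the plane equation and solve for t.
	t = -(glm::dot(N, O) + D) / NdotD;
	return t > 0.0f;
}

The "is the hit inside the triangle" test (for example the edge cross-product method) would then use the point O + t*d.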
 



#12 Bacterius   Crossbones+   -  Reputation: 9282


Posted 03 January 2013 - 06:50 AM

This sounds interesting and is nothing I had really thought about. I always thought that the only thing the field of view does is act as a scaling factor (a common example would be the perspective projection matrix used in OpenGL etc.). A smaller FOV, from what I understand, would mean that my objects get scaled up, which is similar to zooming in with a camera. How does this affect the depth perception of the scene?

 

 

I'm not sure how to explain it convincingly, but it does. Here is a rather striking example (higher focal length = lower field of view). Perhaps the easiest way to think of it is to realize that depth is scaled along with width and height, which makes it more difficult to judge distance based on perspective; at a very small field of view, everything seems to be at the same depth. Another way of thinking about it: if a lower field of view is just "zooming in" on a higher-field-of-view picture, then everything in the zoomed-in image is closer to the vanishing point (just by virtue of zooming in, since for an ordinary perspective projection the vanishing point is simply the center of the image), which also reduces effective depth perception.
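
A concrete (made-up) numerical example of that: to frame a 1-unit-wide subject the same way, a camera with a 90-degree FOV sits about 0.5 units away, while one with a 10-degree FOV sits about 5.7 units away. Something 1 unit behind the subject then appears at roughly 0.5/1.5 ≈ 33% of the subject's scale in the wide shot, but at roughly 5.7/6.7 ≈ 85% in the narrow one, so the relative-size depth cue is much weaker at low FOV.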






