
Camera for Raytracing


#1 StanLee   Members   -  Reputation: 117


Posted 13 March 2013 - 10:48 AM

Hello,

 

I am working on a raytracer at the moment and have come across some issues with the camera model. I just can't seem to get the calculation of the direction vectors of my rays right.

 

Let's say we are given an image resolution of resX x resY, a camera position pos, an up vector up, a viewing direction dir (the direction in which the camera is looking), and a horizontal and vertical field of view fovX and fovY. From these values I can calculate the focal length and thus the width of my pixels:

tan(fovX / 2) = (resX / 2) / f   =>   f = (resX / 2) / tan(fovX / 2)   (with pixels of unit size)

 

But when I do the same for the height, I get a different focal length:

tan(fovY / 2) = (resY / 2) / f   =>   f = (resY / 2) / tan(fovY / 2)

 

There must be something I am missing, because the focal length determines the distance between the camera position and the image plane, and thus has to be unique.

But suppose I do have one unique focal length; then the direction of the ray through screen coordinates (x, y) should be given by this formula:

ray(x, y) = normalize( f * dir + (x - resX/2 + 0.5) * right + (resY/2 - y - 0.5) * up ),   with right = normalize(cross(dir, up))

I offset the direction vector in such a way that it always passes through the center of a pixel.
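
In code, what I have in mind looks roughly like this (a condensed sketch, not my actual source; Vec3 and the helper names are mine, and I assume dir and up are normalized and orthogonal to each other):

#include <cmath>

struct Vec3 { double x, y, z; };

Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 operator*(double s, Vec3 v) { return {s * v.x, s * v.y, s * v.z}; }
Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y,
            a.z * b.x - a.x * b.z,
            a.x * b.y - a.y * b.x};
}
Vec3 normalize(Vec3 v) {
    double len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return {v.x / len, v.y / len, v.z / len};
}

// Direction of the ray through the center of pixel (x, y), with unit-sized
// pixels and the focal length f measured in the same (pixel) units.
Vec3 rayDirection(int x, int y, int resX, int resY,
                  Vec3 dir, Vec3 up, double f)
{
    Vec3 right = normalize(cross(dir, up));   // camera's x axis
    double px = (x + 0.5) - resX / 2.0;       // horizontal offset from the image center
    double py = resY / 2.0 - (y + 0.5);       // vertical offset (screen y grows downward)
    return normalize(f * dir + px * right + py * up);
}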

 

Unfortunately, applying this to my raytracer yields absolutely no results. :(





#2 Bacterius   Crossbones+   -  Reputation: 8278


Posted 13 March 2013 - 06:19 PM

I think what the diagrams show are the x focal length and the y focal length, such that the actual focal length should be equal to sqrt(fx^2 + fy^2), by the Pythagorean theorem. Though I don't fully understand the diagrams.

 

What I like to do is calculate only the corners of the focal plane, and interpolate the focal point for an arbitrary pixel using bilinear interpolation. It's clearer and a bit faster. Then the ray direction is just the focal point minus the camera position (normalized, of course). You might find some of my code useful: this, along with this.
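
Roughly, it looks like this (just a sketch with made-up names, not the exact code from the links above; it reuses the Vec3 helpers from the snippet earlier in the thread and adds subtraction):

Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 lerp(Vec3 a, Vec3 b, double t) { return (1.0 - t) * a + t * b; }

struct Camera {
    Vec3 pos;
    Vec3 topLeft, topRight, bottomLeft, bottomRight;  // corners of the focal plane
};

Camera makeCamera(Vec3 pos, Vec3 dir, Vec3 up, double fovX, double fovY) {
    Vec3 right = normalize(cross(dir, up));
    // Half-extents of the focal plane, placed at unit distance along dir.
    double hw = std::tan(fovX / 2.0);
    double hh = std::tan(fovY / 2.0);
    Vec3 center = pos + dir;
    return {pos,
            center - hw * right + hh * up,   // top left
            center + hw * right + hh * up,   // top right
            center - hw * right - hh * up,   // bottom left
            center + hw * right - hh * up};  // bottom right
}

// Focal point for normalized screen coordinates u, v in [0, 1], measured
// from the top left corner; the ray direction falls out of it directly.
Vec3 rayThrough(const Camera& cam, double u, double v) {
    Vec3 top    = lerp(cam.topLeft, cam.topRight, u);
    Vec3 bottom = lerp(cam.bottomLeft, cam.bottomRight, u);
    Vec3 focalPoint = lerp(top, bottom, v);
    return normalize(focalPoint - cam.pos);  // focal point minus camera position
}

The nice part is that makeCamera runs once per frame, so the per-pixel work is just three lerps and a normalize.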





#3 StanLee   Members   -  Reputation: 117


Posted 14 March 2013 - 04:09 AM

Thanks a lot! Your code helped me figure out that I totally forgot to normalize the scale of my image plane.
Before normalization my image plane was centered around (0, 0, 500) with the top left corner at (-512, 384, 500) in world space, so without adjusting the object positions and sizes absolutely nothing could be seen.
Now I use your proposed equation for the focal length f:

f = sqrt(fx^2 + fy^2)

Everything works fine now, though I still have a question. When I choose a wide angle for the horizontal field of view, say 120°, spheres become oblong near the borders of the screen. Is this the so-called "fisheye" effect caused by the wide field of view?



#4 Bacterius   Crossbones+   -  Reputation: 8278


Posted 14 March 2013 - 04:20 AM

Everything works fine now, though I still have a question. When I choose a wide angle for the horizontal field of view, say 120°, spheres become oblong near the borders of the screen. Is this the so-called "fisheye" effect caused by the wide field of view?

Yes, a flat image plane doesn't work well at wide fields of view because it causes extreme distortion at the edges. In my experience a flat image plane works best for fields of view between 30 and 80 degrees. For such wide-angle shots you'll want a different kind of camera, probably spherical or fisheye (distortion is still unavoidable, but at least it's the kind you expect).
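
For illustration, a spherical camera can be as simple as something like this (a sketch with an equirectangular-style mapping, reusing the Vec3 helpers from earlier in the thread; the names and the exact mapping are mine):

// Each pixel maps to a pair of angles rather than a point on a flat plane,
// so a wide field of view no longer stretches objects toward the edges
// (straight lines bend instead, which is the distortion you expect).
Vec3 sphericalRay(double u, double v,   // normalized screen coords in [0, 1]
                  Vec3 dir, Vec3 up, double fovX, double fovY)
{
    Vec3 right = normalize(cross(dir, up));
    double theta = (u - 0.5) * fovX;    // azimuth, swept across fovX
    double phi   = (0.5 - v) * fovY;    // elevation, swept across fovY
    return normalize(std::cos(phi) * (std::cos(theta) * dir + std::sin(theta) * right)
                   + std::sin(phi) * up);
}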






