How to generate rays in a simple ray tracer/pin hole camera

Started by
6 comments, last by mahendragr 13 years, 11 months ago
Hello guys, sorry if this is a repost, but I didn't find a proper solution/answer to my question. I want to generate rays for my ray tracer. I have the following in hand:

1. the up/right/view vectors of my camera
2. the origin of my camera
3. (x, y), the pixel coordinates
4. the width and height of my image plane
5. the distance from the camera to the image plane

I know I should use the field of view, but I'm not able to figure it out. I can set the origin of the ray with ray.origin(cam_origin), but how do I set the direction of the ray?
One way is to calculate the top-left corner of the view plane in camera space and then:

-for each sample (i.e. each pixel) you interpolate through the plane.
-build your ray using normalize(pt - camera_pos), where pt is the point computed in the first step.
-transform the ray from camera space to world space (i.e. using the inverse of the camera transformation matrix).
CRay
CPinholeCamera::GenerateRay(RealType x, RealType y, RealType rWidth, RealType rHeight) const
{
    /*
    (right, up, view) vectors are given;
    m_rFocalLength contains the distance from the eye point to the image plane
    */
    const double pi = 3.14159265;
    CRay ray;
    RealType fovx, fovy;
    RealType u, v;
    VectorType3 dir;

    ray.SetOrigin(m_v3Eye);
    fovx = pi / 4;
    fovy = rHeight / rWidth * fovx;

    u = ((2 * x) - rWidth) / rWidth * tan(fovx);
    v = ((2 * y) - rHeight) / rHeight * tan(fovy);
    dir = (); // what should dir be here?
    dir.normalize();
    ray.SetDir(dir);

    return ray;
}


Thanks for your reply. I pasted my code above to show where I'm stuck.
Hey mahendragr,

I guess from the "Thanks for your reply..here is my code.."
that you're not completely clear on what cignox1 said.

Basically, cignox1 suggests that you use the points of the image plane
(a rectangle, actually) and interpolate their positions according to your
(x, y) coords. For instance, xyz_vector( (x/width) * points_to_the_right ).

If you can already draw bilinearly shaded polygons, I suggest you do this instead:

What I do when I need rectilinear rays is paint the inside of a cube
where the colors represent ray directions, meaning you get black, red, green, blue, purple, cyan, yellow and white corners.

This is drawn to an image (same dimensions as your front buffer), where the (x, y)
coords of the pixels can easily sample the ray direction.
In OpenGL, you'd associate a texture with this buffer so you can sample it. It's probably much the same in D3D.

Naturally, you need to subtract 0.5 from each color component, so that backward-fired rays actually move instead of standing still.

This will only give you a rectilinear lens, and in its simplest form it's
much less flexible than computing the rays in the raycaster itself.

Hope this helps
Sorry, my intention in pasting the code was to make myself clear. I don't see a reason to interpolate. I JUST want to return a single ray (knowing the camera properties, the pixel coordinates, etc.).
Well, if you don't feel like interpolating, and you only need a single ray,
I say go with the camera direction for that single ray ;-)

No, really, I must be missing something. Don't you want a ray for every (x, y)
on the screen?

Without interpolating, or reading from a buffer of precomputed ray directions, I'm not sure what I'd do.

Do you understand what I mean by interpolating between the points of the view plane in accordance with your screen-space coordinates?
Dare I say it... casting a single ray could completely solve real-time raytracing.

It would be borderline lying, but that pixel would be beautifully raytraced.

@superVGA

Yes, I got you. The reason I don't want to generate all the rays at once is that I'm trying to write a function that returns one ray and is called inside a loop in some other function.

This topic is closed to new replies.
