Raytracing & Perspective Questions

faculaganymede    145
Raytracing Gurus, I am new to ray tracing and 3D graphics. I am a little confused about ray tracing and perspective projection. I implemented a ray tracer as follows (in C++ from scratch, not using any 3D graphics API's): 1. The ray origin is at (0,0,0) in the sensor's viewing coordinate system, 2. The image plane is placed at distance d, perpendicular to sensor's line of sight (i.e. the line from sensor to an aim point in the 3D scene) 3. Everything (including vertices, normals, texture uv coordinates) in the scene is transformed to the sensor's viewing coordinate system 4. For each pixel (x,y) in the image (on image plane), a ray is created from (0,0,0) to (x,y,d), where z=d, the look-at direction 5. Each ray finds an intersection in the 3D scene (in the sensor viewing coordinate system), and a value is assigned to the associated image pixel 6. Thus, an image is formed and saved to an image file. This image does not need to be displayed on the screen, so I don't need to do any viewport clipping or screen mapping transformations (am I right!?) Please let me know if you find any errors so far. When the ray tracer is implemented as described, I thought the z=d value takes care of the perspective stuff (right!?). But, I am confused by what I read on http://cs.marlboro.edu/term/spring04/graphics/perspective_raytrace.txt. The author seems to show that it's necessary to convert each image pixel (x,y) to (u,v) using perspective equations, as shown below. Could someone please help explain?

Gluc0se    146
What you've described will handle the perspective projection just fine. I think what the author is describing there is the inverse of what you're doing, which is what the graphics pipeline does: he has a point in 3D space and wants to know which (u,v) pixel it 'lights up'. Your ray tracer, by contrast, shoots a ray through a (u,v) pixel and determines which object or point in 3D space it hits, to get that pixel's color.

Guest Anonymous Poster
Perspective calculations are more useful for scanline renderers. That said, using a scanline renderer for the first generation of rays (the primary, eye rays) can speed up the overall process.
