ray tracing questions

Started by Nevadaes
14 comments, last by Medium9 14 years, 2 months ago
Hey all, I'm starting to have more and more interest in ray tracing. I've read some things and I have some questions:

1. Is ray tracing the process of creating 2D images or creating more realistic visual effects? I mean, does it only calculate things with rays and print them on screen, or does the scene have to have a few polygons, with rays then cast to get effects?
2. How does a ray tracer work exactly?
3. What's the difference between "ray casting" and "ray tracing", if there is any?

Thanks in advance for your answers.
Quote:Original post by Nevadaes
1. Is ray tracing the process of creating 2D images or creating more realistic visual effects? I mean, does it only calculate things with rays and print them on screen, or does the scene have to have a few polygons, with rays then cast to get effects?

Raytracing is a way of producing a 2D image out of a 3D scene. It's an alternative to rasterization. It *can* be used for calculating realistic visual effects, but it doesn't have to be.

Quote:
2. How does a ray tracer work exactly?

In its most primitive form, a ray tracer sends out a ray for every pixel of the screen. Based on the intersection of this ray with the scene (or lack thereof), a color is computed for that pixel.
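To make that concrete, here is a minimal sketch of generating a per-pixel primary ray for a pinhole camera at the origin looking down -z. The function name, field of view, and coordinate conventions are all illustrative assumptions, not anything from the posts above:

```python
import math

def primary_ray(px, py, width, height, fov_deg=60.0):
    # Map pixel (px, py) to a unit ray direction through the image plane.
    # Camera sits at the origin looking down -z; fov_deg is the vertical FOV.
    aspect = width / height
    scale = math.tan(math.radians(fov_deg) * 0.5)
    # Pixel centre -> normalized device coordinates in [-1, 1].
    x = (2.0 * (px + 0.5) / width - 1.0) * aspect * scale
    y = (1.0 - 2.0 * (py + 0.5) / height) * scale
    length = math.sqrt(x * x + y * y + 1.0)
    return (x / length, y / length, -1.0 / length)
```

The center pixel of the image maps straight down the view axis, and pixels toward the edges fan outward, which is exactly the "one ray per pixel" idea.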

Quote:
3. What's the difference between "ray casting" and "ray tracing" if there is any?


Google turns up this result.
Quote: 1. Is ray tracing the process of creating 2D images or creating more realistic visual effects? I mean, does it only calculate things with rays and print them on screen, or does the scene have to have a few polygons, with rays then cast to get effects?


Raytracing is an algorithm to render 3D scenes to images. Simply put, this means that you provide a 3D scene as input to your raytracer, and the result is a 2D image which represents your scene, much in the same way as you would take a photograph of something (in fact, the process of taking a photograph more or less inspired the raytracing algorithm).

Quote: 2. How does a ray tracer work exactly?


This is a question that is not easily answered, since the answer would have to cover a lot of aspects. On a high level, a simple raytracer would consist of the following steps:

For each pixel in the resulting image:
1. Calculate a new ray (in world coordinates).
2. For each object in the scene:
   2.1 Check if the ray intersects the object.
       2.1a If yes: calculate the object color at the point of intersection.
       2.1b If no: return the background color.
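The steps above can be sketched in Python. The `intersect` and `color_at` methods on scene objects are assumed interfaces for illustration, not part of any real API:

```python
def trace_pixel(ray_origin, ray_dir, scene, background=(0, 0, 0)):
    # Steps 2-2.1 of the outline: find the closest object the ray hits,
    # return its color, or the background color on a miss.
    # `scene` is assumed to be a list of objects exposing an
    # intersect(origin, direction) method (hit distance or None) and a
    # color_at(point) method -- both illustrative names.
    closest_t, closest_obj = None, None
    for obj in scene:
        t = obj.intersect(ray_origin, ray_dir)
        if t is not None and (closest_t is None or t < closest_t):
            closest_t, closest_obj = t, obj
    if closest_obj is None:
        return background  # step 2.1b: nothing hit
    # Step 2.1a: compute the point of intersection and shade it.
    hit = tuple(o + closest_t * d for o, d in zip(ray_origin, ray_dir))
    return closest_obj.color_at(hit)
```

Tracking the *closest* hit rather than the first one found is the detail the flat outline glosses over: objects can occlude each other, so every object must be tested before a color is chosen.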


If you want to know more about the raytracing algorithm, I suggest you google a bit. There are lots of websites that explain the basics to get started. This website provides a decent introduction : http://www.siggraph.org/education/materials/HyperGraph/raytrace/rtrace0.htm

Quote: 3. What's the difference between "ray casting" and "ray tracing" if there is any?


The high-level algorithm above is in fact a raycaster, since that algorithm only calculates the "primary" rays going from the camera into the scene.

With raytracing, the algorithm wouldn't stop at step 2.1a. Instead, it would calculate a new ray, which starts at the intersection point with the object and goes back into the scene. This "secondary" ray can then hit another object, and from there another ray can be calculated, and so forth. Raytracing is thus a recursive algorithm, whereas raycasting is not. This recursion is necessary to render cool effects like caustics, shadows, reflection, refraction, etc.
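A hedged sketch of that recursion in Python: `scene.closest_hit` is a hypothetical helper, and the linear blend by `reflectivity` is just one simple way to combine the local and bounced colors:

```python
def reflect(d, n):
    # Mirror the incoming direction d about the unit surface normal n.
    dot = sum(a * b for a, b in zip(d, n))
    return tuple(di - 2.0 * dot * ni for di, ni in zip(d, n))

def trace(origin, direction, scene, depth=0, max_depth=3):
    # Recursive form of step 2.1a: on a hit, spawn a secondary ray.
    # scene.closest_hit is an assumed helper returning
    # (point, normal, local_color, reflectivity) or None on a miss.
    if depth > max_depth:
        return (0, 0, 0)  # cut the recursion off eventually
    hit = scene.closest_hit(origin, direction)
    if hit is None:
        return (0, 0, 0)  # background
    point, normal, local_color, reflectivity = hit
    bounced = trace(point, reflect(direction, normal), scene,
                    depth + 1, max_depth)
    # Blend the surface's own color with whatever the bounce saw.
    return tuple(lc * (1.0 - reflectivity) + bc * reflectivity
                 for lc, bc in zip(local_color, bounced))
```

The `max_depth` limit matters: two facing mirrors would otherwise recurse forever, so real tracers cap the bounce count (or stop once a ray's contribution becomes negligible).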
First of all, read this.

For the basic definition of raytracing, a scene is not required to be built with polygons: any data structure that can be tested for intersection against a ray can be used with raytracing (mathematical surfaces, voxels, fractals and so on). Most of the time you also need to calculate normals (for lighting) and uv coordinates (for texturing).
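As an example of such a non-polygonal primitive, a sphere given by its center and radius can be intersected analytically. A minimal sketch, assuming the ray direction is unit length:

```python
import math

def intersect_sphere(origin, direction, center, radius):
    # Closest positive hit distance along the ray, or None on a miss.
    # Solves |origin + t*direction - center|^2 = radius^2, a quadratic
    # in t; with a unit-length direction the leading coefficient is 1.
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None  # the ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2.0  # nearer of the two roots
    return t if t > 0.0 else None
```

This is why spheres show up in virtually every introductory raytracer: the intersection is a closed-form quadratic, with no mesh needed at all.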

In short:
1) Shoot a ray through a pixel toward the scene. Does it intersect a surface? If yes, go to 2; else simply return the background color.

2) An intersection has been found. For each light, shoot a ray starting from the intersection point toward the light, and test it against the scene. Does it intersect an object? If yes, do nothing and move to the next light (the ray won't contribute, as an object is casting a shadow). If nothing is between the point and the light, go to 3.

3) You must basically handle 3 components: diffuse, specular and transmission. Some of the light will be reflected all around the intersection point, some in the specular direction (mirror) and some will be transmitted through the surface (transparency). Each of these effects has its own set of formulas. For both reflection and transmission you must shoot another ray (as you did for the primary one) and add the resulting colors to the diffuse value.
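Steps 2 and 3 (the diffuse part only) can be sketched like this. Whether the shadow ray is blocked is passed in as a precomputed flag here, and all names are illustrative:

```python
import math

def lambert(point, normal, light_pos, light_color, occluded):
    # Diffuse contribution of one light at a shaded point (step 2 plus
    # the diffuse part of step 3). `occluded` says whether the shadow
    # ray from `point` toward the light hit anything on the way.
    if occluded:
        return (0.0, 0.0, 0.0)  # the light is shadowed, no contribution
    to_light = tuple(l - p for l, p in zip(light_pos, point))
    length = math.sqrt(sum(c * c for c in to_light)) or 1.0
    # Lambert's cosine law: scale by the angle between normal and
    # light direction (clamped so back-facing light contributes zero).
    cos_theta = max(0.0, sum(n * c / length
                             for n, c in zip(normal, to_light)))
    return tuple(lc * cos_theta for lc in light_color)
```

The specular and transmission components from step 3 would each spawn their own secondary ray on top of this; the diffuse term is the only one that needs no recursion.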

This is the most basic form of RT, often called "Whitted style", but there is much more.
So, if I understood correctly, a ray tracer based engine would make "fake 3D", if I can sum it up like that, right? It would read data stored in, for example, input files and use rays to determine the color each pixel should be drawn to the screen. So, as you said, no polygonal rendering is needed, as it gives the illusion of 3D with a 2D image drawn by rays.

So if the ray tracer scans a voxel-based file, it would be able to draw in 2D a 3D representation of the file.

Is that how it works, or did I misunderstand something here?
Quote:Original post by Nevadaes
So, if I understood correctly, a ray tracer based engine would make "fake 3D", if I can sum it up like that, right? It would read data stored in, for example, input files and use rays to determine the color each pixel should be drawn to the screen. So, as you said, no polygonal rendering is needed, as it gives the illusion of 3D with a 2D image drawn by rays.

So if the ray tracer scans a voxel-based file, it would be able to draw in 2D a 3D representation of the file.

Is that how it works, or did I misunderstand something here?


Yup.

Though when you think about it, it's "fake 3D" just as much as everything else displayed on your monitor. Games that use rasterization (which is the alternative to ray tracing: it transforms polygons directly onto the screen) are also "fake 3D".

A ray tracer can render non-polygonal objects, but it can also render polygonal objects by intersecting rays with their triangles. Essentially, anything a rasterizer can draw, a ray tracer can draw too (albeit more slowly; that's why rasterization is still the norm for games).
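For triangles specifically, a common analytic test is the Möller-Trumbore algorithm. A sketch, not taken from any particular engine:

```python
def intersect_triangle(origin, direction, v0, v1, v2, eps=1e-9):
    # Moller-Trumbore ray/triangle intersection: returns the hit
    # distance t along the ray, or None on a miss.
    def sub(a, b): return tuple(x - y for x, y in zip(a, b))
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])

    e1, e2 = sub(v1, v0), sub(v2, v0)
    pvec = cross(direction, e2)
    det = dot(e1, pvec)
    if abs(det) < eps:
        return None  # ray is parallel to the triangle's plane
    inv_det = 1.0 / det
    tvec = sub(origin, v0)
    u = dot(tvec, pvec) * inv_det
    if u < 0.0 or u > 1.0:
        return None  # outside the triangle along the first edge
    qvec = cross(tvec, e1)
    v = dot(direction, qvec) * inv_det
    if v < 0.0 or u + v > 1.0:
        return None  # outside the triangle along the second edge
    t = dot(e2, qvec) * inv_det
    return t if t > eps else None  # only count hits in front of the origin
```

Running this once per triangle of a mesh is exactly how a ray tracer draws the same polygonal models a rasterizer does; real tracers then add an acceleration structure (a BVH or kd-tree) so they don't have to test every triangle per ray.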
I fear you are a bit confused... what do you mean by "no polygonal rendering is needed as it gives the illusion of 3D with a drawn 2D image by rays"? Every 3D renderer draws a 2D image... what a raytracer does is render the 3D data (possibly the same data as your game of choice) using a different algorithm than rasterization.
If we really had to say which one is 'less fake', then I would say that RT is 'better' than rasterization, as it performs a rough simulation of how light travels while rasterization does not.

And there are yet more algorithms: path tracing, REYES, ray casting, just to name a few. They are used where they can give the most benefit: path tracing (well, not really, as it is too slow, but bidirectional path tracing or Metropolis light transport instead) where you need the highest quality; raytracing where you need a good compromise between quality and speed (or together with GI algorithms as in photon mapping or radiosity); REYES as used by Pixar in RenderMan; raycasting as in Doom and Doom 2; and so on. Some of them are closer than others to the true behaviour of light...
Ok I see, thanks for answering guys.

I've read that current GPUs cannot help with the calculations in a ray tracer and that only the CPU can do them. Is that true? And why, if it is?
Quote:Original post by Nevadaes
Ok I see, thanks for answering guys.

I've read that current GPUs cannot help with the calculations in a ray tracer and that only the CPU can do them. Is that true? And why, if it is?


GPUs can be used for ray tracers, no doubt. Google GPU-accelerated ray tracers and you'll land on many, many results. It's just that rasterization is implemented in hardware; that's what the GPU is literally built to do. It'd be even better for ray tracing if it were built with ray tracing in mind - for example, hardware-based intersection calculations, stuff like that.
Quote:Original post by nullsquared
Quote:Original post by Nevadaes
I've read that current GPUs cannot help with the calculations in a ray tracer and that only the CPU can do them. Is that true? And why, if it is?
GPUs can be used for ray tracers, no doubt. Google GPU-accelerated ray tracers and you'll land on many, many results. It's just that rasterization is implemented in hardware; that's what the GPU is literally built to do. It'd be even better for ray tracing if it were built with ray tracing in mind - for example, hardware-based intersection calculations, stuff like that.
The situation is actually somewhat worse than you have portrayed. Ray tracing is inherently a tree-traversal operation, where a single primary ray recursively splits into many rays with each intersection.

To implement this, you require branching logic, and GPUs are not really designed with branching in mind. The last few generations of GPUs do support branching in shaders, but it generally comes at a considerable performance penalty.

By comparison, general purpose CPUs have always been designed with efficient branching in mind, and they are pretty good at it.

Tristam MacDonald. Ex-BigTech Software Engineer. Future farmer. [https://trist.am]

