# ray tracing questions

## Recommended Posts

Hey all, I'm starting to have more and more interest in ray tracing. I've read some things and I have some questions:

1. Is ray tracing the process of creating 2D images, or of creating more realistic visual effects? I mean, does it only calculate things with rays and print them on screen, or does the scene have to have a few polygons, then cast the rays to get effects?
2. How does a ray tracer work, exactly?
3. What's the difference between "ray casting" and "ray tracing", if there is any?

Thanks in advance for your answers.

##### Share on other sites
Quote:
 Original post by Nevadaes: 1. Is ray tracing the process of creating 2D images, or of creating more realistic visual effects? I mean, does it only calculate things with rays and print them on screen, or does the scene have to have a few polygons, then cast the rays to get effects?

Raytracing is a way of producing a 2D image out of a 3D scene. It's an alternative to rasterization. It *can* be used for calculating realistic visual effects, but it doesn't have to be.

Quote:
 2. How does a ray tracer work, exactly?

In its most primitive form, a ray tracer sends out a ray for every pixel of the screen. Based on this ray's intersection with the scene (or lack thereof), a color is computed for that pixel.
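In a toy form, that per-pixel loop is only a few lines. Here is a minimal Python sketch (not from the post; the single hard-coded sphere and the pinhole camera are illustrative assumptions):

```python
import math

def intersect_sphere(origin, direction, center, radius):
    """Return the distance along the ray to the nearest hit, or None on a miss."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * c  # direction is assumed normalized, so the quadratic's a == 1
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2
    return t if t > 0 else None

def render(width, height):
    """Shoot one ray per pixel through a simple pinhole camera at the origin."""
    sphere_center, sphere_radius = (0.0, 0.0, -3.0), 1.0
    image = []
    for y in range(height):
        row = []
        for x in range(width):
            # Map the pixel to [-1, 1] camera-plane coordinates.
            u = (2 * (x + 0.5) / width) - 1
            v = 1 - (2 * (y + 0.5) / height)
            d = (u, v, -1.0)
            norm = math.sqrt(sum(k * k for k in d))
            d = tuple(k / norm for k in d)
            hit = intersect_sphere((0, 0, 0), d, sphere_center, sphere_radius)
            # Hit: red sphere; miss: black background.
            row.append((255, 0, 0) if hit else (0, 0, 0))
        image.append(row)
    return image
```

Everything interesting (shading, shadows, reflection) happens in how you turn the intersection into a color; this sketch just paints hits a flat red.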

Quote:
 3. What's the difference between "ray casting" and "ray tracing" if there is any?

##### Share on other sites
Quote:
 1. Is ray tracing the process of creating 2D images, or of creating more realistic visual effects? I mean, does it only calculate things with rays and print them on screen, or does the scene have to have a few polygons, then cast the rays to get effects?

Raytracing is an algorithm for rendering 3D scenes to images. Simply put, you provide a 3D scene as input to your raytracer, and the result is a 2D image that represents your scene, much in the same way as you would take a photograph of something (in fact, the process of taking a photograph more or less inspired the raytracing algorithm).

Quote:
 2. How does a ray tracer work, exactly?

This is a question that is not easily answered, since the answer would have to cover a lot of aspects. On a high level, a simple raytracer would consist of the following steps:

For each pixel in the resulting image:
1. Calculate a new ray (in world coordinates)
2. For each object in the scene:
   2.1 Check if the ray intersects the object
       2.1a If yes: calculate the object color at the point of intersection
       2.1b If no: return the background color
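The inner object loop above can be sketched in Python like this (the `FlatPlane` object and all names here are illustrative, not from the post):

```python
class FlatPlane:
    """Illustrative scene object: the plane z = z0, with a flat color."""
    def __init__(self, z0, color):
        self.z0, self.color = z0, color

    def intersect(self, origin, direction):
        """Return the ray parameter t of the hit, or None on a miss."""
        if direction[2] == 0:
            return None
        t = (self.z0 - origin[2]) / direction[2]
        return t if t > 0 else None

    def color_at(self, origin, direction, t):
        return self.color

def trace_primary(origin, direction, scene, background=(0, 0, 0)):
    """Steps 2 through 2.1b: test the ray against every object and return
    the color of the nearest hit, or the background color on a miss."""
    nearest_t, nearest_obj = float("inf"), None
    for obj in scene:
        t = obj.intersect(origin, direction)
        if t is not None and t < nearest_t:
            nearest_t, nearest_obj = t, obj
    if nearest_obj is None:
        return background
    return nearest_obj.color_at(origin, direction, nearest_t)
```

Note the "nearest hit" bookkeeping: with several objects in the scene, the ray must return the color of the closest one, not just the first one tested.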

If you want to know more about the raytracing algorithm, I suggest you google a bit. There are lots of websites that explain the basics to get you started. This website provides a decent introduction: http://www.siggraph.org/education/materials/HyperGraph/raytrace/rtrace0.htm

Quote:
 3. What's the difference between "ray casting" and "ray tracing" if there is any?

The high-level algorithm above is in fact a raycaster, since it only calculates the "primary" rays, going from the camera into the scene.

With raytracing, the algorithm wouldn't stop at step 2.1a. Instead it would calculate a new ray, which starts at the intersection point with the object and goes back into the scene. This "secondary" ray can then hit another object, and from there another ray can be calculated, and so forth. Raytracing is thus a recursive algorithm, whereas raycasting is not. This recursion is necessary to render cool effects like caustics, shadows, reflection, refraction, etc.
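A minimal sketch of that recursion in Python (everything here - the `flat_plane` helper, the color-mixing rule, the depth cutoff - is an illustrative assumption, not code from the thread):

```python
MAX_DEPTH = 3  # cutoff so two facing mirrors can't recurse forever

def flat_plane(z0, color, reflectivity):
    """Illustrative object: the plane z = z0. Returns (t, point, normal,
    color, reflectivity) for a hit, or None for a miss."""
    def hit(origin, direction):
        if direction[2] == 0:
            return None
        t = (z0 - origin[2]) / direction[2]
        if t <= 1e-6:  # epsilon keeps secondary rays from re-hitting their origin
            return None
        point = tuple(o + t * d for o, d in zip(origin, direction))
        normal = (0.0, 0.0, 1.0) if direction[2] < 0 else (0.0, 0.0, -1.0)
        return (t, point, normal, color, reflectivity)
    return hit

def nearest_intersection(origin, direction, scene):
    best = None
    for obj in scene:
        h = obj(origin, direction)
        if h is not None and (best is None or h[0] < best[0]):
            best = h
    return None if best is None else best[1:]  # drop t, keep the hit data

def trace(origin, direction, scene, depth=0):
    """Whitted-style recursion: on a hit, spawn a secondary reflection
    ray from the intersection point and blend its color in."""
    if depth >= MAX_DEPTH:
        return (0, 0, 0)
    hit = nearest_intersection(origin, direction, scene)
    if hit is None:
        return (0, 0, 0)  # background
    point, normal, surface_color, reflectivity = hit
    color = surface_color
    if reflectivity > 0:
        # Reflect the incoming direction about the surface normal.
        d_dot_n = sum(d * n for d, n in zip(direction, normal))
        reflected = tuple(d - 2 * d_dot_n * n for d, n in zip(direction, normal))
        bounce = trace(point, reflected, scene, depth + 1)
        color = tuple((1 - reflectivity) * c + reflectivity * b
                      for c, b in zip(color, bounce))
    return color
```

The raycaster version is just this function with the `if reflectivity > 0` block deleted: no secondary rays, no recursion.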

##### Share on other sites

For the basic definition of raytracing, a scene is not required to be built from polygons: any data structure that can be tested for intersection against a ray can be used with raytracing (mathematical surfaces, voxels, fractals and so on). Most of the time you also need to calculate normals (for lighting) and uv coordinates (for texturing).

In short:
1) Shoot a ray through a pixel toward the scene. Does it intersect a surface? If yes, go to 2; otherwise, simply return the background color.

2) An intersection has been found. For each light, shoot a ray starting from the intersection point toward the light, and test it against the scene. Does it intersect an object? If yes, do nothing and move to the next light (the ray won't contribute, as an object is casting a shadow). If nothing is between the point and the light, go to 3.

3) You must basically handle 3 components: diffuse, specular and transmission. Some of the light will be reflected all around the intersection point, some in the specular direction (mirror), and some will be transmitted through the surface (transparency). Each of these effects has its own set of formulas. For both reflection and transmission you must shoot another ray (as you did for the primary one) and add the resulting colors to the diffuse value.
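Step 2, the shadow test, can be sketched like this in Python (the sphere occluders and the function names are illustrative assumptions, not code from the post):

```python
import math

def sphere_blocks(origin, direction, max_t, center, radius):
    """Does this sphere occlude the segment from origin along direction,
    before distance max_t (the light)? Used for shadow rays."""
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * c  # direction is assumed normalized
    if disc < 0:
        return False
    t = (-b - math.sqrt(disc)) / 2
    # Only hits strictly between the surface (epsilon) and the light count.
    return 1e-6 < t < max_t

def light_visible(point, light_pos, occluders):
    """Shadow ray: shoot from the hit point toward the light; any hit
    before the light means the point is in shadow."""
    to_light = tuple(l - p for l, p in zip(light_pos, point))
    dist = math.sqrt(sum(k * k for k in to_light))
    direction = tuple(k / dist for k in to_light)
    return not any(sphere_blocks(point, direction, dist, c, r)
                   for c, r in occluders)
```

The `max_t` bound matters: an object *behind* the light must not cast a shadow, so hits past the light's distance are ignored.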

This is the most basic form of RT, often called "Whitted style", but there is much more.

##### Share on other sites
So, if I understood correctly, a ray-tracer-based engine would make "fake 3D", if I can sum it up like that, right? It would read stored data from, for example, input files, and use rays to determine which color each pixel should be drawn to the screen. So, as you said, no polygonal rendering is needed, as it gives the illusion of 3D with a 2D image drawn by rays.

So if the ray tracer scans a voxel-based file, it would be able to draw in 2D a 3D representation of the file.

Is that how it works, or did I misunderstand something here?

##### Share on other sites
Quote:
 Original post by Nevadaes: So, if I understood correctly, a ray-tracer-based engine would make "fake 3D", if I can sum it up like that, right? It would read stored data from, for example, input files, and use rays to determine which color each pixel should be drawn to the screen. So, as you said, no polygonal rendering is needed, as it gives the illusion of 3D with a 2D image drawn by rays. So if the ray tracer scans a voxel-based file, it would be able to draw in 2D a 3D representation of the file. Is that how it works, or did I misunderstand something here?

Yup.

Though when you think about it, it's "fake 3D" just like everything else displayed on your monitor. Games that use rasterization (the alternative to ray tracing, which projects polygons directly onto the screen) are also "fake 3D".

A ray tracer can render non-polygonal objects, but it can also render polygonal objects by intersecting rays with their triangles. Essentially, anything a rasterizer can draw, a ray tracer can draw too (albeit slower; that's why rasterization is still the norm for games).

##### Share on other sites
I fear you are a bit confused... what do you mean by "no polygonal rendering is needed as it gives the illusion of 3D with a drawn 2D image by rays"? Every 3D renderer draws a 2D image... what a raytracer does is render the 3D data (possibly the same data as your game of choice) using a different algorithm than rasterization.
If we really had to say which one is 'less fake', then I would say that RT is 'better' than rasterization, as it performs a rough simulation of how light travels, while rasterization does not.

And there are yet more algorithms: path tracing, REYES, ray casting, just to name a few. They are used where they can give the most benefit: path tracing (well, not really, as it is too slow, but rather bidirectional path tracing or Metropolis light transport) where you need the highest quality; raytracing where you need a good compromise between quality and speed (or together with GI algorithms, as in photon mapping or radiosity); REYES as used by Pixar in RenderMan; ray casting as in Doom and Doom 2; and so on. Some of them are closer than others to the true behaviour of light...

##### Share on other sites
Ok I see, thanks for answering guys.

I've read that current GPUs cannot help with the calculations in a ray tracer, and that only the CPU can do them. Is that true? And why, if it is?

##### Share on other sites
Quote:
 Original post by Nevadaes: Ok I see, thanks for answering guys. I've read that current GPUs cannot help with the calculations in a ray tracer, and that only the CPU can do them. Is that true? And why, if it is?

GPUs can be used in ray tracers, no doubt. Google GPU-accelerated ray tracers and you'll land on many, many results. It's just that rasterization is implemented in hardware; that's what the GPU is literally built to do. It'd be even better for ray tracing if it were built with ray tracing in mind - for example, hardware-based intersection calculations, stuff like that.

##### Share on other sites
Quote:
Original post by nullsquared
Quote:
 Original post by Nevadaes: I've read that current GPUs cannot help with the calculations in a ray tracer, and that only the CPU can do them. Is that true? And why, if it is?
GPUs can be used in ray tracers, no doubt. Google GPU-accelerated ray tracers and you'll land on many, many results. It's just that rasterization is implemented in hardware; that's what the GPU is literally built to do. It'd be even better for ray tracing if it were built with ray tracing in mind - for example, hardware-based intersection calculations, stuff like that.
The situation is actually somewhat worse than you have portrayed. Ray tracing is inherently a tree-traversal operation, where a single primary ray recursively splits into many rays with each intersection.

To implement this, you require branching logic, and GPUs are not really designed with branching in mind. The last few generations of GPUs do support branching in shaders, but it generally comes at a considerable performance penalty.

By comparison, general purpose CPUs have always been designed with efficient branching in mind, and they are pretty good at it.

##### Share on other sites
Quote:
 Original post by swiftcoder: To implement this, you require branching logic,

No, you don't; you can hard-code specific interactions. For example, every ray spawns one reflection ray and one shadow ray - no branching whatsoever. Obviously, this is very restricted, but it goes to show that ray tracing is indeed very much possible, and GPUs are very good at crunching the math.

Quote:
 and GPUs are not really designed with branching in mind

Of course, that is true. And it's also true that branching comes into major play with ray tracing. But ray tracing is a parallel process - each ray can be processed completely separately from the rest. This means that it's perfectly suited for GPUs.
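To illustrate that independence, here is a toy Python sketch (an assumption for illustration, not a real GPU renderer - threads here only demonstrate that scanlines can be computed in any order, since no pixel reads another pixel's result):

```python
from concurrent.futures import ThreadPoolExecutor

def shade_pixel(x, y, width, height):
    """Stand-in for a full trace: each pixel depends only on its own ray,
    never on a neighbouring pixel."""
    u = (2 * (x + 0.5) / width) - 1
    v = 1 - (2 * (y + 0.5) / height)
    # Toy "shading": bright inside a disc around the image center.
    return (255, 255, 255) if u * u + v * v < 0.5 else (0, 0, 0)

def render_row(args):
    """Render one scanline; rows share no state, so any may run first."""
    y, width, height = args
    return [shade_pixel(x, y, width, height) for x in range(width)]

def render_parallel(width, height, workers=4):
    """Hand each scanline to a worker; map() preserves row order."""
    jobs = [(y, width, height) for y in range(height)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(render_row, jobs))
```

Because the result is identical whatever order the rows finish in, the same structure maps onto thousands of GPU threads, one (or more) per pixel.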

##### Share on other sites
Quote:
 Original post by Nevadaes: Ok I see, thanks for answering guys. I've read that current GPUs cannot help with the calculations in a ray tracer, and that only the CPU can do them. Is that true? And why, if it is?

GPUs not only can help, but are most probably the future of RT:
V-ray
Mental ray
NVidia
Luxrender
Octane

Just to name some of the most important projects. As a matter of fact, GPUs give much more power for RT at the same price. Most probably they would be even faster if they were designed for RT, but as GPGPU is becoming really important in many fields, GPUs are going to handle branching and so on better.

##### Share on other sites
Quote:
 Original post by cignox1
Quote:
 Original post by Nevadaes: Ok I see, thanks for answering guys. I've read that current GPUs cannot help with the calculations in a ray tracer, and that only the CPU can do them. Is that true? And why, if it is?
 GPUs not only can help, but are most probably the future of RT: V-ray, Mental ray, NVidia, Luxrender, Octane. Just to name some of the most important projects. As a matter of fact, GPUs give much more power for RT at the same price. Most probably they would be even faster if they were designed for RT, but as GPGPU is becoming really important in many fields, GPUs are going to handle branching and so on better.

Haven't you been following the discussions on ompf? GPUs give maybe a 50% boost compared to a similar SIMD, multi-core CPU implementation. GPUs are overrated. And no, I don't think branching will be there anytime soon. It kind of kills the whole FPU-parallelism idea of a GPU, doesn't it?

Finally, there is nothing magical about a GPU. It's made out of the same silicon, the same kind of components, the same kind of engineering. What you trade off in not being able to branch, you gain in speed. Unfortunately, a non-toy ray tracer is branched: tree building, traversal, shading. And there are memory concerns: some scenes require 8GB of system memory. How will that fit on a cute lil' GPU?

When you put several graphics cards to work together you do have an advantage, and that is the whole idea with newer GPU implementations. Those are not consumer boxes and are not cheap. You can also just buy a server board and plug in 8 quads, for example, giving you 32 cores. That might be cheaper, at least where development costs are concerned, and server boards are not insanely expensive. Just found this one, a must-see: http://helmer.sfe.se/

[Edited by - spinningcube on February 19, 2010 10:35:27 AM]


##### Share on other sites
I can't elaborate further on the technical details of GPU programming (I still have too much to do with my CPU raytracer), but 50% would not be bad, especially since GPUs were made and optimized for other tasks. In addition, I bet that $1000 of GPUs beats $1000 of CPUs.

As I see it, those RT developers know that they can sell more licenses if they can offer faster raytracers, and they discovered that GPUs have much to give. GPU vendors can sell more products if they can reach that new market, so they will add new features to enable fast RT (in addition, they can claim to have the fastest RT hardware on the market in the benchmarks :-))

I don't see how that trend can be stopped...

##### Share on other sites
Quote:
Original post by nullsquared
Quote:
 Original post by swiftcoder: To implement this, you require branching logic,
No, you don't; you can hard code specific interactions. For example, every ray spawns one reflection ray and one shadow ray - no branching whatsoever.
This is what I was referring to as branching.

If you were programming this on the CPU, you would either use recursion to process the additional 2 rays, or place them in a queue to be processed next.

On the GPU, you can't do either of these things in a simple manner. You can't realistically recurse within a vertex/fragment shader, and you can't implement a queue except through geometry shaders/histopyramids, which lack flexibility and performance.

As I understand it, OpenCL and CUDA improve on this state of affairs, but it is still necessary to involve the CPU for efficient task management.

##### Share on other sites
I think the overall trend is quite clear:
- CPUs these days tend to draw more speed from the usage of multiple cores
- GPUs tend to extend their capabilities towards a more general usage

Now let this scenario play out for a handful of years, and you'll eventually reach a point where CPU == GPU.

Parallel computing seems to be the path that not only graphics has taken, which is why I am more or less certain that we're about to see a consolidation happen well within our lifetime. For this to work out properly, software - especially OSes - will have to endure drastic changes under the hood, but I still see this coming.

Ultimately, I have the vision of computers that are capable of performing actual proper RT in realtime along with the underlying application logic without any specialized additional hardware needed. Only the multicore-XPU they come with.
I consider today to be a transitional state - the whole GPU era, actually, which we are well within today. Well, I hope for all this at least, but considering today's indications, I don't feel too far off :). Stuff like OpenCL or CUDA is just the next larger step - I expect some of their parallelism-oriented features to become standard in general programming languages, too.
