How do you create an image using ray tracing?

For example, you can create a Cornell Box image through pure code; no visual modellers are used (as far as I can see). I understand there is a bit of math behind this: the rendering equation has to be solved using methods like Monte Carlo integration.
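(For reference, the rendering equation in question is commonly written, in Kajiya's form, as:)

```latex
L_o(x, \omega_o) = L_e(x, \omega_o)
  + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\, (\omega_i \cdot n)\, \mathrm{d}\omega_i
```

Here \(L_o\) is outgoing radiance at point \(x\) in direction \(\omega_o\), \(L_e\) is emitted radiance, \(f_r\) is the BRDF, and the integral is over the hemisphere of incoming directions \(\omega_i\); Monte Carlo methods approximate that integral by averaging randomly sampled directions.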

Then you have a model where you fire many rays of light; some of these rays will be reflected, absorbed, or refracted, and you calculate which ray interacts with which object.

What I don't see however is how a picture is created using raw code. Could somebody enlighten me?

Can you imagine how the color of an individual pixel can be computed using "raw code" (whatever you think that is)? If that's the case, you can do that for every pixel, put the colors in an array and then use an image library (like libpng) to write out a file in a standard format.
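A minimal sketch of that idea in Python, assuming nothing beyond the standard library (it writes a plain-text PPM file instead of using libpng, so no image library is needed; the per-pixel computation here is just a placeholder gradient):

```python
# Compute a color for every pixel, store the colors in an array,
# and write the result out as a plain-text PPM image file.

WIDTH, HEIGHT = 64, 64

def color_at(x, y):
    # Placeholder per-pixel computation: a simple horizontal gradient.
    # A ray tracer would do its ray computation here instead.
    v = int(255 * x / (WIDTH - 1))
    return (v, v, v)  # (R, G, B), each 0-255

# One color per pixel, in row-major order.
pixels = [color_at(x, y) for y in range(HEIGHT) for x in range(WIDTH)]

# PPM "P3" is the simplest standard image format: a small text header
# followed by one R G B triple per pixel.
with open("out.ppm", "w") as f:
    f.write(f"P3\n{WIDTH} {HEIGHT}\n255\n")
    for r, g, b in pixels:
        f.write(f"{r} {g} {b}\n")
```

Swapping the PPM writer for a call into libpng (or any image library) changes only the last few lines; the "compute a color for every pixel" part is the same.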

No offense, but maybe you should post in "For Beginners" for now...



What I don't see however is how a picture is created using raw code. Could somebody enlighten me?

 

A) you don't understand how rendering is accomplished (at all) using code

B) you don't understand how a scene/model can be described using code

C) you don't understand how something is drawn to the screen, or an image file

D) none of the above.


I don't really understand your question, but let me try a quick overview step by step.

 

An image is a rectangle filled with pixels. You can assign colors to pixels however you like, e.g. white to pixels whose Y coordinate is greater than some value, black to the others. I guess that's easy to understand.
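That first criterion, as a tiny Python sketch (the image size and threshold row are arbitrary choices):

```python
WIDTH, HEIGHT = 8, 8
THRESHOLD = 3  # arbitrary cutoff row

def pixel_color(x, y):
    # White for pixels whose Y coordinate is greater than the
    # threshold, black for the others.
    return (255, 255, 255) if y > THRESHOLD else (0, 0, 0)

# Rows of pixels: image[y][x] is the color at (x, y).
image = [[pixel_color(x, y) for x in range(WIDTH)] for y in range(HEIGHT)]
```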

 

Now let's change the criteria. Say we have a 2D circle mathematically defined and we assign white to pixels that are inside and black to pixels that are outside (easy to test via the equation of a circle). Easy, right? We have the image of a circle.
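In Python, that inside/outside test is one line (circle center and radius picked arbitrarily):

```python
WIDTH, HEIGHT = 64, 64
CX, CY, R = 32.0, 32.0, 20.0  # circle center and radius (arbitrary)

def pixel_color(x, y):
    # Equation of a circle: inside iff (x-cx)^2 + (y-cy)^2 <= r^2.
    inside = (x - CX) ** 2 + (y - CY) ** 2 <= R * R
    return (255, 255, 255) if inside else (0, 0, 0)

image = [[pixel_color(x, y) for x in range(WIDTH)] for y in range(HEIGHT)]
```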

 

The next step is steeper. Now we assume we have a 3D sphere defined mathematically and we give our pixels 3D coordinates (mathematically, we assume our pixels are in a 3D plane. Why not?). And our criteria now is: we shoot rays from a fixed 3D point and through the pixels, and if they intersect with the sphere we mark the pixel darker the further along the ray it intersects, or pure black if it doesn't intersect the sphere at all. You get something like this: http://www.geekshavefeelings.com/x/wp-content/uploads/2009/12/Diffuse-lighting-sphere.png .
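A sketch of that step in Python. The viewpoint, sphere position, image-plane placement, and the mapping from hit distance to shade are all assumed values, not anything canonical; the intersection test itself is the standard quadratic for a ray against a sphere:

```python
import math

WIDTH, HEIGHT = 64, 64
EYE = (0.0, 0.0, 0.0)        # fixed viewpoint (assumed at the origin)
SPHERE_C = (0.0, 0.0, -3.0)  # sphere center (assumed position)
SPHERE_R = 1.0

def ray_sphere_t(origin, direction, center, radius):
    """Distance t along the ray to the nearest hit, or None on a miss.

    Substituting the ray origin + t*direction into the sphere equation
    gives a quadratic a*t^2 + b*t + c = 0 in t.
    """
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    a = sum(d * d for d in direction)
    b = 2.0 * (direction[0] * ox + direction[1] * oy + direction[2] * oz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0:
        return None  # ray misses the sphere
    t = (-b - math.sqrt(disc)) / (2.0 * a)
    return t if t > 0 else None

def pixel_color(px, py):
    # Give the pixel 3D coordinates on an image plane at z = -1,
    # then shoot a ray from the eye through it.
    x = (px + 0.5) / WIDTH * 2.0 - 1.0
    y = 1.0 - (py + 0.5) / HEIGHT * 2.0
    t = ray_sphere_t(EYE, (x, y, -1.0), SPHERE_C, SPHERE_R)
    if t is None:
        return 0  # miss: pure black
    # Darker the further along the ray the hit is
    # (an arbitrary depth-to-shade mapping).
    shade = max(0.0, 1.0 - (t - 2.0) / 2.0)
    return int(255 * min(1.0, shade))

image = [[pixel_color(x, y) for x in range(WIDTH)] for y in range(HEIGHT)]
```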

 

Now, if instead of just marking the pixel darker the further the ray intersects, you compute how much light reaches that intersection point, you get something like: https://www.cl.cam.ac.uk/teaching/2000/AGraphHCI/AG/img/rtsph.gif.
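One simple way to "compute how much light reaches that point" is Lambertian diffuse shading: brightness is the dot product of the surface normal and the direction toward the light. A sketch, with the light and sphere positions assumed for illustration:

```python
import math

LIGHT = (5.0, 5.0, 0.0)      # point light position (assumed)
SPHERE_C = (0.0, 0.0, -3.0)  # same sphere as before (assumed)

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def diffuse_shade(hit_point):
    """Lambertian diffuse term: max(0, N . L) at the hit point."""
    # For a sphere, the surface normal is the direction from the
    # center to the hit point.
    normal = normalize(tuple(h - c for h, c in zip(hit_point, SPHERE_C)))
    to_light = normalize(tuple(l - h for l, h in zip(LIGHT, hit_point)))
    # Surfaces facing away from the light get zero, not negative, light.
    return max(0.0, sum(n * l for n, l in zip(normal, to_light)))
```

At each intersection found by the ray-sphere test, multiplying the surface color by this term produces the shaded sphere in the second linked image; adding shadow rays and reflections on top of it heads toward full ray tracing.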

 

And of course, by improving these criteria (using different lighting techniques, for example) and using more interesting models than just a 3D sphere, you can generate cool images.

Edited by Javier Meseguer de Paz
