
Why isn't ray tracing required when rendering any object?



How have I been able to render images with OpenGL without using ray tracing? I've rendered cubes, including cubes with lighting, without ever implementing ray tracing. In what cases is ray tracing used, and how would I notice? From my understanding, ray tracing is a rendering method for generating computer-generated images on the image plane: it computes the illumination of an object based on whether or not a light ray has intersected with it.

 

Apparently ray tracing is not the only rendering algorithm out there; there are alternatives. My real question is what is meant by "rendering algorithm". Don't OpenGL and Direct3D let the programmer render objects on screen without needing to use a rendering algorithm at all?


Common graphics APIs use rasterization to get triangles on the screen.  The lighting you've been able to accomplish is called direct lighting, which is pretty easy to implement.  Shadows are a separate problem for rasterization, whereas with ray tracing shadows come for free: a point is in shadow simply because no light ray reaches that part of the object within some number of bounces.  There's also path tracing; I always forget the difference between it and ray tracing, so look it up.  It's been a long time since I thought about ray tracing, so I might not be completely accurate, but if I were you I'd look up rasterization, ray tracing, and path tracing.
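To make the contrast concrete, here's a minimal C++ sketch (all names are illustrative, not from any real API): direct lighting needs only the surface normal and the light direction, while a ray tracer can additionally cast a shadow ray through the scene to ask whether the light is blocked.

```cpp
#include <algorithm>

struct Vec3 { float x, y, z; };

float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Hypothetical scene query a ray tracer would provide: does anything block the
// path from this point toward the light?
bool occluded(const Vec3& point, const Vec3& toLight);

// Direct (local) lighting as a rasterizer's pixel shader computes it: only the
// surface normal and light direction are needed -- no access to the rest of the scene.
float lambert(const Vec3& normal, const Vec3& toLight)
{
    return std::max(0.0f, dot(normal, toLight));  // N.L diffuse term
}

// With ray tracing, the shadow question is answered by casting one more ray.
float directLightingWithShadow(const Vec3& point, const Vec3& normal, const Vec3& toLight)
{
    if (occluded(point, toLight))
        return 0.0f;                  // no light ray reaches this point: in shadow
    return lambert(normal, toLight);  // otherwise the same diffuse term as above
}
```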


When ray tracing, you shoot a ray from the eye through each pixel of the image plane and search for the closest intersection among all triangles in the scene.

You can calculate the normal and texture UV from the hit triangle and shade it.
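A minimal C++ sketch of that loop, assuming a brute-force scene with no acceleration structure (makeCameraRay and the other names are made up for illustration):

```cpp
#include <cmath>
#include <limits>
#include <vector>

struct Vec3 { float x, y, z; };
Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 cross(Vec3 a, Vec3 b) { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

struct Triangle { Vec3 a, b, c; };
struct Ray { Vec3 origin, dir; };

Ray makeCameraRay(int x, int y, int width, int height); // hypothetical: eye through pixel (x, y)

// Moeller-Trumbore ray/triangle intersection: returns hit distance t and
// barycentric coordinates (u, v), which also interpolate normals and UVs.
bool intersect(const Ray& r, const Triangle& tri, float& t, float& u, float& v)
{
    Vec3 e1 = tri.b - tri.a, e2 = tri.c - tri.a;
    Vec3 p = cross(r.dir, e2);
    float det = dot(e1, p);
    if (std::fabs(det) < 1e-8f) return false;        // ray parallel to triangle plane
    float inv = 1.0f / det;
    Vec3 s = r.origin - tri.a;
    u = dot(s, p) * inv;
    if (u < 0.0f || u > 1.0f) return false;
    Vec3 q = cross(s, e1);
    v = dot(r.dir, q) * inv;
    if (v < 0.0f || u + v > 1.0f) return false;
    t = dot(e2, q) * inv;
    return t > 0.0f;                                 // hit in front of the ray origin
}

void trace(const std::vector<Triangle>& scene, int width, int height)
{
    for (int y = 0; y < height; ++y)
    for (int x = 0; x < width; ++x)
    {
        Ray ray = makeCameraRay(x, y, width, height);
        float closest = std::numeric_limits<float>::max();
        const Triangle* hit = nullptr;
        for (const Triangle& tri : scene)            // closest intersection wins
        {
            float t, u, v;
            if (intersect(ray, tri, t, u, v) && t < closest) { closest = t; hit = &tri; }
        }
        // if (hit != nullptr): compute normal and UV from the triangle and shade the pixel
    }
}
```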

 

Rasterization loops over all triangles in the scene, projects each triangle's vertices to the image plane, sorts the triangle's edges top to bottom, and draws scanlines from the left edge to the right edge.

Normal, Z, and texture UV can be interpolated while drawing the scanline pixel by pixel. (The modern way is more likely to use 2x2 pixel quads instead of scanlines.)

To find the closest triangle you can either sort the triangles back to front or use a Z-buffer to overwrite far pixels.
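A compact C++ sketch of the idea, assuming the vertices are already projected to screen space and using the edge-function (barycentric) formulation that modern GPUs favor rather than explicit scanline sorting; perspective-correct interpolation is omitted and all names are illustrative:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Vertex { float x, y, z; };  // already projected to screen space

// Signed area of triangle (a, b, p); its sign says which side of edge ab p lies on.
float edge(const Vertex& a, const Vertex& b, float px, float py)
{
    return (b.x - a.x) * (py - a.y) - (b.y - a.y) * (px - a.x);
}

// Assumes counter-clockwise winding; smaller z means closer to the camera.
void rasterize(const Vertex& v0, const Vertex& v1, const Vertex& v2,
               int width, int height, std::vector<float>& zbuffer)
{
    // Only visit pixels inside the triangle's screen-space bounding box.
    int minX = std::max(0, (int)std::floor(std::min({v0.x, v1.x, v2.x})));
    int maxX = std::min(width - 1, (int)std::ceil(std::max({v0.x, v1.x, v2.x})));
    int minY = std::max(0, (int)std::floor(std::min({v0.y, v1.y, v2.y})));
    int maxY = std::min(height - 1, (int)std::ceil(std::max({v0.y, v1.y, v2.y})));

    float area = edge(v0, v1, v2.x, v2.y);
    if (area <= 0.0f) return;  // degenerate or back-facing

    for (int y = minY; y <= maxY; ++y)
    for (int x = minX; x <= maxX; ++x)
    {
        float w0 = edge(v1, v2, x + 0.5f, y + 0.5f);
        float w1 = edge(v2, v0, x + 0.5f, y + 0.5f);
        float w2 = edge(v0, v1, x + 0.5f, y + 0.5f);
        if (w0 < 0.0f || w1 < 0.0f || w2 < 0.0f) continue;  // pixel outside triangle

        // Barycentric weights interpolate Z (normals and UVs would go the same way).
        float z = (w0 * v0.z + w1 * v1.z + w2 * v2.z) / area;
        if (z < zbuffer[y * width + x])  // Z test: keep only the closest triangle
        {
            zbuffer[y * width + x] = z;
            // shade and write the pixel's color here
        }
    }
}
```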

 

Another, lesser-known method is splatting: here the scene is defined by many points, and each of them is simply transformed to image space and drawn, typically using some LOD hierarchy and, again, a Z-buffer.

Each point has a color and a normal, so there is no need for textures.
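Splatting fits in a few lines of the same kind of C++ sketch; project() is a made-up stand-in for the usual model-view-projection transform, and each splat covers a single pixel here (real systems use larger, LOD-dependent footprints):

```cpp
#include <vector>

struct Point { float x, y, z; float r, g, b; };  // position + color (plus a normal in practice)

// Hypothetical helper: full model-view-projection transform; returns false if off-screen.
bool project(const Point& p, int width, int height, int& sx, int& sy, float& depth);

void splat(const std::vector<Point>& cloud, int width, int height,
           std::vector<float>& zbuffer, std::vector<float>& color)
{
    for (const Point& p : cloud)
    {
        int sx, sy; float depth;
        if (!project(p, width, height, sx, sy, depth)) continue;
        int i = sy * width + sx;
        if (depth < zbuffer[i])  // the same Z test the rasterizer uses
        {
            zbuffer[i] = depth;
            color[3*i + 0] = p.r; color[3*i + 1] = p.g; color[3*i + 2] = p.b;
        }
    }
}
```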

 

 

You can combine all of those techniques if you want.

Although rasterization is the standard way on GPUs, the other methods can be implemented efficiently with compute shaders.

Ray tracing is already used a lot in current games for approximate reflections and accurate shadows.

Edited by JoeJ


So by default OpenGL uses rasterization to render objects. I'm guessing the rasterization algorithm is implemented at a lower level by the GPU manufacturers. So when I write a ray tracer and run it, does the GPU process the ray-tracing algorithm and override the rasterization algorithm?


Rasterization occurs in hardware.  If you want to write a ray tracer that runs on the GPU, you'd have to do it explicitly using compute shaders (a rough sketch of what that means is below).

 

edit - by "occurs in hardware" I mean there is dedicated hardware specific to rasterization.
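For illustration, here is roughly what a compute-shader ray tracer looks like on the host side in modern OpenGL (4.3+). This is a sketch under stated assumptions, not working renderer code: traceScene() is a placeholder for the actual ray tracing, the program is assumed to be compiled and linked with the usual calls, and a GL context and loader are assumed to exist.

```cpp
#include <glad/glad.h>  // or any loader that exposes OpenGL 4.3

// One compute-shader invocation traces one pixel and writes it straight to an
// image -- the rasterizer is never involved. traceScene() is a placeholder.
const char* kTraceCS = R"(
    #version 430
    layout(local_size_x = 8, local_size_y = 8) in;
    layout(rgba8, binding = 0) uniform writeonly image2D outImage;

    vec3 traceScene(ivec2 pixel) { return vec3(0.5); } // real ray tracing goes here

    void main()
    {
        ivec2 p = ivec2(gl_GlobalInvocationID.xy);
        imageStore(outImage, p, vec4(traceScene(p), 1.0));
    }
)";

void dispatchTracer(GLuint program, GLuint outputTexture, int width, int height)
{
    glUseProgram(program);  // program previously built from kTraceCS
    glBindImageTexture(0, outputTexture, 0, GL_FALSE, 0, GL_WRITE_ONLY, GL_RGBA8);
    glDispatchCompute((width + 7) / 8, (height + 7) / 8, 1);  // one 8x8 group per tile
    glMemoryBarrier(GL_SHADER_IMAGE_ACCESS_BARRIER_BIT);      // make image writes visible
}
```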

Edited by Infinisearch


> So by default OpenGL uses rasterization to render objects.


OpenGL by itself _only_ uses rasterization. Primitives and pixel shaders all work in the context of rasterized pixels*. The GPU never stores off your scene information and never does any ray tracing or light bouncing or anything. The same is true for Direct3D (all versions), Vulkan, Metal, and all the consoles' proprietary APIs.

Raytracers that run on the GPU are often written in a language like CUDA and might not even use OpenGL/D3D at all.


* yes, pixel shaders can implement a raytracer. Technically, though, to get a pixel shader to run over a particular pixel, you have to rasterize a triangle over it. :) Also there are compute shaders in OpenGL/D3D that you can use for writing a raytracer, though that is more often only done in the context of "partial" raytracers that assist the lighting calculations of the main rasterization pipeline.
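That footnoted trick is usually done with a single "fullscreen triangle" that overhangs the viewport, so the pixel shader (which would contain the actual ray tracing) runs once for every pixel. A sketch of the classic vertex shader for it, generated from gl_VertexID with no vertex buffer at all (embedded here as a C++ string literal; the fragment shader is where the tracing would live):

```cpp
// Classic "fullscreen triangle": three clip-space vertices that cover the whole
// viewport, so the fragment shader runs for every pixel on screen.
const char* kFullscreenVS = R"(
    #version 330
    void main()
    {
        // gl_VertexID 0,1,2 -> (-1,-1), (3,-1), (-1,3): a triangle overhanging the screen.
        vec2 pos = vec2((gl_VertexID << 1) & 2, gl_VertexID & 2) * 2.0 - 1.0;
        gl_Position = vec4(pos, 0.0, 1.0);
    }
)";
// Draw it with: glDrawArrays(GL_TRIANGLES, 0, 3);
```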
