Raytracing via shaders

8 comments, last by Shai 17 years ago
Sorry if this has already been discussed in detail here; I could not find such a post on it. I'm thinking of creating a real-time ray tracer project and was hoping to get a little input, to make sure I have a sound plan before starting. This would be using SM 3.0. So here are the general ideas I have in mind (a rough shader sketch follows this post):

• Draw a single quad across the entire screen
• Create the eye-projected rays in the vertex shader; these would be interpolated down to the pixel shader
• The rest of the intersections, reflections, refractions and shadow feelers would be done in the pixel shader
• Scene data would be passed in via fp textures (geometry and lights…maybe materials?)
• Culling and other spatial information that can be used for optimizations would be done on the CPU and passed into the shader
• I will probably have to keep to simple shapes; intersecting a mesh of triangles will probably take me out of real-time speeds in a hurry

Is there anything that I am overlooking? Any words of wisdom? Thanks for any feedback!
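Roughly what I have in mind for the first three bullets, as a minimal HLSL (SM 3.0) sketch. The camera vectors, the hard-coded sphere, and all the names here are illustrative assumptions, not a finished design:

    // Camera basis, set by the application (illustrative assumption).
    float3 camPos;
    float3 camRight;   // scaled by aspect * tan(fov/2)
    float3 camUp;      // scaled by tan(fov/2)
    float3 camForward;

    struct VS_OUT
    {
        float4 pos    : POSITION;
        float3 rayDir : TEXCOORD0;  // interpolated down to the pixel shader
    };

    // Fullscreen quad: the corner positions in [-1, 1] double as per-corner
    // ray offsets, so the rasterizer interpolates an eye ray for every pixel.
    VS_OUT RayVS(float4 corner : POSITION)
    {
        VS_OUT o;
        o.pos    = corner;
        o.rayDir = camForward + corner.x * camRight + corner.y * camUp;
        return o;
    }

    // Analytic ray/sphere test: returns nearest hit distance, or -1 on a miss.
    float IntersectSphere(float3 ro, float3 rd, float3 c, float r)
    {
        float3 oc   = ro - c;
        float  b    = dot(oc, rd);
        float  disc = b * b - (dot(oc, oc) - r * r);
        return (disc < 0.0) ? -1.0 : -b - sqrt(disc);
    }

    float4 RayPS(float3 rayDir : TEXCOORD0) : COLOR
    {
        float3 rd     = normalize(rayDir);  // re-normalize after interpolation
        float3 center = float3(0.0, 0.0, 5.0);
        float  t      = IntersectSphere(camPos, rd, center, 1.0);
        if (t < 0.0)
            return float4(0.0, 0.0, 0.0, 1.0);  // background
        float3 n       = normalize(camPos + t * rd - center);
        float  diffuse = saturate(dot(n, normalize(float3(1.0, 1.0, -1.0))));
        return float4(diffuse.xxx, 1.0);        // simple Lambert shade
    }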
I've thought a lot about trying this type of idea out for the hell of it. I've done CPU-based ray tracing and shader-based relief or parallax occlusion mapping, and they're pretty easy to implement. Bumping up to a fully shader-based ray tracer sounds fairly straightforward. Of course, as you said, your geometry winds up being limited, and to get anything complex you'll have to make multiple passes.

I'd be interested in seeing how you make out with this.
Wow, nice idea. It's a pity about the limited complexity of the scene. I'd like to see how you get on.
Quote: Original post by skow
So here are the general ideas I have in mind.
• Draw a single quad across the entire screen

Correct, the fragment shader will be executed for every pixel.
Quote:
• Create the eye-projected rays in the vertex shader; these would be interpolated down to the pixel shader
• The rest of the intersections, reflections, refractions and shadow feelers would be done in the pixel shader

This will create some distortions. A better approach would be to store the rays into a texture, then calculate the intersection of each ray with the next object and generate a new ray (into an output texture) for the reflection/refraction. You may want to read Eric Veach's PhD thesis on robust Monte Carlo ray tracing.
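A rough sketch of one such bounce pass (the texture names, the MRT layout, and the single-sphere IntersectScene stub are illustrative assumptions; a real version would loop over the scene data stored in fp textures):

    sampler rayOriginTex : register(s0);  // rgb = ray origin, from previous pass
    sampler rayDirTex    : register(s1);  // rgb = ray direction

    // Stand-in: a single hard-coded sphere at (0, 0, 5) with radius 1.
    bool IntersectScene(float3 ro, float3 rd, out float3 hitPos, out float3 hitNormal)
    {
        float3 c    = float3(0.0, 0.0, 5.0);
        float3 oc   = ro - c;
        float  b    = dot(oc, rd);
        float  disc = b * b - (dot(oc, oc) - 1.0);
        hitPos = hitNormal = 0.0;
        if (disc < 0.0) return false;
        float t = -b - sqrt(disc);
        if (t <= 0.0) return false;
        hitPos    = ro + t * rd;
        hitNormal = normalize(hitPos - c);
        return true;
    }

    struct PS_OUT
    {
        float4 newOrigin : COLOR0;  // written to two render targets (MRT)
        float4 newDir    : COLOR1;
    };

    PS_OUT BouncePS(float2 uv : TEXCOORD0)
    {
        float3 ro = tex2D(rayOriginTex, uv).xyz;
        float3 rd = tex2D(rayDirTex, uv).xyz;

        float3 hitPos, hitNormal;
        bool hit = IntersectScene(ro, rd, hitPos, hitNormal);

        PS_OUT o;
        // w = 1 flags a ray that is still alive for the next pass.
        o.newOrigin = float4(hitPos, hit ? 1.0 : 0.0);
        o.newDir    = float4(reflect(rd, hitNormal), 0.0);
        return o;
    }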
Quote:
• Scene data would be passed in via fp textures (geometry and lights…maybe materials?)

You can define procedural materials, then generate code for the fragment shader to calculate the material properties at a specific point in 3D.
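For example, a tiny procedural checker evaluated directly at the hit point instead of being fetched from a texture (purely illustrative):

    // Procedural checker: alternates two albedos on a unit 3D grid.
    float3 CheckerMaterial(float3 p)
    {
        float k   = floor(p.x) + floor(p.y) + floor(p.z);
        float odd = abs(fmod(k, 2.0));
        return lerp(float3(0.9, 0.9, 0.9), float3(0.15, 0.15, 0.15), odd);
    }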
Quote:
• Culling and other spatial information that can be used for optimizations would be done on the CPU and passed into the shader.
• I will probably have to keep to simple shapes, intersecting a mesh of triangles will probably take me out of real-time speeds in a hurry

That's what acceleration structures like octrees are for.
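For example, a uniform grid (simpler than an octree, but the same idea) can be walked with a 3D DDA. A rough sketch, where CellOccupied() stands in for an fp-texture lookup and the 32-cell grid is an assumption:

    #define GRID_SIZE 32

    // Stand-in: a real version would fetch a cell's object list
    // from an fp texture here and intersect its contents.
    bool CellOccupied(int3 cell)
    {
        return false;
    }

    // Amanatides & Woo 3D DDA through a uniform grid. Assumes ro is already
    // inside the grid, in cell units, and that rd has no zero components.
    bool TraverseGrid(float3 ro, float3 rd)
    {
        int3   cell   = int3(floor(ro));
        int3   stp    = int3(sign(rd));
        float3 tDelta = abs(1.0 / rd);  // t to cross one cell on each axis
        float3 tMax   = (floor(ro) + max(sign(rd), 0.0) - ro) / rd;

        for (int i = 0; i < 3 * GRID_SIZE; ++i)  // enough steps to exit the grid
        {
            if (CellOccupied(cell))
                return true;  // intersect the cell's contents here
            if (tMax.x < tMax.y && tMax.x < tMax.z) { cell.x += stp.x; tMax.x += tDelta.x; }
            else if (tMax.y < tMax.z)               { cell.y += stp.y; tMax.y += tDelta.y; }
            else                                    { cell.z += stp.z; tMax.z += tDelta.z; }
            if (any(cell < 0) || any(cell >= GRID_SIZE))
                return false;  // the ray left the grid
        }
        return false;
    }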
The state of the art (that I know of) in ray tracing on GPUs can be found at: http://graphics.stanford.edu/papers/i3dkdtree/ .

I would recommend reading that paper and its references; you really can ray trace mildly complex scenes in real time with a high-end GPU.
I never quite understood how one could do reflections in a shader-based raytracer. Imagine that you, the viewer, are located at (0, 0, 0). There's a reflecting metal sphere in front of you at (0, 0, 5). Behind you there's a diffuse sphere at (0, 0, -1).

Now how can you let the shader calculate the reflection of the diffuse sphere in the metal sphere?
"It's better to regret something you've done than to regret something you haven't done."
Quote: Original post by Shai
Now how can you let the shader calculate the reflection of the diffuse sphere in the metal sphere?


I don't see how this would change with shaders. There is no real "you"; it's just a point in space that has no geometry. The ray would bounce and hit the diffuse sphere behind, just as it would if the sphere were to the right of the viewer. I don't see what problem you are pointing out.
As I see it, the diffuse sphere is outside the viewing frustum and hasn't been rasterized... so I'm wondering how you're gonna detect a ray hitting it.
"It's better to regret something you've done than to regret something you haven't done."
The view frustum only guides the initial rays.

If one of those rays bounces and hits an object outside of the view frustum, you still calculate the hit exactly as if the ray had not bounced. The only thing different is the direction of the ray when it hits; the method for calculating the pixel colour is the same.
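Roughly, in shader terms (reusing the IntersectScene stub sketched earlier; the Shade helper and epsilon offset are also just illustrative). Notice that no frustum test appears anywhere, so an off-screen sphere is hit exactly like an on-screen one:

    // Stand-in diffuse shade with a fixed light direction.
    float3 Shade(float3 p, float3 n)
    {
        return saturate(dot(n, normalize(float3(1.0, 1.0, -1.0)))).xxx;
    }

    // One reflection bounce. IntersectScene tests *every* object in the
    // scene data; visibility from the camera is irrelevant.
    float3 TraceOneBounce(float3 ro, float3 rd)
    {
        float3 background = float3(0.0, 0.0, 0.0);

        float3 hitPos, hitNormal;
        if (!IntersectScene(ro, rd, hitPos, hitNormal))
            return background;

        // The reflected ray may point behind the camera or outside the
        // frustum entirely; it is intersected against the same scene anyway.
        float3 r = reflect(rd, hitNormal);
        float3 hitPos2, hitNormal2;
        if (IntersectScene(hitPos + hitNormal * 1e-3, r, hitPos2, hitNormal2))
            return Shade(hitPos2, hitNormal2);  // e.g. the diffuse sphere
        return background;
    }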
I got it now :) I thought it over when I was on the train.
"It's better to regret something you've done than to regret something you haven't done."

