Shader system for a raytracer

I've been thinking about replacing the way materials and textures are handled in my (near real-time) raytracer. Currently they are defined in material statements in the scene file (a la POV-Ray), but I would like to change to something more like a programmable shader system, where materials are defined separately from the geometry.

I've been looking into the RenderMan spec, but it seems quite complicated and not really well suited for real-time raytracing. The other option would be to do something like Q3 shaders, but those aren't very 'programmable', are they? To me they're just material statements stored in a separate file. So my plan is to create my own shading language and incorporate that (yes, I DO code for fun, not primarily to use the end result :-)).

This is where I'd like some good ideas. My problem is that I can't quite decide which parts of the raytracing algorithm should be kept in the engine, and which parts should be performed by the shader. The idea so far is to perform the ray vs. object intersection test, load all the useful data (such as eye-ray origin and direction) into a struct, and send it to the shader (rough sketch at the end of this post). But what should the shader return then? The color at the given point? That would mean the shader has to be able to perform shadow tests, recursive raytracing and so on, which seems like quite a burden to put on shader writers unless I also export some utility functions for the shaders to use.

As you might conclude from this post, I'm quite confused about how to do this, and whether I should do it at all. Any good ideas, references, links etc. would be very welcome. Thanks.
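PS: roughly the kind of interface I'm picturing (a rough C++ sketch; HitInfo and shade are just names made up for illustration):

struct Vec3  { float x, y, z; };
struct Color { float r, g, b; };

// Everything the engine hands to the shader after the ray vs. object
// intersection test.
struct HitInfo {
    Vec3 rayOrigin;  // eye-ray origin
    Vec3 rayDir;     // eye-ray direction
    Vec3 point;      // intersection point
    Vec3 normal;     // surface normal at the intersection
    int  depth;      // recursion depth, to bound recursive tracing
};

// The open question: how much shadow testing and recursive tracing
// should happen inside shade(), and how much stays in the engine?
class Shader {
public:
    virtual ~Shader() {}
    virtual Color shade(const HitInfo& hit) const = 0;
};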

Guest Anonymous Poster
You can take a look at the mental ray specs. Check the book references at http://www.mental.com/2_1_3_literature/index.html

Guest Anonymous Poster
I have come across the same problem with my raytracer.
I use the following approach.

The shader base class exports a BRDF function:

Color BRDF(Vec3D k1, Vec3D k2, Vec3D normal)

This function returns the amount of light reflected from k2 towards k1, given the surface normal. k1 and k2 are typically the eye and light vectors. The shader class has a bunch of other functions deciding the amount of reflection, refraction and so on.

Then I use another class to do the actual raytracing, shadow rays and so on.

The nice thing here is that I can come up with different integration functions (the actual trace function) to perform different kinds of raytracing, i.e. Whitted, path tracing and so on, and I can use the same shader with all of them. So my shaders are totally separated from the actual tracing.
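Concretely, that split might look something like this (a minimal C++ sketch; the Diffuse example and all names here are illustrative, not the actual code):

#include <algorithm>

struct Vec3D { float x, y, z; };
struct Color { float r, g, b; };

float dot(const Vec3D& a, const Vec3D& b)
{ return a.x * b.x + a.y * b.y + a.z * b.z; }

// Shader base class: knows only about local reflectance, nothing
// about how rays are generated or traced.
class Shader {
public:
    virtual ~Shader() {}
    // Amount of light reflected from direction k2 towards k1.
    virtual Color BRDF(Vec3D k1, Vec3D k2, Vec3D normal) const = 0;
};

// Example material: simple diffuse reflectance (the cosine term is
// folded into the BRDF here for brevity).
class Diffuse : public Shader {
public:
    explicit Diffuse(Color albedo) : albedo(albedo) {}
    Color BRDF(Vec3D k1, Vec3D k2, Vec3D normal) const override {
        float c = std::max(0.0f, dot(normal, k2));
        return Color{ albedo.r * c, albedo.g * c, albedo.b * c };
    }
private:
    Color albedo;
};

// The integrators (Whitted, path tracer, ...) live in a separate
// class and only ever call Shader::BRDF, so any shader works with
// any of them.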

Cheers,

I think a variation on the second AP's method would suit you best: instead of defining shaders, define BRDFs, and just pass in the incident vector, surface normal, intersection point in world space, intersection point in object space, and surface-to-light vector. That gives you everything you need for texture mapping (since you have coordinates) and regular lighting. For best results, have a callback function in your raytracer that lets shaders trace a specific ray, so you can deal with reflection/refraction in your shader logic rather elegantly. Let shadows be done entirely by the raytracing engine, though. Here's what I'm thinking:


Trace ray
Get intersection point
Determine which light sources are visible (shadow rays)
For each visible light source:
    Accumulate BRDF(incident, normal, worldIsectPt, objIsectPt, vector_to_light) * dropoff factor of light
Divide accumulated sum by number of visible lights
Return color
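In C++, that loop might look roughly like this (a hedged sketch; BRDFShader, Light and the stubbed helpers are assumed names, not an existing API):

#include <cmath>
#include <vector>

struct Vec3  { float x, y, z; };
struct Color { float r, g, b; };
struct Light { Vec3 position; float power; };

// BRDF-style shader interface taking the parameters listed above.
struct BRDFShader {
    virtual ~BRDFShader() {}
    virtual Color BRDF(const Vec3& incident, const Vec3& normal,
                       const Vec3& worldPt, const Vec3& objPt,
                       const Vec3& toLight) const = 0;
};

// Engine helpers, stubbed for illustration: a real engine would cast
// a shadow ray in isVisible() and use a proper falloff in dropoff().
bool  isVisible(const Vec3&, const Light&) { return true; }
float dropoff(const Vec3&, const Light& l) { return l.power; }

Vec3 dirToLight(const Vec3& from, const Light& l)
{
    Vec3 d = { l.position.x - from.x, l.position.y - from.y, l.position.z - from.z };
    float len = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
    return Vec3{ d.x / len, d.y / len, d.z / len };
}

Color shadePoint(const BRDFShader& shader, const Vec3& incident,
                 const Vec3& normal, const Vec3& worldPt, const Vec3& objPt,
                 const std::vector<Light>& lights)
{
    Color sum = { 0, 0, 0 };
    int visible = 0;
    for (const Light& l : lights) {
        if (!isVisible(worldPt, l))
            continue;                          // in shadow: skip this light
        ++visible;
        Color c = shader.BRDF(incident, normal, worldPt, objPt,
                              dirToLight(worldPt, l));
        float k = dropoff(worldPt, l);         // light falloff factor
        sum.r += c.r * k; sum.g += c.g * k; sum.b += c.b * k;
    }
    if (visible > 0) {                         // average over visible lights
        sum.r /= visible; sum.g /= visible; sum.b /= visible;
    }
    return sum;
}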


Your BRDF scripts (shaders) can then trace recursive rays cleanly. The dropoff factor of a light source will vary; for a point light it will be (power of light source) / (distance to light source)^2, for example. For a spotlight/cone light you can easily determine whether your point is inside or outside the lighting cone of the light source, and set the dropoff accordingly.
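Those two cases might look something like this (a sketch only; the types and the cosine-based cone test are illustrative assumptions):

#include <cmath>

struct Vec3 { float x, y, z; };

float dot(const Vec3& a, const Vec3& b)
{ return a.x * b.x + a.y * b.y + a.z * b.z; }

// Point light: intensity falls off with the square of the distance.
float pointDropoff(float power, const Vec3& lightPos, const Vec3& point)
{
    Vec3 d = { point.x - lightPos.x, point.y - lightPos.y, point.z - lightPos.z };
    return power / dot(d, d);   // power / distance^2
}

// Spotlight/cone light: zero outside the cone, inverse-square inside.
// axis must be normalized; cosCutoff is the cosine of the cone half-angle.
float spotDropoff(float power, const Vec3& lightPos, const Vec3& axis,
                  float cosCutoff, const Vec3& point)
{
    Vec3 d = { point.x - lightPos.x, point.y - lightPos.y, point.z - lightPos.z };
    float dist2 = dot(d, d);
    float dist = std::sqrt(dist2);
    Vec3 dir = { d.x / dist, d.y / dist, d.z / dist };
    if (dot(dir, axis) < cosCutoff)
        return 0.0f;            // point lies outside the lighting cone
    return power / dist2;
}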


For better speed (and support for soft shadows) you might rework the algorithm like so:

Trace ray
Get intersection point
For each light source:
    Light power coefficient = shadow density * dropoff
    If light power coefficient > minimum contribution threshold:
        Accumulate BRDF(incident, normal, worldIsectPt, objIsectPt, vector_to_light) * light power coefficient
Divide accumulated sum by number of contributing lights
Return color
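A sketch of that variant, reusing the types and helpers from the first sketch; shadowDensity() is an assumed query that returns 0 for a fully occluded point, 1 for a fully lit one, and fractional values for soft/partial shadows:

// Assumed soft-shadow query, stubbed for illustration: a real engine
// would cast one or more shadow rays here (0 = occluded, 1 = lit).
float shadowDensity(const Vec3&, const Light&) { return 1.0f; }

Color shadePointThresholded(const BRDFShader& shader, const Vec3& incident,
                            const Vec3& normal, const Vec3& worldPt,
                            const Vec3& objPt,
                            const std::vector<Light>& lights,
                            float minContribution)
{
    Color sum = { 0, 0, 0 };
    int contributing = 0;
    for (const Light& l : lights) {
        // Combined coefficient: soft-shadow density times falloff.
        float k = shadowDensity(worldPt, l) * dropoff(worldPt, l);
        if (k <= minContribution)
            continue;                   // negligible light: skip the BRDF
        ++contributing;
        Color c = shader.BRDF(incident, normal, worldPt, objPt,
                              dirToLight(worldPt, l));
        sum.r += c.r * k; sum.g += c.g * k; sum.b += c.b * k;
    }
    if (contributing > 0) {
        sum.r /= contributing; sum.g /= contributing; sum.b /= contributing;
    }
    return sum;
}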



I'd experiment with both depending on your feature and speed requirements to see what works best.

Good luck!
