Global Illum/Raytracing Intersections


Right. So I'm writing a 3D GI renderer for my term project, and I'm at the point where I'm wondering what the most practical and efficient approach is. I have a base class for all my 3D objects called CSceneObject; all "renderable" objects (polygons, spheres, cylinders, etc.) inherit from this. What I'm wondering now is the most practical way to handle ray intersections. The way I typically did things was to have a RayIntersect() method of the form:

bool CSceneObject::RayIntersect(const CRay& Ray, CVector3& Point, float& Distance);

This takes the ray as input, returns a boolean indicating whether an intersection occurred, and also returns (by reference) the intersection point and its distance from the ray's origin. This is essentially the intersection information. What I have done when rendering is use a separate function to give me the "point attributes": a method that returns the normal, texture coordinates and other surface properties for a given surface point.

But I'm about to implement a shader system, and I wonder what makes more sense: some sort of intersection result struct where I compute everything during the intersection test, possibly reusing some data (for NURBS, maybe?), or keeping things separate. It makes some sense to keep them separate, since if you only want to test whether a path is clear (rather than actually tracing a ray to get visual information), you don't really care about surface properties.

I'm also not sure how to handle shaders. I'm thinking of (initially) assigning one shader per object, and passing the shader the basic surface information to compute the shaded color leaving the surface (what's the physical term? radiance?). I'd pass the shader the incoming ray direction, local position, local normal, local U, V coordinates, the assigned light-emissive value of the surface, etc. (I need some way of distinguishing lights, eh?)
I was also thinking of having a method to generate surface points from U, V coords going from 0 to 1. The main intent is to be able to generate random points for direct illumination (sample light sources) while partitioning the sample space.
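For what it's worth, here's a minimal sketch of that kind of U, V to surface-point mapping, for a unit sphere at the origin. `SpherePointFromUV` and `Vec3` are illustrative names, not part of any actual renderer:

```cpp
#include <algorithm>
#include <cmath>

// Hypothetical sketch (not from the original renderer): mapping
// (u, v) in [0,1]^2 to a point on a unit sphere centred at the origin.
struct Vec3 { float x, y, z; };

Vec3 SpherePointFromUV(float u, float v)
{
    const float pi = 3.14159265358979f;
    // phi is the longitude; cos(theta) = 1 - 2v makes the mapping
    // uniform over the sphere's surface area.
    float phi = 2.0f * pi * u;
    float cosTheta = 1.0f - 2.0f * v;
    float sinTheta = std::sqrt(std::max(0.0f, 1.0f - cosTheta * cosTheta));
    return { sinTheta * std::cos(phi), sinTheta * std::sin(phi), cosTheta };
}
```

The cos(theta) = 1 - 2v choice keeps the mapping uniform in area, which matters when the samples are used for light-source sampling: partitioning (u, v) space then partitions the light's surface evenly.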

You basically just need two intersection functions (perhaps overloaded). One that just takes a ray and tells you whether or not it hit the primitive (for shadows), and one that returns by reference a Hit structure.

In my renderer the hit structure just contains the hit position, triangle id and barycentric coordinates. Normals etc. are interpolated by the shading system just before handing off to the shader.
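As a rough sketch, a hit structure along those lines might look like the following. The field names are illustrative, not taken from the renderer being described:

```cpp
// Illustrative field names, not taken from the renderer described above.
struct Vec3 { float x, y, z; };

struct Hit {
    Vec3  position;      // world-space hit point
    int   triangleId;    // which triangle was hit
    float baryU, baryV;  // barycentric coordinates (the third weight is 1 - u - v)
};

// The shading system can interpolate per-vertex normals later from
// the stored barycentrics:
Vec3 InterpolateNormal(const Vec3& n0, const Vec3& n1, const Vec3& n2,
                       float u, float v)
{
    float w = 1.0f - u - v;
    return { w * n0.x + u * n1.x + v * n2.x,
             w * n0.y + u * n1.y + v * n2.y,
             w * n0.z + u * n1.z + v * n2.z };
}
```

With u = 1, v = 0 the interpolation returns n1 exactly, as you'd expect from the barycentric weights.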

The shader itself is passed a State object that contains information pertinent to shading: the Hit information together with local surface info (normal, tangent etc) and ray info. It also contains the list of lights for illumination.

How you pass pertinent parameters to the shader (colour, etc.) is up to you, but a flexible system wouldn't rely on having a "one size fits all" shader attached to everything.

I would make two functions as well: not only does the 'no hit info' version avoid creating the hit-point state, but you can also optimize the algorithm for each of the two cases (without an if statement that would slow things down).
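To illustrate, here's what the specialised "did I hit anything?" test might look like for a sphere, assuming a normalised ray direction (all names are illustrative). It can accept the first root that falls in range instead of doing the nearest-hit bookkeeping, and the maxT parameter bounds the test to the segment being checked (e.g. the distance to a light):

```cpp
#include <cmath>

// Sketch of the specialised "did I hit anything?" test for a sphere
// (assumes a normalised ray direction; all names are illustrative).
struct Vec3 { float x, y, z; };

static float Dot(const Vec3& a, const Vec3& b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// No hit record and no nearest-root bookkeeping: the first root that
// falls inside (0, maxT) is enough to answer the occlusion question.
bool SphereOccluded(const Vec3& origin, const Vec3& dir,
                    const Vec3& center, float radius, float maxT)
{
    Vec3 oc = { origin.x - center.x, origin.y - center.y, origin.z - center.z };
    float b = Dot(oc, dir);
    float c = Dot(oc, oc) - radius * radius;
    float disc = b * b - c;
    if (disc < 0.0f) return false;          // ray misses the sphere entirely
    float s = std::sqrt(disc);
    float t = -b - s;                       // nearer root
    if (t > 0.0f && t < maxT) return true;
    t = -b + s;                             // farther root may still be in range
    return t > 0.0f && t < maxT;
}
```

The full-hit version would instead have to pick the smaller positive root and fill in the hit record, which is strictly more work.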

Fair enough, but I don't feel that the normal can be interpolated that easily outside of the object. It works for polygons, but not so well with spheres, etc...

struct SRayIntersection
{
CVector3 Point;
CVector3 Normal;
CVector2 Coordinates;
float Distance;
// Tangents?
};

bool CSceneObject::RayIntersect(const CRay& Ray, SRayIntersection& Intersection) const
bool CSceneObject::RayIntersect(const CRay& Ray) const

How exactly do you compute the tangent, by the way? And are there other English words for intersection (just as precise but shorter)?

And hrm, even if no "hit information" is computed, wouldn't the intersection distance still be required for things like checking that the path between a light and an object is clear? Otherwise you don't really know whether the intersection occurs between the light and the object or simply behind the light.

I would be inclined to create a single structure that both supplies input to the trace query and stores the output. Something along the lines of:

struct sTraceQuery
{
    /* Input */
    CRay queryRay;
    int queryGeneration;
    // More input if you need it

    /* Output */
    float resultDistance;
    CVector3 resultTBN[3];
    CVector2 resultUV;
    sMaterialHandle resultMaterial;
    // More output if you need it
};

This provides a clean, convenient way to store the entire query, rather than storing the input and output separately and in pieces. Then you might have functions like this:

// Perform a full trace (return surface information)
void CSceneObject::RayTraceFull(sTraceQuery& Query) const
// Perform only an occlusion trace (just return distance)
void CSceneObject::RayTraceOcclusion(sTraceQuery& Query) const

The output resultDistance is a value between 0 and 1 (inclusive) that specifies the fraction of the ray that was traced. The intersection point can therefore be calculated if needed, and the ray is occluded if the fractional distance is less than 1. queryGeneration will help when you implement reflective surfaces: by incrementing the generation every time you reflect a ray, you can limit the recursion depth.
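A sketch of how those two fields might be used, with sTraceQuery trimmed down to the relevant members (the helper names are illustrative):

```cpp
// sTraceQuery trimmed to the fields used here; helper names are illustrative.
struct CVector3 { float x, y, z; };
struct CRay { CVector3 origin; CVector3 span; };  // span = end point - origin

struct sTraceQuery {
    CRay  queryRay;
    int   queryGeneration;
    float resultDistance;  // fraction in [0, 1] of the ray that was traced
};

// Reconstruct the hit point from the fractional distance when needed.
CVector3 HitPoint(const sTraceQuery& q)
{
    float t = q.resultDistance;
    return { q.queryRay.origin.x + t * q.queryRay.span.x,
             q.queryRay.origin.y + t * q.queryRay.span.y,
             q.queryRay.origin.z + t * q.queryRay.span.z };
}

// The segment is occluded if something was hit before the far end.
bool Occluded(const sTraceQuery& q) { return q.resultDistance < 1.0f; }

// Cap recursion depth by bumping the generation on every reflected ray.
const int kMaxGenerations = 8;
bool MayReflect(const sTraceQuery& q) { return q.queryGeneration < kMaxGenerations; }
```

This assumes the ray is stored as origin plus a full-length span, so resultDistance can stay a fraction; kMaxGenerations is an arbitrary example cap.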

Quote:
Original post by Max_Payne
I'm also not sure how to handle shaders. I'm thinking of (initially) assigning one shader per object, and passing the shader the basic surface information to compute the shaded color getting out of the surface (what's the physical name? radiance?). I guess something like, passing the shader the incoming ray direction, local position, local normal, local U, V coords, the assigned light-emissive value of the surface, etc... (need some kind of way of distinguishing lights eh?).

You might consider implementing a simple materials system on a per-surface basis (my example structure alluded to such a system being in place). A "material" would not only include the visual texture information, but also other relevant surface properties like transparency and reflectiveness, as well as the shader that should be used to render it. So instead of having the ray tracer deal with objects (which it has no knowledge of), have it deal with surfaces, since that's the information it gets from a trace. Then you will have everything you need to render (in addition to the regular shader input).
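A minimal sketch of such a per-surface material record, with the shader attached to the material rather than to the object (all names here are illustrative, not an actual API):

```cpp
// All names illustrative: a per-surface material record with the shader
// attached to the material rather than to the object.
struct Colour { float r, g, b; };
struct ShaderInput { };  // would hold normal, UV, incoming ray, lights...

struct sMaterial;
typedef Colour (*ShaderFn)(const sMaterial&, const ShaderInput&);

struct sMaterial {
    Colour   baseColour;     // visual texture information (flattened here)
    float    transparency;   // 0 = opaque, 1 = fully transparent
    float    reflectiveness; // 0 = diffuse, 1 = perfect mirror
    ShaderFn shade;          // the shader used to render this surface
};

// A trivial shader: ignore the state and return the base colour.
Colour FlatShade(const sMaterial& m, const ShaderInput&) { return m.baseColour; }
```

The tracer then only ever hands back an sMaterial (or a handle to one) from a hit, and the renderer calls whatever shader that material carries.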

Well Zipster, this isn't a very elegant solution... You're using in/out parameters like a C programmer would.

As for the two functions, I'm beginning to have doubts. If I want to sample a light source for direct illumination by generating random surface points from uniform U, V coordinates, I'm going to need some kind of function to get information about that point anyway... So to some extent, I might as well have a method that generates all the information about a point separately...

Just because there's an in/out parameter doesn't mean the solution isn't elegant. If you want, you can split it into separate input/output structures, but the point is that you have a single object that contains a ray query. It's a collection of related information that should be kept together, even if some of that information happens to be input to the function that generates the rest. The ray itself provides crucial context for the surface properties, as well as for the distance. It's also a good, concise structure to pass to the shader, since it contains all the information you need for rendering. Otherwise you'd have to pass the ray as one input and the surface properties as another, or make the shader responsible for re-querying the surface properties, but that's a bit silly if you've already done it once.

So I think it's a really good idea to keep everything in a single structure even if a function uses some data to fill in the rest, but you can do whatever makes you feel most comfortable. I'm not a huge fan of the in/out parameter either but the other benefits outweigh any idiomatic faults.

Quote:
Original post by Max_Payne
As for the two functions, I'm beginning to have doubts. If I want to sample a light source for direct illumination by generating random surface points from uniform U, V coordinates, I'm actually going to need some kind of function to get some information about that point anyway... So to some extent, I might as well have a method to generate all the information about a point separately...

I don't quite see what it is you're trying to achieve here.

To perform lighting, you generate samples on the light source (assuming you're talking about area lights). At that point you already know all the information you need to evaluate the light's power function for each sample and hence calculate the radiance arriving at the surface point. Before you do that, however, you want to make sure that the point you've sampled on the light source is actually visible from your surface point, so you just trace a ray between the two; if you hit anything at all in between (assuming simple shadowing), you don't need to do any more work for that sample.
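That loop might be sketched like this, with the scene's occlusion test abstracted behind a function pointer so the sketch stays self-contained. All names are illustrative, and the light's power function is reduced to plain inverse-square falloff:

```cpp
#include <cmath>

// All names illustrative; the light's power function is reduced to
// plain inverse-square falloff, and visibility is abstracted behind a
// function pointer so the sketch stays self-contained.
struct Vec3 { float x, y, z; };
typedef bool (*VisibilityFn)(const Vec3& from, const Vec3& to);

float DirectLight(const Vec3& p, const Vec3* samples, int n,
                  float power, VisibilityFn blocked)
{
    float sum = 0.0f;
    for (int i = 0; i < n; ++i) {
        // Visibility first: a shadowed sample costs no shading work.
        if (blocked(p, samples[i]))
            continue;
        float dx = samples[i].x - p.x;
        float dy = samples[i].y - p.y;
        float dz = samples[i].z - p.z;
        sum += power / (dx * dx + dy * dy + dz * dz);  // inverse-square falloff
    }
    return sum / n;  // Monte Carlo average over the light samples
}

// Example visibility stubs:
bool NeverBlocked(const Vec3&, const Vec3&)  { return false; }
bool AlwaysBlocked(const Vec3&, const Vec3&) { return true; }
```

In a real renderer the visibility function would be the cheap "do I hit anything?" trace from the scene, and the per-sample term would also include the BRDF and the cosine factors.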

As cignox1 said, the simple "do i hit anything" test can be far more efficient than one that generates hit point information. You can drop out as soon as you know that you've hit something. For full hit point info, you need to calculate the closest point.

How exactly you structure the generation of normals etc. for the different surfaces you want to support is up to you. Tangents are easy for parametric surfaces. For poly meshes you can generate them arbitrarily from face edges, but it's better to have the user provide them.
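For a parametric surface the tangent is just the partial derivative of p(u, v) with respect to u. For a sphere parameterised by longitude phi = 2*pi*u, that works out to (-sin(phi), cos(phi), 0) after normalisation. A sketch, with illustrative names:

```cpp
#include <cmath>

// Names illustrative: the tangent of a parametric surface is the
// partial derivative of p(u, v) with respect to u. For a sphere with
// longitude phi = 2*pi*u this normalises to (-sin(phi), cos(phi), 0).
struct Vec3 { float x, y, z; };

Vec3 SphereTangentU(float u)
{
    const float pi = 3.14159265358979f;
    float phi = 2.0f * pi * u;
    return { -std::sin(phi), std::cos(phi), 0.0f };
}
```

The bitangent then comes from the cross product of the normal and this tangent, which keeps the three vectors orthogonal.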