Radiosity on curved surfaces?

Started by Gammastrahler
11 comments, last by Yann L 15 years, 8 months ago
Gammastrahler
Author
150
December 24, 2005 09:28 AM
My current radiosity implementation is very basic: it just generates lightmaps from patches and then displays them in my indoor engine. This works fine, but with objects like cylinders I get faceted lighting, since the normals are only per surface. How can I modify the radiosity processor to get smoother lighting?
timw
598
December 24, 2005 11:03 PM
To me it sounds more like an interpolation problem. How are you interpolating colors? Each face a vertex is connected to should have some pull on the vertex's radiosity.
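For illustration, a minimal sketch of that idea in Python (the data layout and names are made up for the example): each vertex averages the radiosity of its adjacent faces, weighted by face area.

```python
def vertex_radiosity(face_radiosity, face_area, adjacent_faces):
    """Area-weighted average of per-face radiosity values at a vertex."""
    total_area = sum(face_area[f] for f in adjacent_faces)
    weighted = sum(face_radiosity[f] * face_area[f] for f in adjacent_faces)
    return weighted / total_area

# A vertex shared by three equal-area faces of a cylinder wall:
rad = {0: 1.0, 1: 0.5, 2: 0.0}
area = {0: 2.0, 1: 2.0, 2: 2.0}
print(vertex_radiosity(rad, area, [0, 1, 2]))  # 0.5
```

Rendering with these per-vertex values and Gouraud shading (or a lightmap whose texels are filtered across face boundaries) is what smooths out the faceting.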
Yann L
1,806
December 28, 2005 08:11 PM
Yes, from your question it sounds like a rendering problem. Are you using a bilinear filter on the lightmap textures ?

In case you meant actual curved surface elements (e.g. NURBS or Bezier patches), then yes, it is surely possible. The definition of the differential area isn't limited to flat patches. That's only a commonly used representation, since it allows for good optimizations and is comparatively easy to implement. Just keep in mind that the evaluation of the form factor can become very hard with analytic curved differential areas. Unless you tessellate these surfaces into planar elements and integrate afterwards - but then you'd basically be back to planar patch radiosity.
krulspeld
January 02, 2006 12:06 PM
Bilinear filtering only works if the neighbouring pixels in the lightmap actually contain the values of the neighbouring patches. For a curved surface like a cylinder, the texture parametrization will typically split the surface into small groups of triangles laid out flat on the packed lightmap. This introduces seams. If you are using a simple grow filter to support bilinear filtering, the filtering does not work properly on split-up surfaces, since it does not really sample the neighbouring patches used during the radiosity calculation.

So either try to generate a texture parametrization that does not introduce seams (hard), or add extra borders to the lightmap, not with a grow filter but by looking up the values of the neighbouring patches used in the radiosity calculation.
Yann L
1,806
January 02, 2006 01:00 PM
Quote:Original post by krulspeld
Bilinear filtering only works if the neighbouring pixels in the lightmap actually contain the values of the neighbouring patches. For a curved surface like a cylinder, the texture parametrization will typically split the surface into small groups of triangles laid out flat on the packed lightmap.

It is very important to have a good texcoord parametrization function for lightmapped radiosity. Per-triangle maps are obviously the worst possible scenario, but even the typically used edge collection growth approach is not the best solution. The paper Bounded-distortion Piecewise Mesh Parameterization gives some nice ideas about better ways to do this.

Also, the parametrization alone is not enough to give seamless surfaces. You need to add edge borders, and a way to adjust these border texels in such a way that the resulting bilinear function is as smooth as possible. This is not always possible, depending on how the UV coords on each side of an edge are oriented relative to each other. But such a seam removal technique can give tremendous quality improvements over non-adjusted cases.

I have done extensive work on GI multiresolution seam removal in the past, and could elaborate a bit more on technical details if there is any interest.
krulspeld
January 02, 2006 03:29 PM
Quote:Original post by Yann L
I have done extensive work on GI multiresolution seam removal in the past, and could elaborate a bit more on technical details if there is any interest.


I would be interested in this. Besides that, I would like to know how to handle the distortion introduced by the parametrization in the radiosity calculation. Not only do you want isotropic texels in your lightmap, but also texels of predictable area. Maybe this requires a different thread.

With respect to radiosity on curved surfaces a problem I came across with the radiosity engine I am using is that subelements that are used for gathering all use the same normal as the parent patch. By using interpolated normals for the subelements the form factor calculation would be more accurate.
Yann L
1,806
January 02, 2006 08:47 PM
Quote:Original post by krulspeld
I would be interested in this. Besides that, I would like to know how to handle the distortion introduced by the parametrization in the radiosity calculation. Not only do you want isotropic texels in your lightmap, but also texels of predictable area. Maybe this requires a different thread.

I think this is somewhat related to the original question, but if you would like to discuss it more in depth (could be interesting), I suggest creating a new thread. Just this much: any non-trivial parametrization is bound to have distortions. The idea is to limit them by using an error metric. Of course, reducing local distortion will increase the number of edges with discontinuities in the parametrization, which can introduce seams. So it's a tradeoff.

The area of patches will not always be the same, and this has to be taken into account for accurate radiosity solutions. In my solver, I measure the worldspace area of each differential area patch, and include it in the energy transfer equation through Ai or Aj (depending on whether you shoot or gather). This works very well, as long as the area difference between two differential areas is not too large. The area needs to be measured anyway in order to deal with partial patches, such as those on UV edges.
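As a rough illustration of how the measured patch area enters the transfer, here is a generic differential form factor sketch (a textbook formulation, not Yann's actual solver code):

```python
import math

def diff_form_factor(p_i, n_i, p_j, n_j, area_j):
    """Differential form factor between a point with normal n_i and a
    small patch at p_j with normal n_j and measured worldspace area A_j:
    F ~= cos(theta_i) * cos(theta_j) * A_j / (pi * r^2)."""
    d = [p_j[k] - p_i[k] for k in range(3)]
    r2 = sum(c * c for c in d)
    r = math.sqrt(r2)
    dn = [c / r for c in d]                              # unit direction i -> j
    cos_i = max(0.0, sum(n_i[k] * dn[k] for k in range(3)))
    cos_j = max(0.0, -sum(n_j[k] * dn[k] for k in range(3)))
    return cos_i * cos_j * area_j / (math.pi * r2)

# Two small parallel patches facing each other, one unit apart:
ff = diff_form_factor((0, 0, 0), (0, 0, 1), (0, 0, 1), (0, 0, -1), 0.01)
```

Note that halving area_j halves the transferred energy, which is exactly why partial patches on UV edges need their worldspace area measured rather than assumed.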

Then, there is another important point to note when dealing with edges. Usually, the lighting equation is sampled at the center of a patch, assuming the patch to be infinitely small, yet completely isotropic. When a distortion occurs, or if the patch is cut by an edge, the midpoint doesn't represent the patch correctly anymore. In this case, it needs to be repositioned, and the equations adjusted.

Quote:Original post by krulspeld
With respect to radiosity on curved surfaces a problem I came across with the radiosity engine I am using is that subelements that are used for gathering all use the same normal as the parent patch. By using interpolated normals for the subelements the form factor calculation would be more accurate.

Definitely. Using a single normal for subpatches will seriously degrade the accuracy of the FF. Moreover, it will create banding artifacts that are 'hardcoded' into the lightmap and cannot be filtered away by the hardware afterwards.

I personally advise against gathering on subpatches with lightmapped radiosity, unless they're only used for shadow subsampling. In this case, the FF is only evaluated at a single point, except for the Hij term which is sampled multiple times. This avoids the normal issue entirely. It doesn't make much sense to use 'real' subpatches in a lightmap scenario, since you're basically sampling above the Nyquist limit.

Of course you can (and should !) still use hierarchical methods for the shooting patches in order to accelerate the solution generation.

About the seam removal, I'm currently using an iterative 3 pass system. It removes pretty much all visible seams, even if the resolution and orientation of the adjacent lightmaps are very different from each other. The system uses two dimensional curve matching to compute the values of edge texels, so that the resulting bilinear function is as smooth as possible. It's a little late today, but I'll post some screenshots and explanations tomorrow.
krulspeld
January 04, 2006 01:43 PM
Quote:Original post by Yann L
About the seam removal, I'm currently using an iterative 3 pass system. It removes pretty much all visible seams, even if the resolution and orientation of the adjacent lightmaps are very different from each other. The system uses two dimensional curve matching to compute the values of edge texels, so that the resulting bilinear function is as smooth as possible. It's a little late today, but I'll post some screenshots and explanations tomorrow.


I am still interested in your seam removal system. Seam removal is not only important for correct radiosity solutions, but for all per texel calculations like for instance ambient occlusion or normal maps.
Yann L
1,806
January 05, 2006 12:21 AM
Okay, so here's the explanation of my seam removal system. As already mentioned, three individual passes are used to remove edge seams: two analytic and one iterative. Since the whole system is pretty complex, I will first explain pass 1 here, and expand on passes 2 and 3 if there is enough interest. I might put this post into a small article later on.

First of all, I will quickly outline what the edge seam problem actually is. The problem arises when applying surface maps to a mesh (the so-called texture parametrization) in order to approximate continuous functions. Lightmaps are the most common example, but the concept (and the seam problem!) really extends to pretty much any function that is represented by a set of surface maps: world space normal maps, SH maps, special effect colour textures, etc.

Since an image says more than a thousand words, let's look at an example. Here is a screenshot from a modified version of the famous Cornell box. A progressive refinement radiosity solver was used to compute the global illumination. The HDR lighting data of the solution is stored in 32-bit RGBE lightmaps (tonemapped in a pixel shader), which are applied to the geometry using a simple normal-driven box projection parametrization. The sphere in the middle of the Cornell box is a perfect geosphere. Due to the box projection UV generation, 6 lightmaps are needed to cover the entire sphere with light samples. No texture filter was used in this image:

[image no longer available]

As you can see, the lightmap resolution is very low. This was done on purpose, to better demonstrate the seam effect. It is directly apparent from the image that the lightmaps that cover the sphere don't have matching texels at their shared geometric edges. In fact, the top lightmap's texels have a completely different orientation than the front lightmap's. Furthermore, there are small texel distortions due to the non-uniform parametrization.

If we apply a bilinear filter on these lightmaps, we get this result:

[image no longer available]

The seams between the lightmaps are clearly visible, and this is a very unsatisfactory result. Increasing the lightmap resolution will alleviate the problem, at the expense of significant computational power and increased storage requirements. This is not always desirable.

But the lightmaps can be modified in such a way that the seams disappear, without changing the map resolution or UV coordinates. The following screenshot shows the result of the seam removal algorithm applied to our test scene:

[image no longer available]

This is already much better. There are still small inconsistencies visible at the edges, but even those could be removed by increasing the iterative precision of the removal system (at the expense of a small increase in processing time, but without requiring more storage space).

This algorithm comes with the following prerequisites:

* You should have a working surface map generator and a working parametrization (basically, your engine should be at the level of the second picture).
* Your surface maps should have a 'guard band area', basically a one pixel border added around the actual data.
* You should know how the GPU performs bilinear interpolation internally, and the equations behind it.
* You need a way to identify discontinuity edges, ie. edges of your mesh that are shared by two or more lightmaps.

And finally, you should have a way to compute or interpolate lighting information along these edges from your existing GI dataset or lighting equations. This is due to the way the algorithm works: it corrects the lightmaps according to reference lighting data supplied along the discontinuity edges.
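Regarding the bilinear interpolation prerequisite: per sample, the GPU evaluates the standard two-axis lerp between the four surrounding texels. A reference formulation to check your own math against (textbook version, nothing engine-specific):

```python
def bilerp(t00, t10, t01, t11, s, t):
    """Bilinear interpolation between four texel values.
    s and t are the fractional sample position within the texel quad:
    t00/t10 are the top pair, t01/t11 the bottom pair."""
    top = t00 + s * (t10 - t00)
    bottom = t01 + s * (t11 - t01)
    return top + t * (bottom - top)
```

Every equation in the passes below is just this function, or a one-dimensional slice of it, solved for one of the texel values instead of the sample.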

OK, let's get started.

The subtexel geometry

Let's first review what a lightmap texel looks like on a subtexel level:

[image no longer available]

The blue center quad T is the texel currently under consideration. Its direct neighbours are coloured in red and called Tt, Tb, Tl, Tr (for top, bottom, left, and right respectively). Let's call the four grey lines connecting the center of T with each direct neighbour texel 'direct influence paths'. They are something special, as we will see further below. The dashed rectangle symbolizes the influence area, i.e. the area that will be modified if the colour of T changes. Let's call every other possible path within this influence area that is not one of the four grey paths an 'indirect influence path'.

Let's consider an arbitrary point somewhere within the influence area of T. If this point lies on one of the four direct influence paths, its value can be accurately evaluated using the colours of only two texels: T, and one of Tt, Tb, Tl, or Tr, depending on which path the point is located on. If the point doesn't lie on a direct influence path, its position can be categorized into one of four possible quadrants (Q1 to Q4):

[image no longer available]

In this case, its colour is influenced by four different texels: T (blue), two direct neighbours (red), and an indirect neighbour (green).

Analytic linear matching

As briefly mentioned above, points lying on one of the four direct influence paths will only need two texels (instead of four) to be accurately defined. This allows us to strip a dimension away from the bilinear filter equation, essentially transforming it into a linear one in the form of:

P = A + s * (B - A)

This equation can be easily reordered to solve for either A (if s, P, and B are known), or for B (if s, P, and A are known). So, if we know the colour of a point on a direct influence path, its position, and the colour of the neighbouring texel, we can compute the colour of T. This works with all four grey paths.
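In code, the two reorderings look like this (a direct transcription of the equation above; variable names illustrative):

```python
def solve_A(P, s, B):
    """Solve P = A + s*(B - A) for A, given the sample colour P at
    parametric position s and the neighbour texel colour B."""
    return (P - s * B) / (1.0 - s)

def solve_B(P, s, A):
    """Solve the same equation for B instead."""
    return (P - (1.0 - s) * A) / s

# Round trip: A = 0.2, B = 0.8, sample at s = 0.25 gives P = 0.35
P = 0.2 + 0.25 * (0.8 - 0.2)
A = solve_A(P, 0.25, 0.8)   # recovers 0.2 (up to float rounding)
```

Note the division by (1 - s) and s: samples very close to one endpoint give numerically fragile results for the opposite texel, which is the same effect the reliability weighting below accounts for.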

Remember that we know the exact colour values along all discontinuity edges in our scene, probably interpolated from a certain number of regular sampling points along each edge. Now, let's consider what happens if a polygon edge cuts through the influence area:

[image no longer available]

As we can see, the edge E intersects two direct influence paths: T->Tr and T->Tb, at the points P and Q respectively. We know the exact colour values at P and Q (from the edge), and can therefore derive the value of T using P, Q, Tr and Tb. Since the edge intersected two direct influence paths, we get two results that might differ slightly depending on your lighting solution. We simply take the average of both colours here. We can repeat this process with all discontinuity edges that intersect the influence area of T. We end up with an average colour that we can assign to T.

Note that these calculations assume we know the values of Tr and Tb. Depending on the lightmap, we might not, because Tr, Tb, or both might lie outside the defined polygon region. This usually happens on very thin geometry that is narrower than a single texel. In this case, we ignore the sample point, since we cannot derive the colour of T without knowing the neighbour texel's colour.

Sample reliability

Consider the points P and Q from the example above. Both will be influenced by the colour of T, but P much more so than Q, because P is nearer to T. Thus, the colour sample taken at P contains more relevant information about the colour of T than the one taken at Q. Or in other words, P has a higher reliability than Q. When taking the average of the interpolated T colours as explained above, we weight each colour by the reliability of the sample point it was evaluated from. Note that this leads to an interesting behaviour: samples taken at points that lie directly on the border of the influence area have a reliability of zero. This is normal, and they can be discarded to speed up the calculation.
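A sketch of the reliability-weighted average (the reliability metric itself, e.g. one minus the sample's normalized distance from T, is my assumption about a reasonable choice):

```python
def weighted_colour(samples):
    """samples: list of (colour, reliability) pairs. Zero-reliability
    samples (on the border of the influence area) contribute nothing
    and are discarded up front."""
    usable = [(c, w) for c, w in samples if w > 0.0]
    if not usable:
        return None  # no usable information about T from this edge
    total = sum(w for _, w in usable)
    return sum(c * w for c, w in usable) / total

# P near T (reliability 0.8) dominates the farther Q (reliability 0.2):
print(weighted_colour([(1.0, 0.8), (0.0, 0.2)]))  # 0.8
```

The None case feeds directly into pass 2: texels whose sample set is empty need the iterative fallback described at the end of this post.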

Using indirect influence paths

OK, so far so good. The algorithm above only works if the discontinuity edges intersect at least one direct influence path. What happens if they don't? Well, then we get an empty sample point set, and cannot evaluate the colour of T. Not good. Consider this situation:

[image no longer available]

No edge of the purple polygon intersects a direct influence path, although the point A (for example) will definitely be influenced by the colour of T. So the edges still contain information about T, even if they lie on an indirect influence path; it's just that this information is more difficult to extract. Now we're working with a two-dimensional bilinear equation instead of the one-dimensional linear one we used above. This means that in order to use the value of A, we need to know the colours of all texels belonging to the quadrant A happens to be in. A is in quadrant Q3, so we need to know the colours of Tr, Tb, and the indirect neighbour Trb (in green) in order to compute the colour of T:

[image no longer available]

We can decompose the two-dimensional equation into two simple one-dimensional ones. We first compute P and Q by interpolating the colours from Tr, Tb, and Trb. Then we can evaluate P' and Q' using A and P or Q respectively (again taking reliabilities into account!). Finally, we have two samples P' and Q', each lying on a direct influence path. We can then simply plug both samples into the equations we used for direct paths above, yielding two additional sample points for evaluating T.
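A sketch of that decomposition (the coordinate framing is my own reading of the description: T sits at the quadrant's origin, and sx, sy give A's position within quadrant Q3):

```python
def lerp(a, b, s):
    return a + s * (b - a)

def samples_from_quadrant(Tr, Tb, Trb, A, sx, sy):
    """Decompose the 2D bilinear equation: first interpolate P and Q
    from the known texels Tr, Tb, Trb, then solve the two remaining
    1D equations through the edge sample A for P' and Q', which lie
    on the direct influence paths of T."""
    P = lerp(Tr, Trb, sy)                # on the column through Tr, Trb
    Q = lerp(Tb, Trb, sx)                # on the row through Tb, Trb
    P_prime = (A - sx * P) / (1.0 - sx)  # from A = P' + sx * (P - P')
    Q_prime = (A - sy * Q) / (1.0 - sy)  # from A = Q' + sy * (Q - Q')
    return P_prime, Q_prime

# Sanity check with the bilinear field f(x, y) = x + y and T at (0, 0):
p, q = samples_from_quadrant(1.0, 1.0, 2.0, 1.0, 0.5, 0.5)
# both recover 0.5, the field value on the two direct influence paths
```

Each returned sample can then be fed into the direct-path equations above exactly as if a discontinuity edge had intersected those paths itself.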

This concludes pass 1. This pass will already remove a lot of the smaller seams and make larger ones much less noticeable. But there are still situations, where texels cannot be evaluated, because not enough neighbouring texel colours are known. For these cases we have pass 2, which tries to evaluate T's colour using an iterative approximation approach, similar to a genetic algorithm.

[Pass2 and 3 will follow only if there is enough interest, because they're much more complex]
timw
598
January 05, 2006 04:27 AM
Really clever. I'm definitely interested in 2 and 3.

One thing I'm confused about:

I thought the original question was about straight texture mapping of light values, not environment/projective texturing. Guess I read it wrong.


[Edited by - timw on January 5, 2006 4:27:48 AM]