Radiosity C++ tutorial

12 comments, last by Krohm 12 years, 4 months ago

You can bake per vertex, which keeps things pretty simple. For textures you have to generate some sort of unique parameterization (UV mapping) of all meshes in the scene. Your sample points are then located at the center of each texel in the texture that you map to the scene. The DX9 D3DX library has a class that can generate the parameterization for you, although it's a little hard to use since the documentation sucks.

Why do you ask about deferred rendering? If you pre-bake lighting, it's almost totally unrelated to how you handle your runtime lighting.


Vertex coloring was the first thing I thought of myself. A problem with it is that it won't look as good on large, flat surfaces where there aren't a lot of vertices. Maybe tessellation needs to be done?

However, I find the texturing approach more appealing, as I would have finer control over it using pixel shaders. UV unwrapping via D3DX won't be possible, though, because I'm using XNA, and neither that library nor its UV atlas functions are available.

I was planning to start by copying the models' UV coordinates straight through for the baked lightmap textures. Sometimes the material textures wrap, though, and it doesn't make sense for the lightmaps to do that; every lightmap texel must be unique. So if any coordinates use wrapping and go beyond the 0-to-1 range, all of the coordinates for the baked texture are normalized back into it.

Then, based on the transformed scale of the object in the scene and its original bounding box size, a texture size is chosen so that all lightmap texels look pretty uniform in size across the scene. The lightmap textures will still be noticeably coarser than the material textures, to balance detail against the time it takes to produce them. Does it seem sensible so far?

There's still the problem of models that reuse UV coordinates for material textures. Without a more involved UV unwrapping method, I'm not going to get around it other than by breaking the parts that reuse coordinates into separate objects in the scene. Then again, I may be overthinking it.
Electronic Meteor - My experiences with XNA and game development

Vertex coloring was the first thing I thought of myself. A problem with it is that it won't look as good on large, flat surfaces where there aren't a lot of vertices. Maybe tessellation needs to be done?


Absolutely, that's the major problem with baking per-vertex: your lighting resolution is directly tied to how much you tessellate your meshes. So yeah you can add more verts, but then you end up adding more positions/normals/texture coordinates/tangents/etc. where you may not really need them. This is why lightmaps are nice, because their resolution is decoupled from your underlying geometry.


However, I find the texturing approach more appealing, as I would have finer control over it using pixel shaders. UV unwrapping via D3DX won't be possible, though, because I'm using XNA, and neither that library nor its UV atlas functions are available.


Well, unwrapping is definitely something you'd want to do during content authoring or during your content processing phase. So you could certainly write a content processor that handles the mesh processing, perhaps using SlimDX or another managed wrapper. But if you don't want an automated process, then you can just author the UVs as part of creating your models, as long as you either make sure that they're unique or you split your lightmaps into the required number of separate textures.


I was planning to start by copying the models' UV coordinates straight through for the baked lightmap textures. Sometimes the material textures wrap, though, and it doesn't make sense for the lightmaps to do that; every lightmap texel must be unique. So if any coordinates use wrapping and go beyond the 0-to-1 range, all of the coordinates for the baked texture are normalized back into it.

Then, based on the transformed scale of the object in the scene and its original bounding box size, a texture size is chosen so that all lightmap texels look pretty uniform in size across the scene. The lightmap textures will still be noticeably coarser than the material textures, to balance detail against the time it takes to produce them. Does it seem sensible so far?

There's still the problem of models that reuse UV coordinates for material textures. Without a more involved UV unwrapping method, I'm not going to get around it other than by breaking the parts that reuse coordinates into separate objects in the scene. Then again, I may be overthinking it.


Yeah, that sounds like a fine way to start out. After you get that working, you can experiment with adjusting texture resolution based on a better metric, such as triangle surface area, rather than just a bounding box.

I don't know if it's related, but I implemented the algorithm from Hugo Elias's website some 7 years ago (unwrapping static meshes to light maps is evil!!!) using render-to-texture, and even back then an average ray tracer could beat it in speed. Or is it just that using the GPU for that particular implementation is slow?

Ah... that was a long time ago. I bet most modern games don't (mostly) use light maps, and I personally would think twice before implementing them (of course, if it's for learning purposes, then go for it), since you can do dynamic lighting on most PCs today, and on some even dynamic GI... eh... good dynamic GI, not that SSAO trick that everyone overdoes until it looks very, very bad :wink: ... not that SSAO couldn't be nice, but most people overdo it a lot.




I think you're right about SSAO; it needs a lot of fine-tuning to look right, and I've seen examples where the effect looks out of place next to how everything else is rendered. Having learnt about the process behind GI, it just seems like the "right" way to get that kind of lighting effect, and you get a lot more along with it, too.
Electronic Meteor - My experiences with XNA and game development
I'm with Krypt0n on this.
I think I've read somewhere in the UDK documentation that per-vertex baking is discouraged. But perhaps they were talking about ambient occlusion?

Previously "Krohm"

