Posted 11 December 2011 - 03:30 PM
Ah... that was a long time ago, I bet that most modern games doesn't use light maps (mostly) - I personally would think twice before implementing them (of course if it is not for learning purpose - then go for it), as you can do dynamic lighting on most PCs today
Posted 13 December 2011 - 01:25 PM
You can bake per vertex, which makes things pretty simple. For textures you have to generate some sort of unique parameterization (UV mapping) of all meshes in the scene. Your sample points are then located at the center of each texel in the texture that you map to the scene. The DX9 D3DX library has a class that can generate the parameterization for you, although it's a little hard to use since the documentation sucks.
Why do you ask about deferred rendering? If you pre-bake lighting, it's almost totally unrelated to how you handle your runtime lighting.
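A minimal sketch of the texel-center sampling idea (the function name and structure are mine, not anything from D3DX): once a unique parameterization exists, the bake loop visits the UV coordinate at the center of each texel and evaluates lighting at the surface point that maps there.

```python
def texel_center_uvs(width, height):
    """Return the (u, v) coordinate at the center of every texel of a
    width x height lightmap, with u and v in [0, 1]. The half-texel
    offset keeps samples at texel centers rather than texel corners."""
    return [((x + 0.5) / width, (y + 0.5) / height)
            for y in range(height)
            for x in range(width)]

# For a 4x4 lightmap the first sample sits at (0.125, 0.125),
# the last at (0.875, 0.875).
uvs = texel_center_uvs(4, 4)
```

Mapping each of those UVs back to a world-space position (by rasterizing the mesh's triangles in its unique parameterization) is the part the D3DX class helps with.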
Posted 13 December 2011 - 01:43 PM
Vertex coloring was the first thing I thought of myself. A problem with it is that it won't look as good on large, flat surfaces where there aren't a lot of vertices. Maybe tessellation would be needed?
However, I find the texturing approach more appealing, as I would have finer control over it using pixel shaders. UV unwrapping via D3DX won't be possible, though, because I'm using XNA, where that library and its UV atlas functions aren't available.
I was planning to start by copying the models' UV coordinates straight through for the baked lightmap textures. Sometimes the material textures wrap, though, and it doesn't make sense for the lightmaps to do that; every texel must be unique for the lightmaps. So if some coordinates use wrapping and go beyond the bounds of 0 to 1, all of the coordinates for the baked texture are normalized back into that range.
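That normalization step could be sketched like this (assuming simple min/max rescaling, which is my assumption rather than anything XNA provides):

```python
def normalize_uvs(uvs):
    """Rescale a list of (u, v) pairs so they fit exactly inside [0, 1].
    Useful when material UVs rely on wrapping and exceed the 0..1 range,
    which a unique lightmap parameterization cannot allow."""
    us = [u for u, _ in uvs]
    vs = [v for _, v in uvs]
    # Guard against a zero span (all points identical on an axis).
    u_min, u_span = min(us), (max(us) - min(us)) or 1.0
    v_min, v_span = min(vs), (max(vs) - min(vs)) or 1.0
    return [((u - u_min) / u_span, (v - v_min) / v_span) for u, v in uvs]

# UVs spanning -1..3 get squeezed into 0..1.
remapped = normalize_uvs([(0.0, -1.0), (2.0, 3.0)])
```

Note this rescales the whole chart uniformly per axis, so overlapping triangles stay overlapping; it only fixes out-of-range coordinates, not shared ones.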
Then, based on the object's transformed scale in the scene and its original bounding-box size, a texture size is chosen so that lightmap texels look roughly uniform in size across the scene. The lightmap textures will still be noticeably coarser than the material textures, to balance detail against the time it takes to produce them. Does this seem sensible so far?
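That size heuristic could look something like the following (the texel density and the power-of-two cap are assumed tunables of mine, not values from the post):

```python
def choose_lightmap_size(bounds_size, scale, texels_per_unit=4.0, max_size=256):
    """Pick a power-of-two lightmap resolution so texels come out roughly
    uniform in world-space size across objects. bounds_size is the object's
    local bounding-box extent, scale its world-space scale factor."""
    world_extent = max(bounds_size) * scale
    target = world_extent * texels_per_unit
    # Round up to the next power of two, capped so huge objects
    # don't produce unreasonably large bake targets.
    size = 1
    while size < target and size < max_size:
        size *= 2
    return size

# A 2-unit-wide crate at unit scale gets an 8x8 lightmap at 4 texels/unit.
crate_size = choose_lightmap_size((2.0, 1.0, 2.0), 1.0)
```

Keeping the density in world units (texels per unit) rather than per object is what makes texel size look consistent between small props and large walls.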
There is still the problem of models that reuse UV coordinates for their material textures. Without a more involved UV unwrapping method, I won't get around that other than by breaking the parts that reuse coordinates into separate objects in the scene. Still, I think I may be overthinking it.
Posted 14 December 2011 - 12:40 AM
Don't know if it's related, but I implemented the algorithm from Hugo Elias's website some 7 years ago (unwrapping static meshes to light maps is evil!!!) via render-to-texture, and even back then an average ray tracer could beat it in speed. Or was it that using the GPU for the actual implementation was just that slow?
Ah... that was a long time ago. I bet that most modern games don't use light maps (mostly); I personally would think twice before implementing them (of course, if it's for learning purposes, then go for it), as you can do dynamic lighting on most PCs today (on some even dynamic GI... eh, good dynamic GI, not that SSAO trick that everyone overdoes until it looks very, very bad... not that SSAO couldn't be nice, but most people overdo it a lot).