Lightmap vs mipmap


Hello guys: I have just finished my lightmap generator. Everything seemed OK until I turned on mipmap filtering, at which point colour bleeding appeared. When I pull my camera far away from the lightmapped model, the edges of the triangles fade in. I have padded the lightmap texture edges with 1 pixel. How can I solve this problem? And why does it occur? Any ideas?

OK, let's say you have a 64x64 lightmap. How do you generate the mipmaps? I'm assuming you lower the resolution and do some filtering on the initial map (I don't know what API you use, but that's what gluBuild2DMipmaps does for OpenGL).
I think that will give you wrong results. You need to create all the mipmaps independently, with the same method. That is, calculate the lightmap at 64x64, then calculate it again at 32x32, 16x16, and so on.

quote:
Original post by mikeman
I think that will give you wrong results. You need to create all the mipmaps independently, with the same method. That is, calculate the lightmap at 64x64, then calculate it again at 32x32, 16x16, and so on.


That would be a waste of resources. Downsampling the mipmap pyramid of each single lightmap is perfectly fine.

quote:

I have just finished my lightmap generator. Everything seemed OK until I turned on mipmap filtering, and colour bleeding appeared. When I pull my camera far away from the lightmapped model, the edges of the triangles fade in. I have padded the lightmap texture edges with 1 pixel. How can I solve this problem? And why does it occur?


I take it that you packed several lightmaps into a single texture? In that case, a 1-pixel border is not enough for mipmapping. Keep in mind that each mipmap level is half the resolution of the previous one, so a one-pixel border in the base map becomes a 0.5-pixel border in level 1, which of course cannot be represented on a discrete grid such as a texture. You will end up with an interpolated colour, somewhere between the border colour and whatever happens to be beside it (probably another lightmap). That's the colour bleeding effect you get. A 2-pixel border would be sufficient with level 0 and 1 mipmaps, but level 2 already requires a 4 px border, and so on.
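In other words, the border has to double with every mip level you intend to use. A minimal sketch of that rule (the function name `required_border` is purely illustrative, not from any API):

```python
def required_border(max_level):
    """Minimum base-map border (in pixels) around each packed lightmap so
    that the border is still at least one full pixel wide at `max_level`.
    A border of b base pixels shrinks to b / 2**level pixels per level,
    so we need b >= 2**max_level."""
    return 2 ** max_level

# 1 px survives only level 0; level 1 needs 2 px, level 2 needs 4 px, ...
```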

Generally speaking, mipmapping is not advised with packed textures. If your lightmaps are not too high-res, then simply turning off mipmapping may do the trick. If not, you can try playing with the border thickness and with the maximum mipmap level. And if you feel like delving into the realm of pixel shaders, you can get better results by clamping the texcoords to a subrectangle of the texture, although even that doesn't solve the problem entirely.

Oh yes, and turn off anisotropic filtering on the lightmaps, in case you enabled it.


[edited by - Yann L on May 31, 2004 10:31:56 AM]

quote:
Original post by Yann L
That would be a waste of resources. Downsampling the mipmap pyramid of each single lightmap is perfectly fine.



How is that a waste of resources? The mipmaps occupy the same space in video memory no matter what method you use to create them. Sure, it may take some extra time to calculate them, but that happens only once, offline. I just think it's better to use downsampling for texture maps and more advanced methods for other types of maps (lightmaps, heightmaps, normal maps...).

It is a waste of resources because you're spending extra time (even if you only do it once) to generate something which will be equivalent to simply downsampling. You will not gain anything by recalculating it all, since you've already calculated it at a denser sampling.

It's like having already calculated the square roots of the numbers from 1-100 and then needing just the even ones. It would be a waste to recalculate the square roots of 2, 4, 6, ... when instead you could just "resample" the ones you already have. Granted, not the best example, but I hope it gets the point across.

If we want to go nitpicking, then recalculating each level is actually not only a waste of resources (time is a precious resource, even offline - you'll notice that when you try GI solutions), but also a mathematically incorrect approach.

From digital signal theory, mipmapping is a set of precalculated filtered images, a discretized lowpass filter lookup table. It is used to approximate the Nyquist curve with a few samples. That's exactly what the GPU will try to do, filling the gaps between samples with interpolation. For this to work as intended, it is important to supply the correct prefiltered set of maps, so that the Nyquist limit curve can be approximated adequately. Each mipmap is nothing but a filtered version of the previous one, with a lower maximal frequency cutoff (ie. a narrower band, as the lower frequency cutoff is constant).

Recalculating each mipmap level is inaccurate, and doesn't match the filter coefficient needed for correct mipmapping. It can therefore introduce visual artifacts, in the form of aliasing. The mipmap technique is supposed to work on a set of prefiltered (ie. downsampled) maps.

Edit: in case the above was too theoretical, I'll put it in simple words: recalculating each mipmap level separately is mathematically equivalent to downsampling using a nearest texel lookup. That is, downsampling without any filtering at all.
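To illustrate the downsampling approach being advocated here, a minimal sketch of building a mip pyramid by repeated 2x2 box filtering (a plain average). The names `downsample` and `build_pyramid` are illustrative, and the maps are assumed to be square lists of rows of grayscale values with power-of-two sizes:

```python
def downsample(img):
    """Average each 2x2 block of texels into one (a 2x2 box filter)."""
    n = len(img) // 2
    return [[(img[2*y][2*x] + img[2*y][2*x+1] +
              img[2*y+1][2*x] + img[2*y+1][2*x+1]) / 4.0
             for x in range(n)] for y in range(n)]

def build_pyramid(base):
    """All mip levels, from the base map down to 1x1, each one a
    filtered version of the previous one."""
    levels = [base]
    while len(levels[-1]) > 1:
        levels.append(downsample(levels[-1]))
    return levels
```

Recomputing each level from scratch, by contrast, would resample the original signal without this lowpass step, which is the aliasing problem described above.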


[edited by - Yann L on May 31, 2004 11:39:20 AM]

OK, that seems to make sense. I apologize to the OP if I caused any trouble with incorrect advice.
Yann L, you seem to be far more experienced than I am, so since you're here, I want to ask you something relevant. What about normal maps? Is downsampling OK with them too? I'm asking because I create the mipmaps with gluBuild2DMipmaps and perform bumpmapping in a fragment shader. When I try to renormalize the normal obtained from the normal map, the specular highlight looks much better, but I get huge aliasing on distant objects.

quote:
Original post by mikeman
What about normal maps? Is downsampling OK with them too?


Normal maps are very problematic, because the mathematical model breaks down with them.

Consider a 2x2 pixel area on a level 0 map. It contains 4 normals, and will exhibit a complex lighting pattern which is a function of the light source direction and all four normals. That is, the four normals over that tiny patch define a complex bidirectional reflectance distribution function (BRDF), which is unique for this normal combination.

Mipmapping replaces four normals with a single one. But there is no way to accurately reproduce a four parameter BRDF with a single parameter (normal). So the only answer to that question is: there is no correct solution. There are only approximations.

The mathematically correct approach would be as follows: at a higher mipmap level, compute the lighting solution as an integration over all normals in the level 0 map covered by the mipmapped patch (ie. using the accurate BRDF). Then, filter down the result to the appropriate mipmap level on the fly. Unfortunately, doing that in hardware is impossible right now.

You basically have two methods to filter normal maps: either simply average them as in normal mipmapping, followed by a renormalization. Or downsample the original heightmap, and compute the normal maps (through finite differencing) directly on the filtered heightmap. Both results will be mathematically incorrect, but can look visually OK. Which one looks better is personal preference and depends on the type of textures and bump maps you use. Try both, and keep the one you prefer.
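The first of those two options (average, then renormalize) can be sketched as follows. This is illustrative code working on (x, y, z) tuples, not any particular engine's implementation:

```python
import math

def downsample_normals(nmap):
    """Halve a normal map: average each 2x2 block of normals,
    then renormalize the result to unit length."""
    n = len(nmap) // 2
    out = []
    for y in range(n):
        row = []
        for x in range(n):
            block = [nmap[2*y][2*x], nmap[2*y][2*x+1],
                     nmap[2*y+1][2*x], nmap[2*y+1][2*x+1]]
            ax = sum(v[0] for v in block) / 4.0
            ay = sum(v[1] for v in block) / 4.0
            az = sum(v[2] for v in block) / 4.0
            # Guard against a degenerate all-cancelling block.
            length = math.sqrt(ax*ax + ay*ay + az*az) or 1.0
            row.append((ax/length, ay/length, az/length))
        out.append(row)
    return out
```

Note how opposing normals cancel in the average, which is exactly the information loss the BRDF argument above describes.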

quote:
Original post by Yann L
You basically have two methods to filter normal maps: either simply average them as in normal mipmapping, followed by a renormalization. Or downsample the original heightmap, and compute the normal maps (through finite differencing) directly on the filtered heightmap. Both results will be mathematically incorrect, but can look visually OK. Which one looks better is personal preference and depends on the type of textures and bump maps you use. Try both, and keep the one you prefer.



Thanks, you are really helpful.
Although I don't see how I could generate a normal map from a 1x1 heightmap.

quote:
Original post by mikeman
Although I don't see how I could generate a normal map from a 1x1 heightmap.


Easy: it's (0,0,1). At the size of one pixel, one can reasonably assume that the screenspace projection of the associated face is small enough that its normal can be approximated by the weighted average of the vertex normals. To make that compatible with the tangent-space coordinate frame, one needs a normal that is approximately parallel to the averaged vertex normals in tangent space: the perpendicular surface vector (0,0,1).

It's like averaging a colour texture map down to a single constant colour in the distance (basically what a 1x1 colour mipmap level does).
