
normalization cubemap size


Hi! I was wondering what size you use for your normalization cubemaps. I used 128x128 all the time (for per-pixel lighting/bumpmapping) until I saw some really weird banding effect on some models. After increasing the size to 256x256 it was a bit less visible. (I can post a screenshot if you want.) What size do you use to get the best performance/quality ratio?

You should never let your fears become the boundaries of your dreams.

Actually, 128*128 is more than enough most of the time (you can even go lower; most bumpmapping demos from nVidia use 32*32 maps). Very important: turn off any filtering on your cubemaps! Use nearest-pixel lookup, as linear filtering will denormalize the cubemap values.

Switching from linear to nearest-pixel lookup didn't change a thing. I guess there is another flaw in my algorithm. Here's a picture of the problem:
http://www2.arnes.si/~uteran/scene0.jpg




Hard to say without seeing your algorithm. Is that a diffuse or specular highlight on the board?

It's not impossible that the cubemap is the cause; it depends on the geometry and your algorithm, and again that's hard to tell from a shot. Is the checkerboard flat on the surface (i.e. is the 3D entirely bump) or are the tiles curved?

Especially important: how are you using the normalization map?

It's a specular highlight. The geometry for the board is just one big box; the tiles are only on the bumpmap.

"How are you using the normalization map?" I really don't understand what you want me to tell you.


The algorithm looks something like this.
In the vertex program:
-calculate L (point->light)
-calculate E (point->eye)
-calculate H = L+E
-transform H to texture space by the TBN matrix
-send H to the texture unit with the normalization cubemap bound
-do some other stuff (for the diffuse part, distance attenuation, ...)

Register combiners setup (just the specular part):
-spare0 = H from the normalization cubemap DOT normal from the normal map
-raise (H.N) to (H.N)^16 [ I tried both the normal method with a few GCS and the fake one with x = 4 * clamp01(x-0.75), and the result is almost the same for both. ]
-multiply with the light color and so on...


A bit of a side question: how do you handle attenuation for the specular part? It looks weird if you ignore it, but even weirder if you use the same attenuation function as for the diffuse part.


(The funny thing is that I wrote this thing over a month ago and never noticed this bug, until a few days ago I tried this "shader" on a big flat object. It worked "just fine" on curved ones.)


OK, I think I found what the problem is. In the fake power-up I use something like this:

x = 4 * Clamp_In_0_TO_1( (H dot N) - 0.75 )

and that 4* eats 2 bits of an already low-precision 8-bit value. The situation with the real (H dot N)^16 is not much better.

Any idea how to solve this?


