dreddlox

Big optimization idea in bumpmapping pixel shader

This topic is 4673 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.

If you intended to correct an error in the post then please contact us.

Recommended Posts

OK, this just came to me (the recent Gamasutra article about texture compression sparked the idea).

Proposal: Instead of storing a texture + normal map as RGB + XY with computed Z, I believe it would be more efficient to store it as RG + XYZ with computed B.

Explanation: Deriving Z from X and Y on a normal map is fairly expensive. If Z is stored with the normal map instead, the normal's length becomes a free component that can scale the brightness of that texel up or down. If the colour is "pseudo-normalized" so that R+G+B = 1, and the XYZ normal is shortened/lengthened to compensate, the B component can be dropped from the colour and recovered with B = 1 - R - G.

The math behind it:

Traditional method, pixel shader code (no preprocessing):

  XY = XY
  Z = sqrt(1 - X*X - Y*Y)
  RGB = RGB * dot(XYZ, lightXYZ)

My method, preprocessing:

  XYZ = XYZ * (R+G+B) / 3
  RG = RG / (R+G+B)

Pixel shader code:

  XYZ = XYZ * 3
  RG = RG * dot(XYZ, lightXYZ)
  B = (1 - R - G) * dot(XYZ, lightXYZ)

As you can see, this eliminates a sqrt in the pixel shader and untangles a mul. Has anyone thought of this before? Has anyone ever seen an implementation of it?

Edit: My math was funky, I fixed it.

[Edited by - dreddlox on January 2, 2006 5:10:11 AM]
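The claimed equivalence of the two paths can be checked outside a shader. Below is a minimal Python sketch of both pipelines (my own transcription, not shader code from the post); it assumes a unit-length input normal and a colour with R+G+B > 0, and the function names are made up for illustration:

```python
import math

def traditional(rgb, xy, light):
    # Runtime: reconstruct Z from the unit-length constraint, then light.
    x, y = xy
    # Clamp to guard against rounding pushing the operand slightly negative.
    z = math.sqrt(max(0.0, 1.0 - x * x - y * y))
    ndl = x * light[0] + y * light[1] + z * light[2]
    return tuple(c * ndl for c in rgb)

def preprocess(rgb, xyz):
    # Offline: fold the colour's total brightness (R+G+B) into the normal's
    # length, then pseudo-normalize the colour so that R + G + B = 1.
    s = rgb[0] + rgb[1] + rgb[2]
    return (rgb[0] / s, rgb[1] / s), tuple(c * s / 3.0 for c in xyz)

def proposed(rg, xyz, light):
    # Runtime: no sqrt; B is recovered as 1 - R - G.
    ndl = 3.0 * (xyz[0] * light[0] + xyz[1] * light[1] + xyz[2] * light[2])
    r, g = rg
    return (r * ndl, g * ndl, (1.0 - r - g) * ndl)
```

For any valid input, `preprocess` followed by `proposed` should reproduce `traditional` up to floating-point rounding, while the runtime path swaps the sqrt for a couple of subtractions and multiplies.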

What about the other 99.9% of cases in which the diffuse/colour texture isn't pseudo-normalized, such that R+G+B=1?

Quote:
Original post by Cypher19
What about the other 99.9% of cases in which the diffuse/colour texture isn't pseudo-normalized, such that R+G+B=1?


The colour texture is pseudo-normalized in preprocessing (i.e. not at runtime). When the length of the normal is adjusted to compensate, the texel effectively becomes brighter or dimmer, undoing the pseudo-normalization.
This is basically an adaptation of the YUV colour model used by video codecs (Y = luma, UV = chroma).

Here's an example:
Start with the colour RGB = {0.5, 1.0, 0.5}, the normal XYZ = {sqrt(0.5), sqrt(0.5), 0}, and the transformed light vector {1, 0, 0}.
Do my preprocessing math to get the new values: RG = {0.25, 0.5}, XYZ = {sqrt(0.5)*2/3, sqrt(0.5)*2/3, 0}.
In the pixel shader:
Calculate that RGB' = {0.25, 0.5, 0.25}.
Multiply through the lighting equation to get the pixel value: RGB = {sqrt(0.5)/2, sqrt(0.5), sqrt(0.5)/2}

Compared to the traditional method:
Preprocess to get RGB = {0.5, 1.0, 0.5}, XY = {sqrt(0.5), sqrt(0.5)}.
In the pixel shader:
Calculate XYZ = {sqrt(0.5), sqrt(0.5), 0}.
Multiply through the lighting equation to get the pixel value: RGB = {sqrt(0.5)/2, sqrt(0.5), sqrt(0.5)/2}
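The numbers in the example above can be checked mechanically. Here is the same walkthrough transcribed into plain Python (my own transcription; the variable names are not from the thread):

```python
import math

s = math.sqrt(0.5)
rgb, light = (0.5, 1.0, 0.5), (1.0, 0.0, 0.0)

# Preprocess, with R+G+B = 2: RG = RG/(R+G+B), XYZ = XYZ*(R+G+B)/3.
rg = (rgb[0] / 2.0, rgb[1] / 2.0)       # (0.25, 0.5)
n = (s * 2.0 / 3.0, s * 2.0 / 3.0, 0.0)

# Pixel shader: rescale the normal, recover B = 1 - R - G, then light.
ndl = 3.0 * (n[0] * light[0] + n[1] * light[1] + n[2] * light[2])
out = (rg[0] * ndl, rg[1] * ndl, (1.0 - rg[0] - rg[1]) * ndl)

# Traditional path: light the original colour with the original unit normal.
ndl_ref = s * light[0] + s * light[1]   # dot({s, s, 0}, light)
ref = tuple(c * ndl_ref for c in rgb)
```

Both paths come out to {sqrt(0.5)/2, sqrt(0.5), sqrt(0.5)/2}, matching the hand calculation.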

Edit: I'm tired and not completing my sentences. Lay off >.<

Why do you have to calculate either? Can't you just store the Z in the normal map along with X and Y (as well as having an RGB texture), or am I missing something here?

Quote:
Original post by Monder
Why do you have to calculate either? Can't you just store the Z in the normal map along with X and Y (as well as having an RGB texture), or am I missing something here?


Mainly to conserve memory bandwidth. Almost all texture formats require that a texel be 1, 2, or 4 bytes, which means an XYZ texture would be stored as an XYZ_ texture (i.e. with an unused byte on the end), whereas an XY texture is stored as-is and takes up half the space of the XYZ_. Another possibility is having an RGBX texture and a YZ texture, but that just feels silly.
Admittedly, this means my algorithm is most useful in a situation with an alpha channel, e.g. RG + XYZA. But combinations of the DXT formats can make RG + XYZ worthwhile too.
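To put rough numbers on that, here is an illustrative tally of uncompressed bytes per texel under the padding rule described above (8-bit channels, texels padded to 1/2/4 bytes; DXT compression would change the picture, and the labels are my own):

```python
# Bytes per texel for each colour + normal layout, assuming each texture's
# texel is padded up to a 1-, 2-, or 4-byte size.
layouts = {
    "RGB_ + XYZ_ (store everything, both padded)": 4 + 4,
    "RGB_ + XY   (traditional, Z computed)":       4 + 2,
    "RG   + XYZ_ (proposed, B computed)":          2 + 4,
    "RG   + XYZA (proposed, alpha in the pad)":    2 + 4,
}
for name, size in layouts.items():
    print(f"{name}: {size} bytes")
```

Uncompressed, the proposed layout matches the traditional one at 6 bytes per texel; the win described here is against the naive 8-byte layout, plus the otherwise-wasted pad byte carrying alpha for free.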

Quote:
Another possibility is having an RGBX texture and a YZ texture, but that just feels silly.


Well, I'd say it's a matter of whether it runs at a speed equal to or greater than your method, rather than whether it feels silly or not.

Plus there's other info you may want stored in those textures as well, e.g. gloss maps, height maps (for parallax mapping), alpha channels, etc.

Sure, there will be situations where your method is worthwhile, but remember to consider the other options as well.
