Normalization CubeMap vs normalize()


Hi all! A few days ago I made a dot3 bump mapping demo. In the pixel shader I used the normalize() function instead of a normalization cubemap. I've read in some forums that on newer graphics cards there is no performance difference between the two. Is this true? Another question: when I build the vertex buffers and find one vertex that needs two different UV coordinates, is there a more efficient solution than duplicating the vertex? Thanks for the replies. P.S. Sorry for my English. P.S. You can see some images of the demo here (Spanish): http://beerss.spaces.live.com/
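On the second question: duplicating the vertex is the standard answer, but you only duplicate when the (position, UV) combination is genuinely new, and the index buffer lets every triangle that agrees on a combination share one entry. A minimal sketch of that deduplication pass (the `Vertex` layout and function names here are mine, not from the post):

```cpp
#include <cassert>
#include <cstdint>
#include <map>
#include <vector>

// Hypothetical vertex: a position index plus one UV pair.
struct Vertex { int pos; float u, v; };

bool operator<(const Vertex& a, const Vertex& b) {
    if (a.pos != b.pos) return a.pos < b.pos;
    if (a.u   != b.u)   return a.u   < b.u;
    return a.v < b.v;
}

// Build a vertex buffer and index buffer from a raw stream of corners.
// A vertex is duplicated only when its (position, uv) combination is new;
// identical combinations are shared through the index buffer.
void buildBuffers(const std::vector<Vertex>& in,
                  std::vector<Vertex>& vb,
                  std::vector<uint32_t>& ib) {
    std::map<Vertex, uint32_t> seen;
    for (const Vertex& v : in) {
        auto it = seen.find(v);
        if (it == seen.end()) {
            it = seen.emplace(v, static_cast<uint32_t>(vb.size())).first;
            vb.push_back(v);
        }
        ib.push_back(it->second);
    }
}
```

A position referenced with two different UVs still ends up twice in the vertex buffer, once per UV; there is no cheaper general way around that on standard hardware, because the vertex fetch reads exactly one UV per vertex-buffer entry.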

Don't listen to whatever is written in that paper. As you might have noticed, it is from 2004 and is about the GeForce FX. Today's GPUs have totally different performance characteristics than those old cards.

Today, there is almost no reason to ever use a normalization cubemap anymore.
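For reference, normalize() is just a handful of ALU operations: a dot product, a reciprocal square root, and a scale. That is what makes it cheap on modern hardware compared to a cubemap texture fetch. A scalar C++ sketch of the same arithmetic (names are mine, not from the thread):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// The same math a GPU's normalize() intrinsic performs:
// dot(v, v), then rsqrt, then a per-component scale.
Vec3 normalize3(Vec3 v) {
    float len2 = v.x * v.x + v.y * v.y + v.z * v.z;  // dot(v, v)
    float inv  = 1.0f / std::sqrt(len2);             // rsqrt
    return { v.x * inv, v.y * inv, v.z * inv };      // scale
}
```

The normalization cubemap approximated exactly this result with a texture lookup, trading ALU work for bandwidth and precision; with today's ALU throughput the trade no longer pays off.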

In fact, there is a later paper which basically says "clock speed is going to keep going up, bandwidth is limited, so favor ALU ops over texture ops where you can"; normalising would be exactly such a case.
