Dragon_Strike

pre-calculating binormals or not?


Recommended Posts

I'm unsure how I should go about this... should I pre-calculate binormals and store them in the vertex buffer, or calculate them in the vertex shader? It basically comes down to 25% more data in the vertex buffer versus one extra cross() per vertex... I'm unsure which would be better.
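i.e. something like this in the vertex shader (just a sketch, HLSL-style; the names are made up):

float3 ComputeBinormal(float3 normal, float3 tangent)
{
    // one cross() per vertex instead of streaming a third float3
    // (assumes normal and tangent are already unit length and orthogonal)
    return cross(normal, tangent);
}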

Recent GPU architectures tend to favour arithmetic over I/O operations - I've read that the G80 series can be optimal at around a 10:1 ratio of arithmetic to memory operations. I'd expect this trend to continue, and for memory latency to increase relatively.

OK, so that's more of a concern for texturing, but I'd be confident that a 25% increase in bandwidth usage would be much worse than a few extra arithmetic instructions.

Run up something like NVPerfHUD to try and gauge where your application is bottlenecked. If you're already memory-bound then you might find the extra ALU ops are effectively free - dropping the binormal from the vertex stream eases the bottleneck, while the extra arithmetic hides behind it...

hth
Jack

From investigation into using the GS, I would say it's a reasonable but less compelling option. The general consensus is that the GS on first-generation hardware wasn't quite up to a heavy workload and, at least in D3D10, it's hard to implement decent smoothing groups (etc.) from the GS+adjacency mode, so you can potentially still get better results by pre-generating on the CPU. Then again, a smoothed TBN doesn't necessarily make sense...

If you're implementing relief or parallax mapping then you need height-map data anyway, and it's a simple enough step to use a Sobel filter to avoid the need for a normal map entirely. Conceptually it's a bit cleaner (storing just source data is nicer than source+derived), but you're still just talking about a balance between 9 texture reads of one texture versus 8+1 reads over 2 textures (plus the additional storage cost)... so it's not quite a clear-cut decision and depends on where your constraints and priorities lie.
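Something like this, off the top of my head (untested sketch - heightMap, texelSize and bumpScale are placeholder names):

float3 SobelNormal(sampler2D heightMap, float2 uv, float2 texelSize, float bumpScale)
{
    // sample the 8 neighbours of the current texel
    float tl = tex2D(heightMap, uv + float2(-1, -1) * texelSize).r;
    float  t = tex2D(heightMap, uv + float2( 0, -1) * texelSize).r;
    float tr = tex2D(heightMap, uv + float2( 1, -1) * texelSize).r;
    float  l = tex2D(heightMap, uv + float2(-1,  0) * texelSize).r;
    float  r = tex2D(heightMap, uv + float2( 1,  0) * texelSize).r;
    float bl = tex2D(heightMap, uv + float2(-1,  1) * texelSize).r;
    float  b = tex2D(heightMap, uv + float2( 0,  1) * texelSize).r;
    float br = tex2D(heightMap, uv + float2( 1,  1) * texelSize).r;

    // Sobel gradients along u and v
    float du = (tr + 2 * r + br) - (tl + 2 * l + bl);
    float dv = (bl + 2 * b + br) - (tl + 2 * t + tr);

    // gradient of the height field -> tangent-space normal
    return normalize(float3(-du * bumpScale, -dv * bumpScale, 1));
}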

hth
Jack

Thanks for the answers... I've got one last question... would it make sense to send normals and tangents without their Y coordinates? If they're normalized it should be fairly simple to extract those values... and it would require ~15% less data...

I'm mainly asking because I haven't seen anyone do it... and I'm wondering why...

Compressing 3D vectors down to 2D is a perfectly valid optimization; you just have to realise one potential issue...

Reconstructing the Y coordinate would be Y = ±sqrt(1 - X² - Z²), and by definition the square root can be positive or negative. Therefore you have to be careful, as you could get the opposite vector to the one you really want [wink]

Storing normal maps in a 2-channel format can be an easy win, because you know that in tangent space the normal should have a +Z component, making it trivial to determine Z from the X and Y stored in the texture...
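e.g. the reconstruction is only a couple of instructions in the pixel shader (sketch; normalMap and uv are placeholders):

// two channels stored as [0,1]; bias back to [-1,1]
float2 xy = tex2D(normalMap, uv).rg * 2 - 1;

// tangent-space Z is known to be positive, so take the positive root
// (saturate guards against precision pushing the operand negative)
float3 n = float3(xy, sqrt(saturate(1 - dot(xy, xy))));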

hth
Jack

I'm just curious where you're getting numbers like "25% less data". You should really be using compressed geometry whenever you can - that will save you quite a bit of VB size right away... then you can make better decisions about where it's worth removing components.

TBN data is an especially good candidate for compression, because the range is extremely limited. We use 4 bytes for each vector (not each component!), so our entire TBN matrix fits in 12 bytes of vertex data. I've thought about computing one of those vectors in the shader, but I'd only be saving 4 bytes out of 32-40 (our typical vertex sizes).
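For example, with a UBYTE4N-style vertex declaration (one common scheme; not necessarily the exact layout described above), the decode in the shader is just a scale and bias:

float3 DecodeVector(float4 enc)
{
    // each component arrives as [0,1] from the 4-byte normalized stream;
    // bias back to [-1,1] (the 4th byte is spare or holds other data)
    return normalize(enc.xyz * 2 - 1);
}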

There is another reason to precompute your full texture axes rather than calculating them in the shader... if your texture UV coordinates are effectively mirrored, your calculated binormal will point the wrong way, so your "bumps" will look like dips instead. For this reason I think it might be best to precompute, but perhaps attempt other compression.

Yes, mirroring is a problem, but it's solvable. The simplest solution is to store a single extra bit somewhere in your vertex stream (you can often steal a bit from some other quantity), and use it to encode a ±1 value which is multiplied in after you've reconstructed the N×T vector.
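In shader terms it's a one-liner (sketch; assumes the sign has been packed into tangent.w):

// tangent.w carries +1 or -1 to fix up mirrored UVs
float3 binormal = cross(normal, tangent.xyz) * tangent.w;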

There is also an interesting article in ShaderX5 about computing both the T and B vectors inside the pixel shader, using the ddx and ddy instructions. The fastest version (with some quality trade-off) was something like 14 instructions. The interesting side effect of doing things that way is that you have fewer potential discontinuities in your vertex data, so you might have to split fewer verts.
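I don't have the exact listing to hand, but a derivative-based reconstruction goes something along these lines (untested sketch in the same spirit; not the ShaderX5 code itself):

// p  = interpolated world/view-space position
// n  = interpolated (normalized) vertex normal
// uv = interpolated texture coordinate
float3x3 CotangentFrame(float3 n, float3 p, float2 uv)
{
    // screen-space derivatives of position and texcoord
    float3 dp1  = ddx(p);
    float3 dp2  = ddy(p);
    float2 duv1 = ddx(uv);
    float2 duv2 = ddy(uv);

    // solve the 2x2 system relating the position and uv derivatives
    float3 dp2perp = cross(dp2, n);
    float3 dp1perp = cross(n, dp1);
    float3 t = dp2perp * duv1.x + dp1perp * duv2.x;
    float3 b = dp2perp * duv1.y + dp1perp * duv2.y;

    // scale-invariant normalisation
    float invmax = rsqrt(max(dot(t, t), dot(b, b)));
    return float3x3(t * invmax, b * invmax, n);
}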
