
nVidia bug on texture depth?


Hi,
I've noticed a possible bug on nVidia cards. I'm developing a game that handles 3D textures, and when I need to use a layer from a 3D texture I compute the coordinate with a formula like this:
[CODE]z = 1.0f/textureDepth * layer[/CODE]
On my home computer (an ATI Radeon 4800 series) this formula works without problems and renders the layer I want, but on nVidia (and also on Intel HD 3000) it doesn't. The problem can be worked around by editing the formula:
[CODE]z = 1.0f/textureDepth * layer + 0.00001[/CODE]
Has anyone noticed this before? I can't find anything about it on GameDev or on Google...

EDIT: This problem happens when the texture depth is an odd number. Edited by Retsu90
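For illustration, here is a minimal C sketch (the helper name and values are hypothetical, not from the post) of how a normalized W coordinate is commonly mapped back to a layer index under nearest filtering; it shows why layer / textureDepth sits exactly on a texel boundary while (layer + 0.5) / textureDepth sits in a texel center:
[CODE]
#include <math.h>
#include <stdio.h>

/* Illustrative nearest-neighbor lookup: map a normalized coordinate w in [0,1)
   to a layer index of a 3D texture with `depth` layers. Real hardware may round
   slightly differently, which is exactly why boundary values are fragile. */
static int layer_from_w(float w, int depth)
{
    return (int)floorf(w * (float)depth);
}

int main(void)
{
    const int depth = 7;                                /* odd depth, as in the report */
    for (int layer = 0; layer < depth; ++layer) {
        float w_edge   = (float)layer / depth;          /* sits on a texel boundary */
        float w_center = ((float)layer + 0.5f) / depth; /* sits in a texel center   */
        printf("layer %d -> edge picks %d, center picks %d\n",
               layer, layer_from_w(w_edge, depth), layer_from_w(w_center, depth));
    }
    return 0;
}
[/CODE]
With an odd depth, layer / depth (for layer > 0) is never exactly representable in binary floating point, so the computed coordinate ends up a hair to one side of the texel edge and different drivers can round it into different layers.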

Different video-card chip vendors handle errors differently, so one driver can be more tolerant of your errors than another; still, it is most likely an error on your side. :)

What happens when textureDepth or layer is 0? In that case you would end up with 1.0f/0.0f, which is invalid. Adding a small epsilon like 0.00001 will only work as long as textureDepth * layer doesn't become negative. Edited by Ashaman73
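As a sketch of that advice (hypothetical helper, not from the thread), validating the depth once on the CPU side avoids hiding a division by zero behind an epsilon in the shader:
[CODE]
/* Hypothetical helper: compute the normalized W coordinate for a layer,
   refusing to divide by a zero or negative depth. */
static float layer_to_w(int layer, int textureDepth)
{
    if (textureDepth <= 0)
        return 0.0f;    /* or signal an error; dividing would yield inf/NaN */
    return (float)layer / (float)textureDepth;
}
[/CODE]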

Possible floating-point precision problem - it's not a bug, just that some (older?) NVIDIA drivers will optimize shader code down to 16-bit FP precision if their compiler thinks it can get away with it.

Try using "layer / textureDepth" instead - it's mathematically equivalent but should preserve precision better. Edited by mhagain
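To make the precision point concrete, here is a small C sketch (a rough approximation of FP16 rounding, not actual driver behaviour) showing how a quotient stored at half precision can floor to the wrong layer for an odd depth:
[CODE]
#include <math.h>
#include <stdio.h>

/* Crude FP16 emulation: keep ~11 significant bits (1 implicit + 10 stored),
   ignoring denormals and overflow. Enough to illustrate the rounding effect. */
static float to_half_precision(float x)
{
    int e;
    float m = frexpf(x, &e);              /* x = m * 2^e, 0.5 <= |m| < 1 */
    m = roundf(m * 2048.0f) / 2048.0f;    /* round the mantissa to 11 bits */
    return ldexpf(m, e);
}

int main(void)
{
    const int depth = 7, layer = 3;       /* odd depth, as in the report */
    float w_full = (float)layer / depth;
    float w_half = to_half_precision(w_full);
    printf("float: w = %.9f -> layer %d\n", w_full, (int)floorf(w_full * depth));
    printf("half:  w = %.9f -> layer %d\n", w_half, (int)floorf(w_half * depth));
    return 0;
}
[/CODE]
In this sketch the full-precision coordinate lands in layer 3, while the half-precision version rounds down just far enough that the floor falls into layer 2.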

Hold up.

What you probably intended was:
[CODE]z = 1.0f/textureDepth * (layer+0.5)[/CODE]

Because texture samples should be taken at the center of the texels. If you're already taking this into account in "layer", never mind; carry on.
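A one-line sketch of that correction in C (hypothetical helper name), for contrast with the edge version above:
[CODE]
/* Hypothetical helper: W coordinate for the CENTER of a layer, so nearest
   filtering cannot fall across a texel boundary. E.g. layer 3 of 7 gives 0.5. */
static float layer_center_w(int layer, int textureDepth)
{
    return ((float)layer + 0.5f) / (float)textureDepth;
}
[/CODE]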

If textureDepth == 0, you're gonna have a bad time.

BTW, don't you mean:
[CODE]z = 1.0f/(textureDepth + 0.00001) * layer[/CODE]
(note the parentheses and the change in where the epsilon is applied)

And also... that's not enough information. Which GeForce GPU did you try? Are those variables all float, or half?
For example, the GeForce 8000 series will convert to float, but the GeForce 6 & 7 series will respect the 'half' type. Halfs overflow much faster than floats, so the division can hit infinity with surprisingly not-so-close-to-zero values.
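As a rough back-of-the-envelope check in C (assuming IEEE-754 half, whose largest finite value is 65504):
[CODE]
#include <stdio.h>

int main(void)
{
    /* The largest finite FP16 value is 65504, so 1.0/x already overflows to
       infinity in half precision once x drops below roughly 1/65504. */
    printf("1/x overflows half precision for x < %g\n", 1.0 / 65504.0);  /* ~1.5e-05 */
    return 0;
}
[/CODE]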

Also, if you're using Cg: it wrongly allows you to write just one output (i.e. return a float), while PS 3.0 strictly says all pixel shaders must return a float4 value per render target (which may come up when you write to the depth texture). Not writing to all outputs is undefined and will cause weird results on Intel cards.
Check whether the DX debug runtimes have something to say.

(layer + .5f) / textureDepth resolved the problem! It's a good idea to take the Z in the middle of the texel!
For the record, I'm using a GeForce 620M.
There's no possibility that textureDepth is 0, thanks to some checks (textureDepth is a private member of my class). Edited by Retsu90
