nVidia bug on texture depth?

4 comments, last by Xeeynamo 11 years, 6 months ago
Hi,
I've noticed a possible bug on nVidia cards. I'm developing a game that handles 3D textures, and when I need to use a layer from a 3D texture, I use a formula like this:
z = 1.0f/textureDepth * layer
On my home computer this formula works without problems (I'm using an ATI Radeon 4800 series) and it renders the layer that I want, but on nVidia (and also on Intel HD 3000) it doesn't. The problem can be worked around by editing the formula:
z = 1.0f/textureDepth * layer + 0.00001
Has anyone noticed this before? I can't find anything about it on GameDev or on Google...

EDIT: This problem happens when the texture depth is an odd number
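For reference, a minimal CPU-side C sketch (not the actual game code; the depth of 7 is just an example odd value, per the EDIT) that prints what the two formulas above produce for each layer:

#include <stdio.h>

int main(void)
{
    /* Example value only: an odd texture depth, as in the EDIT above. */
    const int textureDepth = 7;

    for (int layer = 0; layer < textureDepth; ++layer)
    {
        /* Original formula that misbehaves on nVidia / Intel HD 3000. */
        float z        = 1.0f / textureDepth * layer;
        /* Workaround from the post: nudge the coordinate slightly. */
        float zEpsilon = 1.0f / textureDepth * layer + 0.00001f;

        printf("layer %d: z = %.8f, z + epsilon = %.8f\n", layer, z, zEpsilon);
    }
    return 0;
}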
Different video card vendors handle errors differently, so one driver could be more robust to your errors than others; still, it is most likely an error on your end. :)

What happens when textureDepth or layer is 0? In that case you would have 1.0f/0.0f, which is invalid. Adding a small epsilon like 0.00001 will work as long as textureDepth * layer doesn't become negative.
Possible floating-point precision problem - it's not a bug, just that some (older?) NVIDIA drivers will optimize shader code down to 16-bit FP precision if their compiler thinks it can get away with it.

Try using "layer / textureDepth" instead - it's mathematically equivalent but should preserve precision better.
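A quick way to check whether the ordering really changes the bits is a small C sketch like this (the depth of 7 and the loop are just example values):

#include <stdio.h>

int main(void)
{
    const int textureDepth = 7;   /* example odd depth */

    for (int layer = 0; layer < textureDepth; ++layer)
    {
        float a = 1.0f / textureDepth * layer;   /* reciprocal first, then multiply */
        float b = (float)layer / textureDepth;   /* single division */

        /* Mathematically equal, but after rounding they can differ by an
           ulp (e.g. layer 3 here, on typical IEEE-754 single precision). */
        printf("layer %d: %.9f vs %.9f (%s)\n",
               layer, a, b, a == b ? "same" : "differ");
    }
    return 0;
}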


Hold up.

What you probably intended was:
z = 1.0f/textureDepth * (layer+0.5)

Because texture samples should be in the center of the texels. If you're already taking this into account in "layer", never mind; carry on.
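To put numbers on the texel-centre point, here's a small C sketch (example depth of 7; plain CPU code, not shader code, so it only shows where the coordinates land, not what any particular GPU does with them):

#include <math.h>
#include <stdio.h>

int main(void)
{
    const int depth = 7;   /* example odd texture depth */

    for (int layer = 0; layer < depth; ++layer)
    {
        float zEdge   = (float)layer / depth;    /* boundary between slices  */
        float zCenter = (layer + 0.5f) / depth;  /* centre of slice 'layer'  */

        /* With nearest filtering the fetched slice is essentially
           floor(z * depth); an edge coordinate is one rounding error away
           from flipping to the neighbouring slice, while the centred one
           has half a texel of slack on either side. */
        printf("layer %d: edge -> slice %d (z*depth = %.7f), "
               "center -> slice %d (z*depth = %.7f)\n",
               layer,
               (int)floorf(zEdge * depth),   zEdge * depth,
               (int)floorf(zCenter * depth), zCenter * depth);
    }
    return 0;
}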

[size="1"]And a Unix user said rm -rf *.* and all was null and void...|There's no place like 127.0.0.1|The Application "Programmer" has unexpectedly quit. An error of type A.M. has occurred.
[size="2"]

If textureDepth == 0, you're gonna have a baaad time.

BTW, don't you mean:
z = 1.0f/(textureDepth + 0.00001) * layer
(note the parentheses and the change of order)

And also... not enough information. Which GeForce GPU did you try? Are those variables all float or half?
For example, the GeForce 8000 series will convert to float, but the GeForce 6 & 7 series will respect the 'half' variable. Halves overflow much faster than floats, hence you'll get to infinity through the division with surprisingly not-so-close-to-zero values.
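To see how quickly halves run out of range: the largest finite value in IEEE-754 binary16 ("half") is 65504, so a rough CPU-side check like this (example denominators only; it ignores rounding right at the boundary) shows which reciprocals would already be +inf in half precision:

#include <stdio.h>

int main(void)
{
    /* Largest finite value representable in IEEE-754 binary16 ("half"). */
    const double HALF_MAX = 65504.0;

    /* Example denominators: the reciprocal of anything smaller than
       1/65504 (about 1.5e-5) no longer fits in a half. */
    const double denominators[] = { 1.0, 0.01, 1e-4, 2e-5, 1e-5, 1e-6 };
    const int n = (int)(sizeof denominators / sizeof denominators[0]);

    for (int i = 0; i < n; ++i)
    {
        double d = denominators[i];
        double r = 1.0 / d;
        printf("1 / %-8g = %-10g -> %s in half precision\n",
               d, r, r > HALF_MAX ? "overflows to +inf" : "still finite");
    }
    return 0;
}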

Also, if you're using Cg: it wrongly allows you to write to just one output (i.e. return a float), while PS 3.0 strictly says all pixel shaders must return a float4 value per render target (which may happen when you write to the depth texture). Not writing to all outputs is undefined and will cause weird results on Intel cards.
Check whether the DX debug runtimes have something to say.
(layer + .5f) / textureDepth resolved the problem! It's a good idea to take the Z in the middle of the texel!
For the record, I'm using a GeForce 620M.
There's no possibility that textureDepth is 0, thanks to some checks (textureDepth is a private member of my class).

