nVidia bug on texture depth?

#1 Retsu90   Members   


Posted 08 October 2012 - 05:38 AM

I've noticed a possible bug on nVidia cards. I'm developing a game that handles 3D textures, and when I need to use a layer from a 3D texture, I compute the z coordinate with a formula like this:
z = 1.0f/textureDepth * layer
On my home computer (an ATI Radeon 4800 series) this formula works without problems and renders the layer I want, but on nVidia (and also on Intel HD 3000) it doesn't. It can be worked around by editing the formula:
z = 1.0f/textureDepth * layer + 0.00001
Has anyone noticed this before? I can't find anything about it on GameDev.net or on Google...

EDIT: This problem happens when the texture depth is an odd number.

Edited by Retsu90, 08 October 2012 - 05:40 AM.

#2 Ashaman73   Members   


Posted 08 October 2012 - 06:07 AM

Different video card vendors handle errors differently, so one driver may be more robust to your errors than another; still, it is most likely an error on your side.

What happens when textureDepth is 0? In that case you would compute 1.0f/0.0f, which is invalid. Adding a small epsilon like 0.00001 will work as long as textureDepth does not become negative.

Edited by Ashaman73, 08 October 2012 - 06:09 AM.




#3 mhagain   Members   


Posted 08 October 2012 - 06:08 AM

Possible floating-point precision problem - it's not a bug, just that some (older?) NVIDIA drivers will optimize shader code down to 16-bit FP precision if their compiler thinks it can get away with it.

Try using "layer / textureDepth" instead - it's mathematically equivalent but should preserve precision better.

Edited by mhagain, 08 October 2012 - 06:09 AM.


#4 Geometrian   Members   


Posted 08 October 2012 - 09:15 AM

Hold up.

What you probably intended was:
z = 1.0f/textureDepth * (layer+0.5)

Because texture samples should be taken at the centers of texels. If you're already accounting for this in "layer", never mind; carry on.

#5 Matias Goldberg   Members   


Posted 08 October 2012 - 06:24 PM

If textureDepth == 0, you're gonna have a baaad time.

BTW, don't you mean this?
z = 1.0f/(textureDepth + 0.00001) * layer
(note the parentheses and the change of order)

And also... not enough information. Which GeForce GPU did you try? Are those variables all float, or half?
For example, the GF 8000 series will convert to float, but the GF 6 & 7 series will respect the 'half' qualifier. Halfs overflow much faster than floats, hence you'll reach infinity through the division with surprisingly not-so-close-to-zero values.

Also, if you're using Cg: it wrongly allows you to write to just one output (i.e. return a single float) while PS 3.0 strictly says all pixel shaders must return a float4 value per render target (which may matter when you write to the depth texture). Not writing to all outputs is undefined and will cause weird results on Intel cards.
Check whether the DX debug runtimes have something to say.

#6 Retsu90   Members   


Posted 11 October 2012 - 01:08 PM

(layer+.5f) / textureDepth resolved the problem! Taking z in the middle of the texel is a good idea!
For the record, I'm using a GeForce 620M.
There is no possibility that textureDepth is 0, thanks to some checks (textureDepth is a private member of my class).

Edited by Retsu90, 11 October 2012 - 01:08 PM.
