clip() intrinsic on NVidia graphics cards

I'm currently developing my own graphics engine and decided to try it out on other graphics cards. I'm developing it on an AMD graphics card, where it works perfectly fine, but on my friend's NVidia card it went completely crazy.
Since I'm using a deferred renderer, I decided to just clip pixels whose alpha is below 0.5f. I decided not to implement actual transparent materials, since deferred renderers don't really support them (though I'll probably come up with a solution later in development).
But on his graphics card the clip() intrinsic clips pixels away seemingly at random. It even clips entirely different pixels each frame.

Here's what it looks like:
nvidia_clipping_issues.png

And here's the code (reduced to what's important):
[source lang="cpp"]PSOut PSMain(PSIn Input)
{
    PSOut Output;

    // Sample the albedo and discard the pixel if it is less than half opaque
    float4 albedo = AlbedoTexture.Sample(Linear, Input.Texcoord);
    clip(albedo.a - 0.5f);

    Output.Albedo = albedo;

    return Output;
}[/source]

I'll probably work around it by using alpha blending with only the values 0 and 1. But I'm interested in why this might be happening and whether there's still a way to use the clip() intrinsic.
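For reference, here's roughly what I mean by that fallback: a minimal, untested D3D11 sketch (assuming device and context are my existing ID3D11Device / ID3D11DeviceContext pointers) that enables standard SrcAlpha/InvSrcAlpha blending on the first render target. With alpha values of only 0 and 1 this acts like a hard cutout, though unlike clip() the fully transparent pixels still write depth.
[source lang="cpp"]// Rough sketch, not tested: plain alpha blending on render target 0.
D3D11_BLEND_DESC desc = {};
desc.RenderTarget[0].BlendEnable           = TRUE;
desc.RenderTarget[0].SrcBlend              = D3D11_BLEND_SRC_ALPHA;
desc.RenderTarget[0].DestBlend             = D3D11_BLEND_INV_SRC_ALPHA;
desc.RenderTarget[0].BlendOp               = D3D11_BLEND_OP_ADD;
desc.RenderTarget[0].SrcBlendAlpha         = D3D11_BLEND_ONE;
desc.RenderTarget[0].DestBlendAlpha        = D3D11_BLEND_ZERO;
desc.RenderTarget[0].BlendOpAlpha          = D3D11_BLEND_OP_ADD;
desc.RenderTarget[0].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;

ID3D11BlendState* blendState = nullptr;
if (SUCCEEDED(device->CreateBlendState(&desc, &blendState)))
{
    // Blend factor is unused by these blend modes; sample mask covers all samples.
    const float blendFactor[4] = { 0.0f, 0.0f, 0.0f, 0.0f };
    context->OMSetBlendState(blendState, blendFactor, 0xFFFFFFFF);
}[/source]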
I've seen similar artifacts in a case where the GPU was overheated. If this is not the case, I would suspect faulty GPU memory or a firmware bug.

Also, do verify (with PIX) that the alpha channel of the texture is actually filled correctly.
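If you don't have PIX at hand, a quick programmatic check can at least confirm that the texture format carries an alpha channel at all. A rough, untested sketch, assuming albedoSRV is the ID3D11ShaderResourceView* you bind as AlbedoTexture:
[source lang="cpp"]// Dump the format of the texture behind the albedo SRV; a format without
// stored alpha (or a texture loaded without alpha data) would make
// clip(albedo.a - 0.5f) behave unpredictably.
ID3D11Resource* resource = nullptr;
albedoSRV->GetResource(&resource);

ID3D11Texture2D* tex2D = nullptr;
if (SUCCEEDED(resource->QueryInterface(IID_PPV_ARGS(&tex2D))))
{
    D3D11_TEXTURE2D_DESC texDesc = {};
    tex2D->GetDesc(&texDesc);

    char msg[64];
    sprintf_s(msg, "Albedo texture format: %d\n", texDesc.Format);
    OutputDebugStringA(msg);

    tex2D->Release();
}
resource->Release();[/source]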

Niko Suni

...and check the depth/stencil state while you're at it.
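Setting it explicitly instead of relying on whatever happens to be bound rules out one more difference between the two machines. Something along these lines, as an untested sketch for a plain G-buffer pass (device and context being your usual D3D11 device and immediate context):
[source lang="cpp"]// Explicit depth/stencil state: standard "less" depth test, no stencil.
D3D11_DEPTH_STENCIL_DESC dsDesc = {};
dsDesc.DepthEnable    = TRUE;
dsDesc.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ALL;
dsDesc.DepthFunc      = D3D11_COMPARISON_LESS;
dsDesc.StencilEnable  = FALSE;

ID3D11DepthStencilState* dsState = nullptr;
if (SUCCEEDED(device->CreateDepthStencilState(&dsDesc, &dsState)))
{
    context->OMSetDepthStencilState(dsState, 0); // 0 = stencil reference value
}[/source]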

Niko Suni


[quote]...and check the depth/stencil state while you're at it.[/quote]

QFE'd/upvoted. Could be a good old-fashioned driver bug, too. If it works fine on the reference rasterizer, you've broken the driver somehow. Congrats! ;)
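Trying that is just a matter of switching the driver type at device creation; a minimal sketch (painfully slow, but deterministic), with the rest of your initialization left as it is:
[source lang="cpp"]// Create a device on the software reference rasterizer instead of the
// hardware driver. If the artifacts disappear here, the driver or the GPU
// itself becomes the prime suspect.
D3D_FEATURE_LEVEL featureLevel = {};
ID3D11Device* refDevice = nullptr;
ID3D11DeviceContext* refContext = nullptr;

HRESULT hr = D3D11CreateDevice(
    nullptr,                   // default adapter
    D3D_DRIVER_TYPE_REFERENCE, // reference rasterizer
    nullptr,                   // no software module
    0,                         // no creation flags
    nullptr, 0,                // default feature levels
    D3D11_SDK_VERSION,
    &refDevice, &featureLevel, &refContext);[/source]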
clb: At the end of 2012, the positions of jupiter, saturn, mercury, and deimos are aligned so as to cause a denormalized flush-to-zero bug when computing earth's gravitational force, slinging it to the sun.
Yea, the artifacts look like semi-random memory, which can simply be the contents of an uninitialized depth buffer (or corrupted memory in the worst case).

There may be some differences between the AMD and NVidia drivers regarding depth buffer validation. For example, one might allow different dimensions for the depth buffer and the render target, while the other does not. This particular trait is, however, well defined in the D3D spec and should work the same way on both.
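The easiest way to catch that kind of state problem is to create the device with the debug layer enabled and to make sure the depth buffer is actually cleared every frame. A rough sketch, assuming depthStencilView is the depth/stencil view you render with:
[source lang="cpp"]// Enable the debug layer so mismatched depth/render target dimensions and
// similar issues are reported instead of silently tolerated by one driver.
UINT flags = D3D11_CREATE_DEVICE_DEBUG; // pass as the Flags argument of D3D11CreateDevice

// Clear the depth/stencil buffer at the start of every frame so the artifacts
// cannot come from uninitialized depth memory.
context->ClearDepthStencilView(depthStencilView,
                               D3D11_CLEAR_DEPTH | D3D11_CLEAR_STENCIL,
                               1.0f, // depth clear value
                               0);   // stencil clear value[/source]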

Niko Suni

