
ZFighting on ATI, perfect on NVIDIA


Plerion    381

Hello

 

I am currently rendering water and I have discovered some z-fighting issues on ATI. I use glPolygonOffset on the water pass to avoid the z-fighting:

// Apply a depth offset to the water pass (only takes effect while GL_POLYGON_OFFSET_FILL is enabled).
glPolygonOffset(1.1f, 4.0f);
Pipeline::render(gLiquidGeometry, gTexInput);
glPolygonOffset(0, 0); // reset the offset for the rest of the scene

 

While on NVIDIA the water looks perfect, without any artifacts, as in the first picture below, on ATI it is completely messed up. Not only do I get weird artifacts, the pattern also "moves" around all the time; you can see that by comparing the three pictures, which were taken from different camera angles.

 

NVIDIA:

[Image: Water_OK.jpg]

ATI:

[Image: Water_BAD1.jpg]

[Image: Water_BAD2.jpg]

[Image: Water_BAD3.jpg]

 

So I wondered: what could be the reason for this? Does anyone know of any issues in this area?

 

Greetings

Plerion


C0lumbo    4411

This isn't really answering your question, but it might help you sidestep the problem. I find that the most portable and hassle-free way of z-biasing is to switch to a slightly different projection matrix; typically I push the near Z out by a small amount.
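Roughly like this (just a sketch of the idea, not code from a particular project; buildPerspective stands in for gluPerspective or whatever you use to build the projection matrix):

#include <cmath>

// Column-major OpenGL perspective matrix, same convention as gluPerspective.
void buildPerspective(float out[16], float fovyRadians, float aspect,
                      float zNear, float zFar)
{
    const float f = 1.0f / std::tan(fovyRadians * 0.5f);
    for (int i = 0; i < 16; ++i) out[i] = 0.0f;
    out[0]  = f / aspect;
    out[5]  = f;
    out[10] = (zFar + zNear) / (zNear - zFar);
    out[11] = -1.0f;
    out[14] = (2.0f * zFar * zNear) / (zNear - zFar);
}

// Normal pass:
//   buildPerspective(proj,       fovy, aspect, zNear,          zFar);
// Biased pass for the water: pushing the near plane out a tiny bit nudges
// every depth value written in that pass slightly toward the camera:
//   buildPerspective(projBiased, fovy, aspect, zNear * 1.001f, zFar);

Nothing here is implementation-specific, which is part of why it tends to be more portable than the units parameter of glPolygonOffset.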

Plerion    381

Hello C0lumbo

 

I tested that before and got the same problem: working on NVIDIA, not working on my ATI card, sadly.

 

On a side note, it doesn't matter whether I use any anti-z-fighting method or not; on the NVIDIA card I never ever get any z-fighting at all, even on nearly coplanar triangles...

 

@Hodgman:

Which part do you mean? I create the z-buffer in the pixel format with 24-bit depth and 8-bit stencil.

 

PS:
I have already had a lot of problems with my ATI card (AMD HD 6990) and OpenGL. The driver had several issues, and everything that could go wrong did go wrong.

 

Greetings

Plerion

Matias Goldberg    9576
On post-G80 hardware, all NVIDIA cards treat half as float, while ATI doesn't. So if you've got a "half" in your shader, make sure it's not involved in the depth calculation.

Also, on NVIDIA hardware a D16 depth buffer is just an aliased D32; on ATI it isn't.
Use Parallel Nsight & PerfStudio to determine which depth buffer you're actually getting.
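If you can't get those tools running, you can also ask the context directly (a quick sketch; GL_DEPTH_BITS queries the default framebuffer and is only available on compatibility/legacy contexts, and it reports what the context advertises rather than how the hardware stores it internally):

#include <windows.h>
#include <GL/gl.h>
#include <cstdio>

// Print the depth/stencil precision the current context actually has,
// rather than the precision that was requested in the pixel format.
void printDepthBufferInfo()
{
    GLint depthBits = 0, stencilBits = 0;
    glGetIntegerv(GL_DEPTH_BITS, &depthBits);
    glGetIntegerv(GL_STENCIL_BITS, &stencilBits);
    std::printf("depth bits: %d, stencil bits: %d\n", depthBits, stencilBits);
}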

In other words, there are multiple places where, if you ask the card for half precision, NVIDIA will give you full precision, while ATI honours your request. That's a problem if you are implicitly relying on full-precision results...

IIRC, debugging shaders in PIX does half calculations internally as floats, so you won't see some of the overflows or other precision artifacts in PIX either. Sucks.

Plerion    381

Hello Matias Goldberg

 

Thank you very much for your hints. I'm currently trying to get GPU PerfStudio 2 to work with my application, but so far the furthest I've got is "Connecting..." without any notable change...

 

/EDIT:
Ah, seems like the usual issue: I'm running on Win8 and nothing works there...

 

Greetings

Plerion


mhagain    13430

glPolygonOffset is allowed to be implementation-dependent and shouldn't be considered a general-purpose "z-fighting fix". See http://www.opengl.org/sdk/docs/man/xhtml/glPolygonOffset.xml

 

units: Is multiplied by an implementation-specific value to create a constant depth offset. The initial value is 0.

 

So you probably don't have a bug, just conformant (but unwanted and annoying) behaviour. Best to construct your geometry so that it doesn't z-fight in the first place (admittedly not always possible).


Plerion    381

Hello mhagain

 

I disabled all z-fighting measures in my program, but I still get the same result: NVIDIA OK, ATI/AMD unbearable. So I guess it is, as Matias suspected, a z-buffer issue. But sadly I have no way to get GPU PerfStudio 2 to run or produce any output at all, so I have no idea what z-buffer my ATI card is using. I also don't know of another method to get that information.

 

Greetings

Plerion

Plerion    381

Yes, I used the manual that comes with GPU PerfStudio 2. My application starts with the server, and the client detects that a server is running with OpenGL. Next I press the pause button in the client; a window pops up with "Capturing frame" as the caption and "Connecting..." as the content. And that's how it remains, no change (the server receives about 10 messages from the client, then it halts).

Plerion    381

Hi again

 

I decided to work on my NVIDIA graphics card for as long as I can't get GPU PerfStudio to work, since changing the depth buffer won't interfere with that work. I also realized that my problem seems to be bigger than I thought. I added my first models to the scene, and there everything is even worse. I already added some tweaking with glPolygonOffset, which bought a lot more pseudo-precision, but the results are still way worse than my previous version of the same graphics data with DirectX. Here is an image:

[Image: obj.jpg]

 

The red line shows the range in which the model flickers. Where it is now is the minimum, and it "moves out of the ground" until it reaches the red line when I pan the camera left and right or move it. This is a huge bummer, as I am only a few units away from the object (my zNear is 0.5 and zFar is 500) and the flickering is already that large. It feels like I am getting some sort of 8-bit depth buffer...
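To put a rough number on that feeling, here is a back-of-the-envelope sketch (my own approximation, assuming a standard fixed-point depth buffer and the projection above): one depth-buffer step at eye distance d covers roughly (zFar - zNear) * d^2 / (zFar * zNear * 2^bits) world units.

#include <cstdio>
#include <cmath>

// Approximate eye-space size of one depth-buffer step at distance d,
// for a fixed-point depth buffer with the given bit count.
double depthResolution(double zNear, double zFar, double d, int bits)
{
    return (zFar - zNear) * d * d / (zFar * zNear * std::pow(2.0, bits));
}

int main()
{
    // zNear = 0.5, zFar = 500, object roughly 5 units from the camera:
    std::printf("24-bit: %g units\n", depthResolution(0.5, 500.0, 5.0, 24)); // ~3e-6
    std::printf("16-bit: %g units\n", depthResolution(0.5, 500.0, 5.0, 16)); // ~8e-4
    return 0;
}

Even a 16-bit buffer should resolve well under a thousandth of a unit that close to the camera, so flickering over a visible range like this really does look like something other than ordinary near/far-plane pressure.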

 

My pixel format looks like this:

mPixelFormat.cAlphaBits = mPixelFormat.cRedBits = mPixelFormat.cBlueBits = mPixelFormat.cGreenBits = 8;
mPixelFormat.cColorBits = 32;
mPixelFormat.cDepthBits = 24;
mPixelFormat.cStencilBits = 8;
mPixelFormat.dwFlags = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
mPixelFormat.nSize = sizeof(mPixelFormat);
mPixelFormat.iPixelType = PFD_TYPE_RGBA;
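One thing worth checking on the Windows side (a quick sketch, not my exact code): ChoosePixelFormat only returns the closest available match, so DescribePixelFormat can be used to read back what the driver actually granted:

#include <windows.h>
#include <cstdio>

// Ask the driver which pixel format it really granted for the requested
// descriptor; the granted depth/stencil bits may differ from what was asked for.
void checkGrantedPixelFormat(HDC hdc, const PIXELFORMATDESCRIPTOR& requested)
{
    int index = ChoosePixelFormat(hdc, &requested);
    if (index == 0)
        return; // no matching format at all

    PIXELFORMATDESCRIPTOR granted = {};
    DescribePixelFormat(hdc, index, sizeof(PIXELFORMATDESCRIPTOR), &granted);

    std::printf("requested depth/stencil: %d/%d, granted: %d/%d\n",
                requested.cDepthBits, requested.cStencilBits,
                granted.cDepthBits, granted.cStencilBits);
}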

 

/EDIT:
Sadly I am still not able to determine what depth buffer is used... On NVIDIA I could use Parallel Nsight, but it won't work since I'm using VS 2012, and Nsight will only support VS 2012 in a few months :|

 

Greetings

Plerion


TheChubu    9447

For some reason it's really difficult to set the color/alpha/depth bits of the display in OpenGL (Java + LWJGL wrapper). Half of the configurations won't work (probably due to some relationship I don't know about).

 

That particular configuration (8-bit alpha, 8-bit stencil, 32-bit color, 24-bit depth) works on my end (OpenGL 3.3 core context, GTX560 Ti).

