Jump to content

  • Log In with Google      Sign In   
  • Create Account

ZFighting on ATI, perfect on NVIDIA


Old topic!
Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.

  • You cannot reply to this topic
13 replies to this topic

#1 Plerion   Members   -  Reputation: 368

Like
0Likes
Like

Posted 28 March 2013 - 03:25 AM

Hello

 

I am currently rendering water and i have discovered some issues with z-fighting on ATI. I use glPolygonOffset on the water rendering to avoid z-fighting for the water.

glPolygonOffset(1.1f, 4.0f);
Pipeline::render(gLiquidGeometry, gTexInput);
glPolygonOffset(0, 0);

 

While on NVIDIA water looks perfect without any artifacts like in the following picture for ATI it is completely messed up. Not only do i get weird artifacts but it also "moves" around all the time, you might see that if you move between the 3 pictures which are taken from different camera angles.

 

NVIDIA:

51540b0cb5900_Water_OK.jpg

 

ATI:

51540c2a8cdfc_Water_BAD1.jpg

51540c2a8c93e_Water_BAD2.jpg

51540c2a8cb71_Water_BAD3.jpg

 

So i wondered: What could be the reason for that? Does anyone know any issues with that?

 

Greetings

Plerion


Edited by Plerion, 28 March 2013 - 03:26 AM.


Sponsor:

#2 Hodgman   Moderators   -  Reputation: 31843

Like
0Likes
Like

Posted 28 March 2013 - 04:26 AM

How do you create your z-buffer?



#3 C0lumbo   Crossbones+   -  Reputation: 2497

Like
1Likes
Like

Posted 28 March 2013 - 05:15 AM

This isn't really answering your question but might help you sidestep the problem. I find that the most portable and hassle-free way of z-biasing is to switch to a slightly different projection matrix, typically, I push the near Z out by a small amount.



#4 Plerion   Members   -  Reputation: 368

Like
0Likes
Like

Posted 28 March 2013 - 05:25 AM

Hello C0lumbo

 

I tested that before and got the same problem, working for NVIDIA, not working on my ATI card, sadly.

 

But on a side note, it doesnt matter if i use some anti-zfighting method or not, on the NVIDIA card i never ever have any zfighting at all even on nearly coplanar triangles...

 

@Hodgman:

Which part do you mean? I create the z-buffer in the pixel format with 24 bit depth and 8 bit stencil.

 

PS:
I already had a lot of problems with my ATI card (AMD HD 6990) and OpenGL. The driver had several issues and all that could go wrong went wrong.

 

Greetings

Plerion



#5 Matias Goldberg   Crossbones+   -  Reputation: 3697

Like
3Likes
Like

Posted 28 March 2013 - 10:28 AM

Post G80 hardware, all NVIDIA cards treat half as float, while ATI doesn't. So if you've got a "half" in your shader, make sure it's not related to depth calculation.

Also, on NVIDIA hardware, a D16 depth buffer is just an aliased D32. In ATI isn't.
Use Parallel Nsight & PerfStudio to determine which depth buffer you're actually getting.

In other words, there are multiple places where if you ask the card to give you half precision you will get full, where ati honours your request. The problem is, if you are expecting full precision results...

IIRC debugging shaders in Pix will do half calculations internally as floats, so you won't see some of the overflows or other precision artifacts in pix either. Sucks.

#6 Plerion   Members   -  Reputation: 368

Like
0Likes
Like

Posted 28 March 2013 - 10:54 AM

Hello Matias Goldberg

 

Thank you very much for your hints. Im currently trying to get GPU PerfStudio 2 to work with my application but so far the furthest ive got is "Connecting..." without any notable change....

 

/EDIT:
Ah, seems like the usual issue: Im running on Win8 and nothing works there...

 

Greetings

Plerion


Edited by Plerion, 28 March 2013 - 10:57 AM.


#7 mhagain   Crossbones+   -  Reputation: 8278

Like
1Likes
Like

Posted 28 March 2013 - 10:57 AM

glPolygonOffset is allowed to be implementation-dependent and shouldn't be considered a general purpose "z-fighting fix".  See http://www.opengl.org/sdk/docs/man/xhtml/glPolygonOffset.xml

 

units

Is multiplied by an implementation-specific value to create a constant depth offset. The initial value is 0.

 

So you probably don't have a bug, just conformant (but unwanted and annoying) behaviour. Best to construct your geometry so that it doesn't z-fight in the first place (admittedly not always possible).


Edited by mhagain, 28 March 2013 - 11:13 AM.

It appears that the gentleman thought C++ was extremely difficult and he was overjoyed that the machine was absorbing it; he understood that good C++ is difficult but the best C++ is well-nigh unintelligible.


#8 Plerion   Members   -  Reputation: 368

Like
0Likes
Like

Posted 28 March 2013 - 12:42 PM

Hello mhagain

 

I disabled all z-fighting measures in my program but still i have the same result, NVIDIA OK, ATI/AMD unbearable. So i guess that its as Matias suspected a z-buffer issue. But sadly i have no chance to get GPU PerfStudio 2 to run or give any output at all and so i have no idea what z-buffer my ATI-Card is using. I also dont know of another method to get that information.

 

Greetings

Plerion



#9 Matias Goldberg   Crossbones+   -  Reputation: 3697

Like
1Likes
Like

Posted 28 March 2013 - 01:21 PM

GPU PerfStudio uses a network connection to connect to your program which is acting as a server (provided you launched your program by drag 'n dropping your exe to GPUPerfServer.exe).

Check that your Firewall isn't blocking GPUPerfServer.exe & your application too for incoming connections.

#10 Plerion   Members   -  Reputation: 368

Like
0Likes
Like

Posted 28 March 2013 - 01:29 PM

Yes, i used the manual that comes with GPU PerfStudio 2. My application starts with the server, the client detects that a server is running with OpenGL. Next i press the pause button in the client, a window pops up "Capturing frame" as caption and "Connecting..." as content. And thats how it remains, no change (The server receives like 10 messages from the client then it halts.)



#11 Matias Goldberg   Crossbones+   -  Reputation: 3697

Like
1Likes
Like

Posted 28 March 2013 - 11:24 PM

Bummer. Sometimes these programs get stuck on something weird we do on our end. Try to see if you can hook PerfStudio to a simple hello world.
If it can't, then it's something driver related or OS

#12 Plerion   Members   -  Reputation: 368

Like
0Likes
Like

Posted 30 March 2013 - 04:09 AM

Hi again

 

i decided to work on my NVIDIA graphics card as long as i cant GPU PerfStudio to work as changing the depth buffer wont interfer with that work. I also realized that my problem seems to bigger than i thought. I added my first models to the scene and there everything is even worse, i already added some tweaking with glPolygonOffset which added a lot more pseudo-precision but the results are still way worse than my previous version of the same graphics data with DirectX. Here i have an image:

5156b7ff5a14a_obj.jpg

 

The red line shows the range in which the model flickers. Where it as now is the minimum and it "moves out of the ground" until it reaches the red line if i pan the camera left and right or move it. This is a huge bummer as im only a few units away from the object (my zNear is 0.5 and zFar is 500) and its already that huge. It feels like im having some sort of 8 bit depth buffer...

 

My pixelformat looks like that:

mPixelFormat.cAlphaBits = mPixelFormat.cRedBits = mPixelFormat.cBlueBits = mPixelFormat.cGreenBits = 8;
mPixelFormat.cColorBits = 32;
mPixelFormat.cDepthBits = 24;
mPixelFormat.cStencilBits = 8;
mPixelFormat.dwFlags = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
mPixelFormat.nSize = sizeof(mPixelFormat);
mPixelFormat.iPixelType = PFD_TYPE_RGBA;

 

/EDIT:
Sadly i still am not able to determine what depth buffer is used.... on nvidia i can get parallel nsight, but it wont work as im using VS 2012 and nsight will support VS 2012 in a few months :|

 

Greetings

Plerion


Edited by Plerion, 30 March 2013 - 05:26 AM.


#13 TheChubu   Crossbones+   -  Reputation: 4766

Like
0Likes
Like

Posted 30 March 2013 - 08:05 AM

For some reason its really difficult to set color/alpha/depth bits of the display in OpenGL (Java + LWJGL wrapper). Half the configurations won't work (probably some relation I don't know).

 

That particular configuration (8 bit alpha, 8 bit stencil, 32 bit color, 24 bit depth) works on my end (OpenGL 3.3 core context, GTX560 Ti).


"I AM ZE EMPRAH OPENGL 3.3 THE CORE, I DEMAND FROM THEE ZE SHADERZ AND MATRIXEZ"

 

My journals: dustArtemis ECS framework and Making a Terrain Generator


#14 mhagain   Crossbones+   -  Reputation: 8278

Like
0Likes
Like

Posted 31 March 2013 - 05:05 AM

On Windows you can use DescribePixelFormat to see exactly what you're getting.

It appears that the gentleman thought C++ was extremely difficult and he was overjoyed that the machine was absorbing it; he understood that good C++ is difficult but the best C++ is well-nigh unintelligible.





Old topic!
Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.



PARTNERS