
Zbuffer issues on nVidia Cards(zip file link included)



I've been working on a graphics framework on and off for several months, and decided to send a sample to my friends to test. All of those using nVidia cards got an OpenGL "Invalid Operation" error and showed z-testing problems. It runs fine on my Radeon 9600 Pro, and OpenGL reports no errors. So I would like nVidia users to help me test it. Also, if anyone has experienced such a problem exclusively on nVidia hardware, please tell me what I'm doing wrong; I've looked through my code and can't find any problems with it.

PS: If any of you have GLIntercept, please help me test it with your nVidia card so that I can work out where the problem is.

Download (the Del key rotates the camera around the model).

It seems you compiled it in debug mode, since it wants MSVCP71D.dll. I don't have that, and you only included the release versions of the runtime :(

Hmmm... the program doesn't run on my machine at all. The MainCode window comes up, but it's all black, and I get the "MainCode.exe has encountered a problem and needs to close" dialog asking if I want to send an error report to MS. I'm running (or trying to run) this on my Dell Inspiron 500m under Windows XP; the graphics card is one of those onboard Intel ones.

Ok, now what are your zNear and zFar clipping plane values set to?

Since each vertex is projected into the canonical view volume, where Z can take on a value between -1 and +1, near and far planes that are too far apart will cost you floating-point precision, which might be responsible for the artifacts evident in the screenshot posted above.
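
For what it's worth, here is a minimal sketch of the kind of projection setup being described, with a conservative near/far ratio. The SetupProjection name and the specific fovY/zNear/zFar values are only illustrative, not taken from the framework being tested:

#include <windows.h>
#include <GL/gl.h>
#include <GL/glu.h>

// Illustrative projection setup: most depth precision is concentrated near
// the near plane, so raising zNear helps far more than lowering zFar.
void SetupProjection(int width, int height)
{
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();

    const double fovY  = 45.0;
    const double zNear = 1.0;     // avoid tiny values such as 0.001
    const double zFar  = 1000.0;  // a near:far ratio around 1:1000 is usually safe

    gluPerspective(fovY, (double)width / (double)height, zNear, zFar);

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
}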


This is with my heavily overclocked 6800GT using the 77.77 drivers. The bottom-right corner is just Fraps, an FPS monitoring program I usually have running.

Dunno if the face is supposed to look like that (not a result of the overclocking; I checked). One thing I found odd was that vsync didn't kick in: I have my card set to force vsync globally instead of application-controlled, so it should have capped at 75 FPS rather than 4254. No idea why that's happening. Dunno if any of that helps, good luck. I did not get any errors, however.

It runs in a window, so vsync can't kick in there.

To me it looks like the Z buffer isn't even on and the polygons are being drawn in the wrong order. The black thing you see on the person's face is part of the hair.

I'm dead sure that's because your depth testing is off. Try glDepthMask(GL_TRUE) and/or glEnable(GL_DEPTH_TEST) followed by glDepthFunc(GL_LEQUAL), called just before rendering your mesh.

EDIT: Interesting, I just fired up gDEBugger and, well, your depth tests/masks are all enabled. A few things:
  • It's a little odd that your depth func is GL_LESS; have you tried GL_LEQUAL?

  • Your GL_DEPTH_BITS is 0... it seems the depth buffer is likely not part of the requested pixel format. Check this! (A quick runtime check along these lines is sketched below.)
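
As a reference, here is a minimal sketch of the kind of runtime check and depth-state setup being suggested above. It assumes a legacy fixed-function context, and the CheckAndEnableDepth name plus the fprintf reporting are illustrative rather than part of the original framework:

#include <cstdio>
#include <windows.h>
#include <GL/gl.h>

void CheckAndEnableDepth()
{
    // If the pixel format gave us no depth buffer at all, enabling the
    // depth test cannot help; the window/pixel-format code must be fixed.
    GLint depthBits = 0;
    glGetIntegerv(GL_DEPTH_BITS, &depthBits);
    if (depthBits == 0)
        std::fprintf(stderr, "Warning: context has 0 depth bits!\n");

    glEnable(GL_DEPTH_TEST);
    glDepthMask(GL_TRUE);
    glDepthFunc(GL_LEQUAL);  // GL_LESS also works; GL_LEQUAL is more forgiving
                             // with coplanar or multipass geometry
}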


Cheers,
-Danu

Thanks silvermace, I'll look into my window code, recompile it, and post it up here for testing.

In my code I'm requesting a 16-bit depth buffer with no stencil buffer, but my system gives me a 24-bit buffer. Somehow, nVidia seems to default to no depth buffer if it doesn't support the exact depth requested. What is the recommended depth-buffer bit depth on today's cards? Wouldn't 32 be ideal, since with 24 bits the additional 8 bits would go to waste?

Edit: Argh, it was a simple mistake: for windowed mode I was passing in 0 for the depth buffer. I've recompiled it requesting 24 bits. It should work now.
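
For anyone hitting the same thing, here is a sketch of a classic Win32/WGL pixel-format request with an explicit 24-bit depth buffer (and the 8-bit stencil most hardware pairs with it). The SetupPixelFormat name is hypothetical and the surrounding window code is omitted:

#include <windows.h>

// Illustrative WGL pixel-format request; passing 0 for cDepthBits is what
// left the nVidia context without a depth buffer in windowed mode.
bool SetupPixelFormat(HDC hdc)
{
    PIXELFORMATDESCRIPTOR pfd = {};
    pfd.nSize        = sizeof(pfd);
    pfd.nVersion     = 1;
    pfd.dwFlags      = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
    pfd.iPixelType   = PFD_TYPE_RGBA;
    pfd.cColorBits   = 32;
    pfd.cDepthBits   = 24;   // ask for 24 explicitly instead of 0 or 16
    pfd.cStencilBits = 8;
    pfd.iLayerType   = PFD_MAIN_PLANE;

    const int format = ChoosePixelFormat(hdc, &pfd);
    if (format == 0)
        return false;

    return SetPixelFormat(hdc, format, &pfd) != FALSE;
}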

Quote:
Original post by GamerSg
Edit: Argh, it was a simple mistake: for windowed mode I was passing in 0 for the depth buffer. I've recompiled it requesting 24 bits. It should work now.

How come that works on ATI cards, though?

Quote:
Original post by Daggett
Quote:
Original post by GamerSg
Edit: Argh, it was a simple mistake: for windowed mode I was passing in 0 for the depth buffer. I've recompiled it requesting 24 bits. It should work now.

How come that works on ATI cards, though?

Sub-standard (only just) OpenGL drivers. There are many interesting tid-bits with ATI's GL drivers; search the forums and you'll see what I mean.

Top three which I noticed this week:
• wglMakeCurrent has a massive memory leak (this one is actually quite old, but it still hasn't been fixed)

• glDrawRangeElements in GL_SELECT mode performs poorly with mid-to-high poly counts

• this interesting (default?) behaviour
