I decided to work on my NVIDIA graphics card for as long as I can't get GPU PerfStudio to work, since changing the depth buffer setup won't interfere with that work. I also realized that my problem seems to be bigger than I thought. I added my first models to the scene, and there everything is even worse. I already added some tweaking with glPolygonOffset, which bought a lot more pseudo-precision, but the results are still far worse than my previous version of the same graphics data under DirectX. Here is an image:
The red line shows the range in which the model flickers. Where it is now is the minimum, and it "moves out of the ground" toward the red line when I pan the camera left and right or move it. This is a huge bummer: I'm only a few units away from the object (my zNear is 0.5 and zFar is 500) and the flicker range is already that large. It feels like I'm working with some sort of 8-bit depth buffer...
My pixel format looks like this:
mPixelFormat.nSize = sizeof(mPixelFormat);
mPixelFormat.nVersion = 1; // required; leaving this 0 can make SetPixelFormat misbehave
mPixelFormat.dwFlags = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
mPixelFormat.iPixelType = PFD_TYPE_RGBA;
mPixelFormat.cColorBits = 32;
mPixelFormat.cAlphaBits = mPixelFormat.cRedBits = mPixelFormat.cBlueBits = mPixelFormat.cGreenBits = 8;
mPixelFormat.cDepthBits = 24;
mPixelFormat.cStencilBits = 8;
Sadly I still am not able to determine what depth buffer is actually being used... On NVIDIA I could get Parallel Nsight, but it won't work because I'm using VS 2012, and Nsight won't support VS 2012 for another few months :|
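One way to check what the driver actually granted, without any external tool, is to ask Windows and GL directly: DescribePixelFormat fills a PIXELFORMATDESCRIPTOR with the real values of the format that was set, and in a compatibility-profile context glGetIntegerv(GL_DEPTH_BITS) reports the current depth buffer size. A sketch (the `hdc` handle name is assumed; use your own device context whose format was already set):

```cpp
#include <windows.h>
#include <GL/gl.h>
#include <cstdio>

// Print the color/depth/stencil bits the driver actually granted,
// which may differ from what was requested via ChoosePixelFormat.
void printActualPixelFormat(HDC hdc)
{
    PIXELFORMATDESCRIPTOR pfd = {};
    int fmt = GetPixelFormat(hdc);                     // index that was set on this DC
    DescribePixelFormat(hdc, fmt, sizeof(pfd), &pfd);  // fill pfd with the real values
    std::printf("color: %d  depth: %d  stencil: %d\n",
                pfd.cColorBits, pfd.cDepthBits, pfd.cStencilBits);

    // With a current compatibility-profile GL context this also works
    // (GL_DEPTH_BITS is deprecated in core profiles):
    GLint depthBits = 0;
    glGetIntegerv(GL_DEPTH_BITS, &depthBits);
    std::printf("GL_DEPTH_BITS: %d\n", depthBits);
}
```

If this reports something like 16 depth bits despite requesting 24, the driver fell back to a different format, which would explain the flickering.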
Edited by Plerion, 30 March 2013 - 05:26 AM.