Graphics glitch on Intel integrated graphics

Started by scottrick49
4 comments, last by swiftcoder 14 years, 10 months ago
Hi, I was testing my current project on some different machines to help ferret out problems, and I found one. Here's a picture of a satellite being rendered correctly on the debug screen in my project:

[screenshot: satellite rendered correctly]

All is well! However, as I zoom out more and more in game, I start to see artifacts on it!

[screenshot: artifact on the satellite]

I have two other machines, one with a GeForce card and one with an ATI card, and they do not have the problem. I feel like it's a problem with the depth buffer, but I don't know why I wouldn't see it on the other machines as well. Or could it be a problem with the integrated Intel graphics?
scottrick49
Looks like z-fighting indeed. What depth buffer precision does the app use with that Intel chip?
If I was helpful, feel free to rate me up ;) If I wasn't and you feel like rating me down, please let me know why!
Quote:What depth buffer precision does the app use with that Intel chip?


I'm not sure how to check this. I'm not changing any of the OpenGL setup/configuration based on the video card; is this entirely card-dependent?
scottrick49
To see if it is z-fighting (I'm not 100% certain), try gluPerspective(X, X, near * 10.0, far);
i.e. make the near clip plane 10x larger.
Try glGetIntegerv to find out how many depth bits you actually have.
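A rough sketch of both checks in fixed-function OpenGL; the field of view, aspect ratio, and clip distances below are placeholder values, not the project's actual settings:

GLint depthBits = 0;
glGetIntegerv(GL_DEPTH_BITS, &depthBits);   // how much depth precision the current context really has
printf("depth bits: %d\n", depthBits);

// Most depth precision is packed near the near plane, so if pushing the
// near plane out 10x makes the artifacts disappear, it was z-fighting.
const double nearPlane = 1.0, farPlane = 10000.0;  // placeholder distances
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(60.0, 4.0 / 3.0, nearPlane * 10.0, farPlane);
glMatrixMode(GL_MODELVIEW);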
Sig: http://glhlib.sourceforge.net
an open source GLU replacement library. Much more modern than GLU.
float matrix[16], inverse_matrix[16];
glhLoadIdentityf2(matrix);                       // start from the identity matrix
glhTranslatef2(matrix, 0.0, 0.0, 5.0);           // translate along +Z
glhRotateAboutXf2(matrix, angleInRadians);       // rotate about the X axis
glhScalef2(matrix, 1.0, 1.0, -1.0);              // flip the Z axis
glhQuickInvertMatrixf2(matrix, inverse_matrix);  // compute the inverse
glUniformMatrix4fv(uniformLocation1, 1, GL_FALSE, matrix);
glUniformMatrix4fv(uniformLocation2, 1, GL_FALSE, inverse_matrix);
Quote:Original post by scottrick49
Quote:What depth buffer precision does the app use with that Intel chip?
Not sure how to check this? I'm not changing any of the OpenGL setup / configuration based on the video card, or is this entirely card dependent?
You request a specific number of depth bits when you create the OpenGL context, so the exact method varies depending on your windowing toolkit.

If you don't request a specific number of bits, the driver will pick a default, and the Intel driver may be defaulting lower than the ATI. The Intel card should support a higher bit depth though, so try requesting one. For instance, my Intel X3100 (on a Mac) supports both 16 and 24 bit depth, and appears to default to 16.
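As a minimal sketch (assuming SDL 1.2 as the windowing toolkit; the thread doesn't say which one the project actually uses), you would ask for 24 depth bits before creating the window, then verify what the driver really handed back:

#include <SDL/SDL.h>
#include <SDL/SDL_opengl.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    SDL_Init(SDL_INIT_VIDEO);

    // Request a 24-bit depth buffer *before* the GL context is created.
    SDL_GL_SetAttribute(SDL_GL_DEPTH_SIZE, 24);
    SDL_SetVideoMode(800, 600, 0, SDL_OPENGL);

    // Verify what the driver actually provided; it may silently fall back.
    GLint depthBits = 0;
    glGetIntegerv(GL_DEPTH_BITS, &depthBits);
    printf("depth buffer bits: %d\n", depthBits);

    SDL_Quit();
    return 0;
}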

Tristam MacDonald. Ex-BigTech Software Engineer. Future farmer. [https://trist.am]

This topic is closed to new replies.
