NVidia and ATI standard inconsistencies?

2 comments, last by proanim 11 years, 4 months ago
I'm having trouble getting my (C++) OpenGL project to work on ATI cards. I've only got NVidia cards myself, so it's rather difficult to track down the exact point of failure. Sorry for the lack of code samples.

I'm only using OpenGL 3.3 core features, VBOs and that whole kit of stuff. There are no problems on any NVidia card I've tested so far, which is about 5 different models of varying age, some laptops, some desktops.
I'm using SDL 1.2 for input, window handling and context creation.

I've tested on two ATI Mobility Radeon cards and both failed to run my project properly.
The first one, a Radeon HD 5650, starts my project alright and renders most things. I'm using an R8UI texture and attaching it to a framebuffer for rendering directly to it. This seems to be failing on this ATI card: the feature using it simply fails silently, while all other 'usual' GL rendering works fine. I'm using the R8UI texture more or less as a bitmask. I am checking for framebuffer completeness and there are no other errors from GL.
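For reference, here's roughly how I set it up. This is a reconstructed sketch, not my exact code; the texture size and names are illustrative, and it assumes a valid GL 3.3 core context with GLEW already initialized:

```cpp
#include <GL/glew.h>

void setupMaskTarget()
{
    GLuint tex, fbo;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    // Integer internal formats require the *_INTEGER pixel format...
    glTexImage2D(GL_TEXTURE_2D, 0, GL_R8UI, 256, 256, 0,
                 GL_RED_INTEGER, GL_UNSIGNED_BYTE, NULL);
    // ...and integer textures may not use linear filtering;
    // getting either wrong can fail quietly on some drivers.
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, tex, 0);
    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
    {
        /* handle incomplete framebuffer */
    }
    // The fragment shader rendering into this must declare an unsigned
    // integer output, e.g.:  out uvec4 outMask;
}
```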

The second ATI card, a 7670M, won't even pass the test for GL 3.3, which I'm doing with GLEW, even though it's a GL 4.1 card!

glewExperimental = GL_TRUE; // lets older GLEW initialize in a core profile
if( glewInit() != GLEW_OK )
{
    cout << "GLEW init failed.";
    exit(EXIT_FAILURE);
}
glGetError(); // glewInit can leave a spurious GL_INVALID_ENUM behind
if( !GLEW_VERSION_3_3 )
{
    cout << "OpenGL v3.3 required.";
    exit(EXIT_FAILURE);
}

It goes without saying that all the GL 4.1 NVidia cards I've tested pass this check.

I've tried searching for documented differences between these two manufacturers, but I've failed to find anything. Is NVidia more lenient, so that according to the GL spec my code perhaps shouldn't work in its current state? Or is ATI patchy on less-used features such as unsigned-integer textures, at least on mobility/laptop cards? I do know that ATI is stricter with shader compilation, but my shaders compile successfully on the ATI cards (I'm doing all the checks).
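By "all the checks" I mean something along these lines (a sketch, assuming GLEW and a valid context; the helper name is just for illustration):

```cpp
#include <GL/glew.h>
#include <cstdio>

// Returns false and prints the driver's info log if compilation failed.
bool checkShaderCompiled(GLuint shader)
{
    GLint ok = GL_FALSE;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
    if (ok != GL_TRUE)
    {
        char log[1024];
        GLsizei len = 0;
        glGetShaderInfoLog(shader, sizeof(log), &len, log);
        fprintf(stderr, "shader compile failed: %.*s\n", (int)len, log);
        return false;
    }
    return true;
}
// A matching glGetProgramiv(program, GL_LINK_STATUS, ...) check
// runs after linking as well.
```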

Has anyone encountered similar, or any differences between NVidia and ATI regarding standard core OpenGL 3.3?
Yup, this happens. The general rule of thumb is that NV accepts things it shouldn't, ATI/AMD doesn't accept things it should, and Intel is bandit country. There are exceptions, of course, so that's a guideline rather than an absolute "way things are". (Worth noting that things are gradually getting better with all 3 vendors too, but the lack of up-to-date GL conformance tests doesn't really help much.)

So, maybe you're doing something that violates spec but your NV is letting it happen, or maybe you're conformant but the AMD/ATI is incorrectly failing. It's difficult to tell, and I don't think that troubleshooting such a one-off incident is going to be of long-term benefit to you. You really need to get hold of an AMD/ATI machine for yourself, otherwise you're going to run into more such issues in future.


Try something like gDEBugger to see if it can detect any usage errors.
Try using the compatibility profile instead of the core profile with ATI. When I tried using GLEW there were problems with the core profile, so maybe that's the case here. It has something to do with GLEW having bugs in its core-profile support.
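If GLEW's version macros are the suspect part, one workaround is to ask GL itself for the version instead of relying on GLEW's parsing. A sketch, assuming a context is already current:

```cpp
#include <GL/glew.h>

// GL_MAJOR_VERSION / GL_MINOR_VERSION queries exist since GL 3.0,
// so this works even where GLEW's version parsing goes wrong.
bool haveGL33()
{
    GLint major = 0, minor = 0;
    glGetIntegerv(GL_MAJOR_VERSION, &major);
    glGetIntegerv(GL_MINOR_VERSION, &minor);
    return major > 3 || (major == 3 && minor >= 3);
}
```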
