OpenGL / ATI GLSL Color issue

Started by Frederick · 2 comments, last by zedz 14 years, 10 months ago
Hi! At work I am experiencing a difference in behaviour between the NVIDIA and ATI implementations of GLSL: somehow the color vertex attribute is not transferred to the graphics card correctly. I assume it has something to do with the RGBA32 format. To clarify what I mean, here are two screenshots (sorry, it is just an unspectacular 2D GL framework graphics test).

How it should look: Free Image Hosting at www.ImageShack.us

How it looks on ATI (PCI-E Sapphire Radeon HD 3870 with the newest Catalyst): so only the colors go wrong, right?

I am binding the color array with this:

glEnableClientState(GL_COLOR_ARRAY);
glBindBuffer(GL_ARRAY_BUFFER, buffer->getHardwareId());
glColorPointer(4, GL_UNSIGNED_BYTE, sizeof(Color), (GLvoid*)0);

The vertex shader looks like this:

varying vec4 fragmentColor;

void main()
{
    gl_Position = ftransform();
    fragmentColor = gl_Color;
}

and the fragment shader like this:

varying vec4 fragmentColor;

void main()
{
    gl_FragColor = fragmentColor;
}

It can't get any easier! I have no idea what causes this difference in behaviour, and I also get no error messages. I can't use floats for the color values, but I guess that would work, because the position and texture coordinate attributes work fine on ATI.

Does anybody know about this NVIDIA vs. ATI issue?

Thanks a lot,
Frederick
Hard to say without more info, but I'm guessing you're not supplying the right data (NVIDIA is failing as well, it just happens to look correct by coincidence).

sizeof(Color) is the stride between pieces of data; perhaps you mean 0, especially since you've also written (GLvoid*)0.

Try it first with standard vertex arrays, and once you've got that working, try VBOs.
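For reference, a minimal sketch of what the corrected call could look like, assuming the buffer really is a tightly packed array of 4-byte RGBA colors (buffer->getHardwareId() is your own wrapper, everything else is standard GL):

// Tightly packed colors: 4 unsigned bytes per vertex, no gaps in between.
// A stride of 0 tells GL the data is tightly packed; sizeof(Color) would
// only be equivalent as long as Color really is exactly 4 bytes.
glBindBuffer(GL_ARRAY_BUFFER, buffer->getHardwareId());
glEnableClientState(GL_COLOR_ARRAY);
glColorPointer(4, GL_UNSIGNED_BYTE, 0, (GLvoid*)0);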
Hi zedz,

thank you very much for taking the time to look at my problem. The information was sparse, but you were absolutely right: that was the mistake. When I wrote the code (nearly a year ago) I must have confused the concept of "stride" with the size of the complete vertex structure. I remember now that I fixed it in my private code some time ago, but not in the codebase at work.

It is so kind of you to look it over; I would never have found it by myself, because I assumed the GL code was working. I guess it has been running like that for a year now. Shame on me =)

The funniest part of the whole thing was my Color structure.

Actually it is a class, like this:

class Color
{
public:
    inline Color();
    inline Color(GLubyte r, GLubyte g, GLubyte b);

    inline Color& operator++();

private:
    GLubyte r, g, b, a;
};

At some point I realized that operator++ (which is needed to create consecutive color-picking IDs) may overflow in a long session, so I extended the class like this:

[...]

private:
    GLubyte r, g, b, a;
    bool overflow; // <--- OUCCHHH
};


And that's why it worked. Magic coincidence. I had scrambled my memory layout without knowing it, and it still worked. NVIDIA must have some magic under the hood or something.
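To illustrate what happened (a reconstruction for this post, not the exact code from work): with only the four GLubyte members, sizeof(Color) is 4, which coincidentally equals the size of one tightly packed RGBA color, so passing it as the stride still worked. The extra bool grows the class to 5 bytes (or more, depending on the compiler), and the stride silently stops matching the data in the buffer:

#include <cstdio>

typedef unsigned char GLubyte; // stand-in so this compiles without the GL headers

class Color
{
private:
    GLubyte r, g, b, a; // 4 bytes: matches one packed RGBA color
    bool overflow;      // pushes sizeof(Color) to 5 (or more with padding)
};

int main()
{
    // Any code that passes sizeof(Color) as a vertex-array stride now
    // steps through the buffer with the wrong spacing.
    std::printf("sizeof(Color) = %zu\n", sizeof(Color));
    return 0;
}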

I had repeated the stride misconception with the vertex positions and texture coordinates too, and guess what: it also worked, purely by coincidence, because there were other mistakes compensating for it. Unbelievable!
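For completeness, here is a rough sketch of how the stride is actually meant to be used; the Vertex struct and its fields are purely illustrative, not the real layout from our codebase, and it assumes the GL headers plus <cstddef> (for offsetof) are included and a buffer object named vbo already exists:

// Hypothetical interleaved vertex layout, for illustration only.
struct Vertex
{
    GLfloat x, y;       // position
    GLfloat u, v;       // texture coordinates
    GLubyte r, g, b, a; // color
};

// Interleaved data: the stride is the size of the whole vertex,
// and each pointer gets the byte offset of its own field.
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);
glVertexPointer(2, GL_FLOAT, sizeof(Vertex), (GLvoid*)offsetof(Vertex, x));
glTexCoordPointer(2, GL_FLOAT, sizeof(Vertex), (GLvoid*)offsetof(Vertex, u));
glColorPointer(4, GL_UNSIGNED_BYTE, sizeof(Vertex), (GLvoid*)offsetof(Vertex, r));

// Separate, tightly packed arrays: the stride can simply be 0.
glColorPointer(4, GL_UNSIGNED_BYTE, 0, (GLvoid*)0);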

Thank you so much. You have literally saved me a lot of time and trouble.

Feel hugged!

Thanks,
Frederick
>> I had scrambled my memory layout without knowing it, and it still worked. NVIDIA must have some magic under the hood or something.

NVIDIA is a lot more lenient about what it lets work.
Unfortunately for ATI, this gives the impression that their drivers are buggier / more broken than they really are, since it's human nature for people to put the blame on the machine where things don't work.

This topic is closed to new replies.
