Upgraded from GFX5200 to 9800 Pro, need help!

Started by comedypedro
16 comments, last by MARS_999 18 years, 4 months ago
I've got myself a Radeon 9800 Pro to replace my old GeForce 5200. In my main development project (a space combat/exploration game) I draw the sun and lens flares with 2D textured quads and blending enabled. This worked fine on the old card, but on the ATI I can see the part of the quad that should be transparent, i.e. the circle I want is there but there's also a dim square around it. I'll post code if needed, but my main concern is that I thought OpenGL was more or less standardised across different cards. I realise they're from different manufacturers with different drivers etc, but I'm not using any fancy extensions or anything, just basic blending and textured quads, and there's a big difference in what each card produces. Please help, I've lost a little faith in OpenGL!
Quote:Original post by comedypedro
I've got myself a Radeon 9800 Pro to replace my old GeForce 5200. In my main development project (a space combat/exploration game) I draw the sun and lens flares with 2D textured quads and blending enabled. This worked fine on the old card, but on the ATI I can see the part of the quad that should be transparent, i.e. the circle I want is there but there's also a dim square around it. I'll post code if needed, but my main concern is that I thought OpenGL was more or less standardised across different cards. I realise they're from different manufacturers with different drivers etc, but I'm not using any fancy extensions or anything, just basic blending and textured quads, and there's a big difference in what each card produces. Please help, I've lost a little faith in OpenGL!


It is standardized. What probably happened is that you were relying on a specific artifact of one card that the standard happens to allow. It's like relying on a Microsoft VC++-specific or GCC-specific quirk that isn't quite a bug.

Most likely your texture or rendering code is not quite right; you just didn't notice it as much on the old card.

frob.
Thanks for your reply. I understand what you mean when you compare it to relying on MS VC++ behaving exactly like the C++ standard, but the output from the two cards is very different indeed, and I would have thought the OpenGL standardisation (if that's the right word) would be tighter. I mean, if I was running an ancient TNT card or something and went up to a state-of-the-art SLI setup I'd expect some differences, but the 5200 and the 9800 are same-generation boards.

I'm going to play around with it a bit to get it working, and I'm also going to keep the current build to try out on other machines/cards.

Thanks again for replying and any further comments are very welcome.
This sounds like it could be something like having alternate texel centers turned on for one of your cards and not the other. Look in your drivers for something like that; I can't remember if it is in both ATI and NVIDIA drivers or just one of them. Having this turned on makes OpenGL sample from the center of a texel instead of the bottom-left corner, and depending on your wrap mode this could end up making the texture blend with its border on one card and not the other if you are using bilinear filtering.
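
For example, you could pin the wrap mode and filtering down explicitly so bilinear filtering can never pull the border colour in; a quick sketch along these lines (flareTex is just a placeholder for whatever texture object you bind):

glBindTexture(GL_TEXTURE_2D, flareTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);  // never sample past the edge texels
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);     // plain bilinear filtering
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);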

Other than that I think we will need to see some code and possibly a screenshot.
I think I know what you're talking about and I'll look into the driver settings, but it's the whole square I can see, not just the border. I'm looking into posting a picture (can I upload a pic to GameDev or do I need to find somewhere else to host?) and then I'll post the code too.

Cheers
What is your near/far plane set to? This might be a depth precision problem. Also you might want to try disabling depth writes when rendering that texture and see if that helps:

glDepthMask(GL_FALSE);   // turn off depth writes before drawing the blended quads
// ... draw the sun/flare quads here ...
glDepthMask(GL_TRUE);    // turn depth writes back on afterwards
Quote:Original post by comedypedro
Thanks for your reply. I understand what you mean when you compare it to relying on MS VC++ behaving exactly like the C++ standard, but the output from the two cards is very different indeed, and I would have thought the OpenGL standardisation (if that's the right word) would be tighter.

That has nothing to do with standardization. If you use an API incorrectly or rely on undefined behaviour, then the results will be unpredictable. The standard is very clear on what is defined and what is not. In the C example, this is like relying on the value of an uninitialized variable, and then complaining about the language being non-standardized when your code crashes.

Either your background alpha is not completely transparent (i.e. zero), and your 5200 allocated an RGBA5551 texture (essentially making it transparent by truncation) while the 9800 allocates an RGBA8 texture.

Or you have done something wrong with the texenv combine pipeline, and have some non-zero alpha leaking in somewhere. Post your code.
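
For the first case, you can take the choice away from the driver by asking for an explicit sized internal format when you upload the texture; roughly like this (iWidth, iHeight and pImageData are placeholders for your own image data, assumed to be 8-bit RGBA):

glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);   // plain modulate, no combine tricks
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8,                       // GL_RGBA8: full 8 bits of alpha on any card
             iWidth, iHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, pImageData);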

Also, did you enable fullscreen antialiasing on the 9800?

Quote:Original post by Kalidor
This sounds like it could be something like having alternate texel centers turned on for one of your cards and not the other. Look in your drivers for something like that; I can't remember if it is in both ATI and NVIDIA drivers or just one of them. Having this turned on makes OpenGL sample from the center of a texel instead of the bottom-left corner

OpenGL samples from the center by default.

Quote:Original post by MARS_999
What is your near/far plane set to? This might be a depth precision problem.

From his problem description, this is most definitely not a depth buffer problem.
Quote:Original post by Yann L
That has nothing to do with standardization. If you use an API incorrectly or rely on undefined behaviour, then the results will be unpredictable. The standard is very clear on what is defined and what is not. In the C example, this is like relying on the value of an uninitialized variable, and then complaining about the language being non-standardized when your code crashes.
...
From his problem description, this is most definitely not a depth buffer problem.


On the first bit of the quote, for the OP: there are three kinds of behavior the standard allows, DB, IB, and UB: defined behavior, implementation-defined behavior, and undefined behavior.

Obviously the bug is relying on either IB or UB. And I agree that it isn't a depth buffer problem based on the description. It is almost certainly a texture rasterization issue dealing with the alpha values.

Without seeing code, though, it's just a guess.

My first thought was exactly what you mentioned. It might be from the conversion to the card's internal color format (the 5551 conversion), either through an incorrect internal format or source image format. Those conversions and supported formats are implementation defined. I doubt this would be the cause, though, since both cards are perfectly capable of handling that if the rendering contexts are set up similarly.

My second thought was that it was a driver setting. Most video card drivers allow forcing certain values.

My third thought was that the app is incorrectly enumerating and obtaining the rendering context; perhaps something obtained in the first card's context was not specified as a requirement, so the second card didn't provide it because it didn't have to.
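
For example (Win32/WGL assumed, and the field values here are purely illustrative), spell out every requirement instead of taking whatever the first card happened to hand you:

PIXELFORMATDESCRIPTOR pfd = {0};
pfd.nSize      = sizeof(pfd);
pfd.nVersion   = 1;
pfd.dwFlags    = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
pfd.iPixelType = PFD_TYPE_RGBA;
pfd.cColorBits = 24;
pfd.cAlphaBits = 8;    // destination alpha bits; needed if blending ever reads GL_DST_ALPHA
pfd.cDepthBits = 24;
int iFormat = ChoosePixelFormat(hDC, &pfd);   // hDC: your window's device context
SetPixelFormat(hDC, iFormat, &pfd);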

My fourth thought was that the programmer might be using the GL in a way that is undefined, giving bad results.

I'm sure I could come up with a bunch more, but without seeing the code, it's anybody's guess.

frob.
Thanks very much for all your replies; I was worried that no one would have a clue what I was on about. I'm now looking on this as a learning experience, and I guess things like this are why PC developers have such large testing and quality assurance teams! It also highlights a big advantage of consoles, i.e. if it works on one PlayStation it'll work on them all!

Right, so here are some photos of the problem:

Pic 1

Pic 2

And what I think/hope is the relevant code:

The texture code:
glPixelStorei(GL_UNPACK_ALIGNMENT, 4);
glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);
glPixelStorei(GL_UNPACK_SKIP_ROWS, 0);
glPixelStorei(GL_UNPACK_SKIP_PIXELS, 0);
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA, iWidth, iHeight, 0, GL_ALPHA, GL_UNSIGNED_BYTE, pImageData);


Probably not needed, but here is the rendering code:
float fAlpha = 1 - fLength / 1280;
fAlpha -= 0.5;
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_DST_ALPHA);
glDisable(GL_LIGHTING);
glDisable(GL_DEPTH_TEST);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, HaloTex);
glColor4f(1.0f, 0.6f, 0.6f, fAlpha);
pApp->Draw2dOrthoQuad(vPos.x + vScreenCentre.x, vPos.y + vScreenCentre.y, 50, 50, DRAW_PARAM_CENTRE);
glColor4f(0.5f, 1.0f, 0.5f, fAlpha);
pApp->Draw2dOrthoQuad(vPos2.x + vScreenCentre.x, vPos2.y + vScreenCentre.y, 70, 70, DRAW_PARAM_CENTRE);
glColor4f(0.5f, 0.5f, 1.0f, fAlpha - 0.1f);
pApp->Draw2dOrthoQuad(vPos3.x + vScreenCentre.x, vPos3.y + vScreenCentre.y, 150, 150, DRAW_PARAM_CENTRE);
glBindTexture(GL_TEXTURE_2D, SpotTex);
glColor4f(0.5f, 0.5f, 1.0f, fAlpha);
pApp->Draw2dOrthoQuad(vPos4.x + vScreenCentre.x, vPos4.y + vScreenCentre.y, 20, 20, DRAW_PARAM_CENTRE);
glColor4f(0.5f, 1.0f, 0.5f, fAlpha - 0.1f);
pApp->Draw2dOrthoQuad(vPos5.x + vScreenCentre.x, vPos5.y + vScreenCentre.y, 20, 20, DRAW_PARAM_CENTRE);
glColor4f(1.0f, 0.5f, 0.5f, fAlpha);
pApp->Draw2dOrthoQuad(vPos6.x + vScreenCentre.x, vPos6.y + vScreenCentre.y, 20, 20, DRAW_PARAM_CENTRE);
glBindTexture(GL_TEXTURE_2D, SunTex);
fAlpha -= 0.1f;
glPushMatrix();
    glTranslatef(vLightPos.x, vLightPos.y, 0);
    glColor4f(1.0f, 0.0f, 0.0f, fAlpha);
    glRotatef(fLength, 0.0f, 0.0f, 1.0f);
    pApp->Draw2dOrthoQuad(0, 0, 600 - fLength, 600 - fLength, DRAW_PARAM_CENTRE);
    glColor4f(0.0f, 1.0f, 0.0f, fAlpha);
    glRotatef(fLength * 0.5, 0.0f, 0.0f, 1.0f);
    pApp->Draw2dOrthoQuad(0, 0, 600 - fLength, 600 - fLength, DRAW_PARAM_CENTRE);
    glColor4f(0.0f, 0.0f, 1.0f, fAlpha);
    glRotatef(fLength * 0.2, 0.0f, 0.0f, 1.0f);
    pApp->Draw2dOrthoQuad(0, 0, 600 - fLength, 600 - fLength, DRAW_PARAM_CENTRE);
glPopMatrix();


Any help or suggestions very welcome, and thanks again.
Sounds like some messed up pixel centers. Try checking the "use alternate pixel centers" box in the ATI driver control panel...

This topic is closed to new replies.
