comedypedro

OpenGL Upgraded from GFX5200 to 9800 Pro, need help!


comedypedro    134
I've got myself a Radeon 9800 Pro to replace my old GeForce 5200. In my main development project (a space combat/exploration game) I draw the sun and lens flares with 2D textured quads with blending enabled, which worked fine on the old card, but on the ATI I can see the part of the square that should be transparent, i.e. the circle that I want to see and a dim square around it. I'll post code if needed, but my main concern is that I thought OpenGL was more or less standardised across different cards. I realise that they are different manufacturers and drivers etc., but I'm not using any fancy extensions or anything, just basic blending and textured quads, and there's a big difference in what each card produces. Please help, I've lost a little faith in OpenGL!

frob    44908
Quote:
Original post by comedypedro
I've got myself a Radeon 9800 Pro to replace my old GeForce 5200. In my main development project (a space combat/exploration game) I draw the sun and lens flares with 2D textured quads with blending enabled, which worked fine on the old card, but on the ATI I can see the part of the square that should be transparent, i.e. the circle that I want to see and a dim square around it. I'll post code if needed, but my main concern is that I thought OpenGL was more or less standardised across different cards. I realise that they are different manufacturers and drivers etc., but I'm not using any fancy extensions or anything, just basic blending and textured quads, and there's a big difference in what each card produces. Please help, I've lost a little faith in OpenGL!


It is standardized. What probably happened is that you were relying on a specific artifact of one card that the standard allows. It's like relying on a Microsoft VC++-specific or GCC-specific quirk that isn't quite a bug.

Most likely your texture or rendering code is not quite right; you just didn't notice it as much on the old card.

frob.

comedypedro    134
Thanks for your reply. I understand what you mean when you compare it to relying on MS VC++ behaving exactly like the C++ standard, but the output from the two cards is very different indeed, and I would have thought the OpenGL standardisation (if that's the right word) would be tighter. I mean, if I was running an ancient TNT card or something and went up to a state-of-the-art SLI setup I'd expect some differences, but the 5200 and the 9800 are same-generation boards.

I'm going to play around with it a bit to get it working, and I'm also going to keep the current build to try out on other machines/cards.

Thanks again for replying and any further comments are very welcome.

Kalidor    1087
This sounds like it could be something like having alternate texel centers turned on for one of your cards and not the other. Look in your drivers for something like that; I can't remember if it is in both ATI and NVIDIA drivers or just one of them. Having this turned on makes OpenGL sample from the center of a texel instead of the bottom-left corner. Depending on your wrap mode, this could end up making the texture blend with the border on one card and not on the other if you are using bilinear filtering.
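
If you want to rule the driver defaults out, you can also pin the sampling state down in code rather than relying on the control panel. A minimal sketch, assuming the flare texture is currently bound (the wrap and filter modes shown are just safe choices for this kind of quad, not taken from your code):

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);  // don't wrap around at the edges
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);     // plain bilinear, no mipmaps
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);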

Other than that I think we will need to see some code and possibly a screenshot.

comedypedro    134
I think I know what you're talking about and I'll look into the driver settings, but it's the whole square I can see, not just the border. I'm looking into posting a picture (can I upload a pic to GameDev or do I need to find somewhere else to host?) and then I'll post the code too.

Cheers

MARS_999    1627
What is your near/far plane set to? This might be a depth precision problem. Also, you might want to try disabling depth writes when rendering that texture and see if that helps...

glDepthMask(GL_FALSE);   // turn off depth writes for the flare pass
// ... render the blended quads here ...
glDepthMask(GL_TRUE);    // restore depth writes

Yann L    1802
Quote:
Original post by comedypedro
Thanks for your reply. I understand what you mean when you compare it to relying on MS VC++ behaving exactly like the C++ standard, but the output from the two cards is very different indeed, and I would have thought the OpenGL standardisation (if that's the right word) would be tighter.

That has nothing to do with standardisation. If you use an API in an incorrect way or rely on undefined behaviour, then the results will be unpredictable. The standard is very clear on what is defined and what is not. In the C example, this is like relying on the value of an uninitialised variable, and then complaining about the language being non-standardised when your code crashes.

Either your background alpha is not completely transparent (i.e. zero), and your 5200 allocated an RGBA5551 texture (essentially making it transparent by truncation) while the 9800 allocates an RGBA8 texture.

Or you have done something wrong with the texenv combine pipeline, and you have some non-zero alpha leaking in somewhere. Post your code.
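
If it is the first case, a quick sanity check is to force the background texels to exactly zero before the upload. A rough sketch, assuming the alpha texture data is one unsigned byte per texel (imageData, width and height are placeholder names, and the threshold is arbitrary):

// Force near-zero background alpha to exactly zero before glTexImage2D.
unsigned char *p = imageData;
for (int i = 0; i < width * height; ++i)
{
    if (p[i] < 8)   // "almost transparent" becomes fully transparent
        p[i] = 0;
}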

Also, did you enable fullscreen antialiasing on the 9800?

Quote:
Original post by Kalidor
This sounds like it could be something like having alternate texel centers turned on for one of your cards and not the other. Look in your drivers for something like that; I can't remember if it is in both ATI and NVIDIA drivers or just one of them. Having this turned on makes OpenGL sample from the center of a texel instead of the bottom-left corner

OpenGL samples from the center by default.

Quote:
Original post by MARS_999
What is your near/far plane set to? This might be a depth precision problem.

From his problem description, this is most definitely not a depth buffer problem.

frob    44908
Quote:
Original post by Yann L
That has nothing to do with standardisation. If you use an API in an incorrect way or rely on undefined behaviour, then the results will be unpredictable. The standard is very clear on what is defined and what is not. In the C example, this is like relying on the value of an uninitialised variable, and then complaining about the language being non-standardised when your code crashes.
...
From his problem description, this is most definitely not a depth buffer problem.


On the first bit of the quote, for the OP: there are three things the standard allows: DB, IB, and UB, that is, defined behavior, implementation-defined behavior, and undefined behavior.

Obviously the bug is relying on either IB or UB. And I agree that it isn't a depth buffer problem based on the description. It is almost certainly a texture rasterization issue dealing with the alpha values.

Without seeing code, though, it's just a guess.

My first thought was exactly what you mentioned. It might be from the conversion to the card's internal color format (the 5551 conversion), either through an incorrect internal format or source image format. Those conversions and supported formats are implementation defined. I doubt this would be the cause, though, since both cards are good at handling that if the rendering contexts are set up similarly.

My second thought was that it was a driver setting. Most video card drivers allow forcing certain values.

My third thought was that the app is incorrectly enumerating and obtaining the rendering context; perhaps something obtained in the first card's context was not specified as a requirement, so the second card didn't provide it because it didn't have to.

My fourth thought was that the programmer might be using the GL in a way that is undefined, giving bad results.

I'm sure I could come up with a bunch more, but without seeing the code, it's anybody's guess.

frob.

comedypedro    134
Thanks very much for all your replies, I was worried that no one would have a clue what I was on about. I'm now looking on this as a learning experience, and I guess things like this are why PC developers have such large testing and quality assurance teams! It also highlights a big advantage of consoles, i.e. if it works on one PlayStation it'll work on them all!

Right, so here are some photos of the problem:

Pic 1

Pic 2

And what I think/hope is the relevant code:

The texture code:

glPixelStorei(GL_UNPACK_ALIGNMENT, 4);
glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);
glPixelStorei(GL_UNPACK_SKIP_ROWS, 0);
glPixelStorei(GL_UNPACK_SKIP_PIXELS, 0);

glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA, iWidth, iHeight, 0, GL_ALPHA, GL_UNSIGNED_BYTE, pImageData);



Probably not needed, but here is the rendering code:

float fAlpha = 1 - fLength / 1280;
fAlpha -= 0.5;

glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_DST_ALPHA);

glDisable(GL_LIGHTING);
glDisable(GL_DEPTH_TEST);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, HaloTex);
glColor4f(1.0f, 0.6f, 0.6f, fAlpha);
pApp->Draw2dOrthoQuad(vPos.x + vScreenCentre.x, vPos.y + vScreenCentre.y, 50, 50, DRAW_PARAM_CENTRE);
glColor4f(0.5f, 1.0f, 0.5f, fAlpha);
pApp->Draw2dOrthoQuad(vPos2.x + vScreenCentre.x, vPos2.y + vScreenCentre.y, 70, 70, DRAW_PARAM_CENTRE);
glColor4f(0.5f, 0.5f, 1.0f, fAlpha - 0.1f);
pApp->Draw2dOrthoQuad(vPos3.x + vScreenCentre.x, vPos3.y + vScreenCentre.y, 150, 150, DRAW_PARAM_CENTRE);
glBindTexture(GL_TEXTURE_2D, SpotTex);
glColor4f(0.5f, 0.5f, 1.0f, fAlpha);
pApp->Draw2dOrthoQuad(vPos4.x + vScreenCentre.x, vPos4.y + vScreenCentre.y, 20, 20, DRAW_PARAM_CENTRE);
glColor4f(0.5f, 1.0f, 0.5f, fAlpha - 0.1f);
pApp->Draw2dOrthoQuad(vPos5.x + vScreenCentre.x, vPos5.y + vScreenCentre.y, 20, 20, DRAW_PARAM_CENTRE);
glColor4f(1.0f, 0.5f, 0.5f, fAlpha);
pApp->Draw2dOrthoQuad(vPos6.x + vScreenCentre.x, vPos6.y + vScreenCentre.y, 20, 20, DRAW_PARAM_CENTRE);

glBindTexture(GL_TEXTURE_2D, SunTex);

fAlpha -= 0.1f;

glPushMatrix();

glTranslatef(vLightPos.x, vLightPos.y, 0);

glColor4f(1.0f, 0.0f, 0.0f, fAlpha);
glRotatef(fLength, 0.0f, 0.0f, 1.0f);
pApp->Draw2dOrthoQuad(0, 0, 600 - fLength, 600 - fLength, DRAW_PARAM_CENTRE);
glColor4f(0.0f, 1.0f, 0.0f, fAlpha);
glRotatef(fLength * 0.5, 0.0f, 0.0f, 1.0f);
pApp->Draw2dOrthoQuad(0, 0, 600 - fLength, 600 - fLength, DRAW_PARAM_CENTRE);
glColor4f(0.0f, 0.0f, 1.0f, fAlpha);
glRotatef(fLength * 0.2, 0.0f, 0.0f, 1.0f);
pApp->Draw2dOrthoQuad(0, 0, 600 - fLength, 600 - fLength, DRAW_PARAM_CENTRE);

glPopMatrix();



Any help or suggestions are very welcome, and thanks again.

Fingers_    410
Hmm ... glBlendFunc(GL_SRC_ALPHA, GL_DST_ALPHA);

Are you sure your framebuffer has an alpha component? What value do you clear it to? (And why not just use GL_ONE instead?)
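
For flares the usual choice is plain additive blending, which doesn't need destination alpha at all. A minimal sketch of the state setup (not your code, just an illustration):

glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE);   // source scaled by its alpha, added onto the framebuffer
// ... draw the flare quads here ...
glDisable(GL_BLEND);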

comedypedro    134
OK, it's been suggested on opengl.org that I replace

glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA, .......

with

glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA8, .....

This forces lazy-ass ATI cards to use 8-bit precision. I have a hunch that this will solve the problem; I'll post again when I check it.
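
Spelled out against the texture code I posted above, the change is just the internal format argument:

glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA8, iWidth, iHeight, 0, GL_ALPHA, GL_UNSIGNED_BYTE, pImageData);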

Quote:
Original post by Fingers_
Hmm ... glBlendFunc(GL_SRC_ALPHA, GL_DST_ALPHA);

Are you sure your framebuffer has an alpha component? What value do you clear it to? (And why not just use GL_ONE instead?)


You're probably right, I added those two lines from memory after I cut and pasted the rest of the code. My brain is pickled from this and from college, sorry!

_the_phantom_    11250
Quote:
Original post by comedypedro
OK, it's been suggested on opengl.org that I replace

glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA, .......

with

glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA8, .....

This forces lazy-ass ATI cards to use 8-bit precision. I have a hunch that this will solve the problem; I'll post again when I check it.


This is not really 'lazy-ass ATI cards'; this is a case of the driver trying to maximise speed over quality. If you need a certain precision then you should ask for it, and don't assume the driver is always going to do what you want. I'm pretty certain that under the spec the driver is allowed to do what it likes with regard to certain texture formats when you aren't explicit about it.
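
If you want to see what the driver actually allocated, you can query it back after the upload. A small sketch, assuming the texture is currently bound (the printf is just for illustration and needs <cstdio>):

GLint internalFormat = 0, alphaBits = 0;
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_INTERNAL_FORMAT, &internalFormat);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_ALPHA_SIZE, &alphaBits);
printf("internal format 0x%x, %d alpha bits\n", internalFormat, alphaBits);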

comedypedro    134
Yeah, I know the driver's doing what it thinks is best, I was just joking!

Although sacrificing quality to get something done quickly could be a definition of laziness... ;)

You do make an interesting point though; I'll experiment with the quality settings in the driver and see what happens.

Lopez    122
Just a suggestion, but try using clamp to border when creating your texture. I had a problem similar to this a while back, and that seemed to solve it.
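
Something along these lines (just a sketch; GL_CLAMP_TO_BORDER needs OpenGL 1.3 or the ARB_texture_border_clamp extension, and the border colour here is set to transparent black):

const GLfloat border[4] = { 0.0f, 0.0f, 0.0f, 0.0f };
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_BORDER);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_BORDER);
glTexParameterfv(GL_TEXTURE_2D, GL_TEXTURE_BORDER_COLOR, border);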

Ysaneya    1383
Why are you using the alpha buffer to do a lens flare effect?

Do you realize that the ALPHA internal format of your textures has nothing to do with the dst-alpha blend mode?

Y.

comedypedro    134
Quote:
Original post by Ysaneya
Why are you using the alpha buffer to do a lens flare effect?

Do you realize that the ALPHA internal format of your textures has nothing to do with the dst-alpha blend mode?

Y.


I explained that above: I added that line from memory after I cut and pasted the code. I was stressed at the time and got it wrong; the destination factor should be GL_ONE.

MARS_999    1627
If you load an RGBA texture and store your transparency in the alpha channel, then use this code to do your transparency...


glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);


Granted, turn on your alpha test and blending, and remember to turn them off afterwards...
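
A rough sketch of the state setup (the alpha test threshold is just an example, not from your code):

glEnable(GL_ALPHA_TEST);
glAlphaFunc(GL_GREATER, 0.05f);                      // discard nearly transparent fragments
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
// ... draw the textured quads here ...
glDisable(GL_BLEND);
glDisable(GL_ALPHA_TEST);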


