comedypedro

OpenGL Upgraded from GFX5200 to 9800 Pro, need help!


I've got myself a Radeon 9800 Pro to replace my old GeForce 5200. In my main development project (a space combat/exploration game) I draw the sun and lens flares with 2D textured quads and blending enabled. This worked fine on the old card, but on the ATI I can see the part of the square that should be transparent, i.e. the circle I want to see plus a dim square around it. I'll post code if needed, but my main concern is that I thought OpenGL was more or less standardised across different cards. I realise they are different manufacturers with different drivers etc., but I'm not using any fancy extensions or anything, just basic blending and textured quads, and there's a big difference in what each card produces. Please help, I've lost a little faith in OpenGL!!!

Quote:
Original post by comedypedro
My main concern is that I thought OpenGL was more or less standardised across different cards. I realise they are different manufacturers with different drivers etc., but I'm not using any fancy extensions or anything, just basic blending and textured quads, and there's a big difference in what each card produces.


It is standardized. What probably happened is that you were relying on a specific artifact of one card that the standard allows. It's like relying on a Microsoft VC++-specific or GCC-specific not-quite-bug.

Most likely your texture or rendering code is not quite right; you just didn't notice it as much on the old card.

frob.

Thanks for your reply. I understand what you mean when you compare it to relying on MS VC++ to behave exactly like the C++ standard, but the output from the two cards is very different indeed, and I would have thought the OpenGL standardisation (if that's the right word) would be tighter. I mean, if I was running an ancient TNT card or something and went up to a state-of-the-art SLI setup I'd expect some differences, but the 5200 and the 9800 are same-generation boards.

I'm going to play around with it a bit to get it working, and I'm also going to keep the current build to try out on other machines/cards.

Thanks again for replying and any further comments are very welcome.

This sounds like it could be something like having alternate texel centers turned on for one of your cards and not the other. Look in your drivers for something like that; I can't remember if it is in both ATI and NVIDIA drivers or just one of them. Having this turned on makes OpenGL sample from the center of a texel instead of the bottom-left corner. Depending on your wrap mode, this could end up making the texture blend with the border on one card and not on the other if you are using bilinear filtering.

Other than that I think we will need to see some code and possibly a screenshot.
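In the meantime, if it is a wrap/filtering thing, forcing clamped texture coordinates should make it obvious. A rough sketch (the texture name here is just a placeholder for whatever you bind; GL_CLAMP_TO_EDGE needs OpenGL 1.2 or later):

/* Clamp texture coordinates so bilinear filtering never pulls in texels
   from the wrapped-around opposite edge of the texture. */
glBindTexture(GL_TEXTURE_2D, flareTex);   /* placeholder texture name */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);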

I think I know what you're talking about and I'll look into the driver settings, but it's the whole square I can see, not just the border. I'm looking into posting a picture (can I upload a pic to GameDev or do I need to find somewhere else to host?) and then I'll post the code too.

Cheers

What are your near/far planes set to? This might be a depth precision problem. Also, you might want to try disabling depth buffer writes when rendering that texture and see if that helps...

glDepthMask(GL_FALSE);   /* turn off depth writes */
/* ... draw the flare/sun quads here ... */
glDepthMask(GL_TRUE);    /* turn depth writes back on */

Quote:
Original post by comedypedro
Thanks for your reply. I understand what you mean when you compare it to relying on MS VC++ to behave exactly like the C++ standard, but the output from the two cards is very different indeed, and I would have thought the OpenGL standardisation (if that's the right word) would be tighter.

That has nothing to do with standardization. If you use an API in an incorrect way or rely on undefined behaviour, then the results will be unpredictable. The standard is very clear on what is defined and what is not. In the C example, this is like relying on the value of an uninitialized variable and then complaining about the language being non-standardized when your code crashes.

Either your background alpha is not completely transparent (i.e. not exactly zero), and your 5200 allocated an RGBA5551 texture (essentially making it transparent by truncation) while the 9800 allocates an RGBA8 texture.

Or you have done something wrong with the texenv combine pipeline, and have some non-zero alpha leaking in somewhere. Post your code.
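
If you want to see which internal format the driver actually allocated, you can query the bound texture. A quick sketch:

GLint alphaBits = 0;
/* Ask the driver what it really allocated for level 0 of the currently
   bound texture; the answer is implementation dependent. */
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_ALPHA_SIZE, &alphaBits);
/* alphaBits should be 8 if you got full precision, less if the driver
   downgraded the texture. */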

Also, did you enable fullscreen antialiasing on the 9800?

Quote:
Original post by Kalidor
This sounds like it could be something like having alternate texel centers turned on for one of your cards and not the other. Look in your drivers for something like that; I can't remember if it is in both ATI and NVIDIA drivers or just one of them. Having this turned on makes OpenGL sample from the center of a texel instead of the bottom-left corner

OpenGL samples from the center by default.

Quote:
Original post by MARS_999
What is your near/far plane set to? This might be a depth precision problem.

From his problem description, this is most definitely not a depthbuffer problem.

Quote:
Original post by Yann L
That has nothing to do with standardization. If you use an API in an incorrect way or rely on undefined behaviour, then the results will be unpredictable. The standard is very clear on what is defined and what is not. In the C example, this is like relying on the value of an uninitialized variable and then complaining about the language being non-standardized when your code crashes.
...
From his problem description, this is most definitely not a depthbuffer problem.


On the first bit of the quote, for the OP: the standard allows three kinds of behavior: defined (DB), implementation-defined (IB), and undefined (UB).

Obviously the bug is relying on either IB or UB. And I agree that it isn't a depth buffer problem based on the description. It is almost certainly a texture rasterization issue dealing with the alpha values.

Without seeing code, though, it's just a guess.

My first thought was exactly what you mentioned. It might be from the conversion to the card's internal color format (the 5551 conversion), either through an incorrect internal format or source image format. Those conversions and supported formats are implementation defined. I doubt this would be the cause, though, since both cards are good at handling that if the rendering contexts are set up similarly.

My second thought was that it was a driver setting. Most video card drivers allow forcing certain values.

My third thought was that the app is incorrectly enumerating and obtaining the rendering context: perhaps something obtained in the first card's context was not specified as a requirement, so the second card didn't provide it because it didn't have to.

My fourth thought was that the programmer might be using the GL in a way that is undefined, giving bad results.

I'm sure I could come up with a bunch more, but without seeing the code, it's anybody's guess.

frob.

Thanks very much for all your replies, I was worried that no one would have a clue what I was on about. I'm now looking on this as a learning experience, and I guess things like this are why PC developers have such large testing and quality assurance teams!! It also highlights a big advantage of consoles, i.e. if it works on one PlayStation it'll work on them all!!

Right, so here are some photos of the problem:

Pic 1

Pic 2

And what I think/hope is the relevant code:

The texture code:

glPixelStorei(GL_UNPACK_ALIGNMENT, 4);
glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);
glPixelStorei(GL_UNPACK_SKIP_ROWS, 0);
glPixelStorei(GL_UNPACK_SKIP_PIXELS, 0);

glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
/* Upload as an alpha-only texture; the internal format is the unsized
   GL_ALPHA, so the driver chooses the precision. */
glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA, iWidth, iHeight, 0, GL_ALPHA, GL_UNSIGNED_BYTE, pImageData);



Probably not needed, but here is the rendering code:

float fAlpha = 1 - fLength / 1280;   /* fade the flares out as fLength grows */
fAlpha -= 0.5;

glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_DST_ALPHA);

glDisable(GL_LIGHTING);
glDisable(GL_DEPTH_TEST);
glEnable(GL_TEXTURE_2D);
/* Lens flare halos and spots, tinted and faded */
glBindTexture(GL_TEXTURE_2D, HaloTex);
glColor4f(1.0f, 0.6f, 0.6f, fAlpha);
pApp->Draw2dOrthoQuad(vPos.x + vScreenCentre.x, vPos.y + vScreenCentre.y, 50, 50, DRAW_PARAM_CENTRE);
glColor4f(0.5f, 1.0f, 0.5f, fAlpha);
pApp->Draw2dOrthoQuad(vPos2.x + vScreenCentre.x, vPos2.y + vScreenCentre.y, 70, 70, DRAW_PARAM_CENTRE);
glColor4f(0.5f, 0.5f, 1.0f, fAlpha - 0.1f);
pApp->Draw2dOrthoQuad(vPos3.x + vScreenCentre.x, vPos3.y + vScreenCentre.y, 150, 150, DRAW_PARAM_CENTRE);
glBindTexture(GL_TEXTURE_2D, SpotTex);
glColor4f(0.5f, 0.5f, 1.0f, fAlpha);
pApp->Draw2dOrthoQuad(vPos4.x + vScreenCentre.x, vPos4.y + vScreenCentre.y, 20, 20, DRAW_PARAM_CENTRE);
glColor4f(0.5f, 1.0f, 0.5f, fAlpha - 0.1f);
pApp->Draw2dOrthoQuad(vPos5.x + vScreenCentre.x, vPos5.y + vScreenCentre.y, 20, 20, DRAW_PARAM_CENTRE);
glColor4f(1.0f, 0.5f, 0.5f, fAlpha);
pApp->Draw2dOrthoQuad(vPos6.x + vScreenCentre.x, vPos6.y + vScreenCentre.y, 20, 20, DRAW_PARAM_CENTRE);

/* Sun: three rotated, tinted quads layered on top of each other */
glBindTexture(GL_TEXTURE_2D, SunTex);

fAlpha -= 0.1f;

glPushMatrix();

glTranslatef(vLightPos.x, vLightPos.y, 0);

glColor4f(1.0f, 0.0f, 0.0f, fAlpha);
glRotatef(fLength, 0.0f, 0.0f, 1.0f);
pApp->Draw2dOrthoQuad(0, 0, 600 - fLength, 600 - fLength, DRAW_PARAM_CENTRE);
glColor4f(0.0f, 1.0f, 0.0f, fAlpha);
glRotatef(fLength * 0.5, 0.0f, 0.0f, 1.0f);
pApp->Draw2dOrthoQuad(0, 0, 600 - fLength, 600 - fLength, DRAW_PARAM_CENTRE);
glColor4f(0.0f, 0.0f, 1.0f, fAlpha);
glRotatef(fLength * 0.2, 0.0f, 0.0f, 1.0f);
pApp->Draw2dOrthoQuad(0, 0, 600 - fLength, 600 - fLength, DRAW_PARAM_CENTRE);

glPopMatrix();



Any help or suggestions very welcome and thanks again

Hmm ... glBlendFunc(GL_SRC_ALPHA, GL_DST_ALPHA);

Are you sure your framebuffer has an alpha component? What value do you clear it to? (And why not just use GL_ONE instead?)
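
If you don't actually need destination alpha, plain additive blending is the usual way to layer flares; a minimal sketch:

glEnable(GL_BLEND);
/* Additive blending: each flare quad is scaled by its own alpha and added
   to whatever is already in the framebuffer, so no destination alpha
   channel is needed. */
glBlendFunc(GL_SRC_ALPHA, GL_ONE);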

OK, it's been suggested on opengl.org that I replace

glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA, .......

with

glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA8, .....

This forces lazy-ass ATI cards to use 8-bit precision. I have a hunch that this will solve the problem; I'll post again when I've checked it.
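
Plugged into the upload call from the code above, that would look something like this:

/* Same upload as before, but with an explicitly sized internal format so
   the driver can't silently pick a lower-precision one. */
glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA8, iWidth, iHeight, 0,
             GL_ALPHA, GL_UNSIGNED_BYTE, pImageData);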

Quote:
Original post by Fingers_
Hmm ... glBlendFunc(GL_SRC_ALPHA, GL_DST_ALPHA);

Are you sure your framebuffer has an alpha component? What value do you clear it to? (And why not just use GL_ONE instead?)


You're probably right, I added those two lines from memory after I cut and pasted the rest of the code. My brain is pickled from this and from college, sorry!

Quote:
Original post by comedypedro
OK, it's been suggested on opengl.org that I replace

glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA, .......

with

glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA8, .....

This forces lazy-ass ATI cards to use 8-bit precision. I have a hunch that this will solve the problem; I'll post again when I've checked it.


This is not really 'lazy-ass ATI cards'; this is a case of the driver trying to maximise speed over quality. If you need a certain precision then you should ask for it; don't assume the driver is always going to do what you want. I'm pretty certain that under the spec the driver is allowed to do what it likes with certain texture formats when you aren't explicit about it.

Yeah, I know the driver's doing what it thinks is best, I was just joking!

Although sacrificing quality to get something done quickly could be a definition of laziness..... ;)

You do make an interesting point though; I'll experiment with the quality settings in the driver and see what happens.

Just a suggestion, but try using clamp-to-border when creating your texture. I had a problem similar to this a while back, and that seemed to solve it.
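
For reference, a rough sketch of what that setup might look like (clamp-to-border needs OpenGL 1.3 or the ARB_texture_border_clamp extension; the transparent border colour is just an example):

/* Clamp to a fully transparent border so any sample that falls outside
   the image returns alpha 0 instead of wrapping to the opposite edge. */
const GLfloat borderColour[4] = { 0.0f, 0.0f, 0.0f, 0.0f };
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_BORDER);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_BORDER);
glTexParameterfv(GL_TEXTURE_2D, GL_TEXTURE_BORDER_COLOR, borderColour);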

Why are you using the alpha buffer to do a lens-flare effect?

Do you realize that the ALPHA internal format in your textures has nothing to do with the destination-alpha (GL_DST_ALPHA) blend factor?

Y.

Quote:
Original post by Ysaneya
Why are you using the alpha buffer to do a lens-flare effect?

Do you realize that the ALPHA internal format in your textures has nothing to do with the destination-alpha (GL_DST_ALPHA) blend factor?

Y.


I explained that above: I added that line from memory after I cut and pasted the code. I was stressed at the time and got it wrong; the destination factor should be GL_ONE.

If you load an RGBA texture and store your transparency in the alpha channel, then use this code to do your transparency...


glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);


Granted, you'll need to turn on alpha test and blending before drawing, and remember to turn them off again afterwards...
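
Roughly like this, for example (the alpha-test cutoff of 0.1 is just an example value):

/* Enable blending and alpha test, draw, then restore state. */
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glEnable(GL_ALPHA_TEST);
glAlphaFunc(GL_GREATER, 0.1f);   /* discard nearly transparent fragments */

/* ... draw the textured quads here ... */

glDisable(GL_ALPHA_TEST);
glDisable(GL_BLEND);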

