
OpenGL Upgraded from GFX5200 to 9800 Pro, need help!


I've got myself a Radeon 9800 Pro to replace my old GeForce 5200. In my main development project (a space combat/exploration game) I draw the sun and lens flares with 2D textured quads and blending enabled, which worked fine on the old card, but on the ATI I can see the part of the quad that should be transparent, i.e. the circle I want to see plus a dim square around it. I'll post code if needed, but my main concern is that I thought OpenGL was more or less standardised across different cards. I realise that they are different manufacturers with different drivers etc., but I'm not using any fancy extensions or anything, just basic blending and textured quads, and there's a big difference in what each card produces. Please help, I've lost a little faith in OpenGL!!!

Quote:
Original post by comedypedro
I've got myself a Radeon 9800 Pro to replace my old GeForce 5200. [...] I draw the sun and lens flares with 2D textured quads and blending enabled, which worked fine on the old card, but on the ATI I can see the part of the quad that should be transparent, i.e. the circle I want to see plus a dim square around it. [...] I thought OpenGL was more or less standardised across different cards.


It is standardized. What probably happened is that you were relying on a specific artifact of one card that the standard happens to allow. It's like relying on a Microsoft VC++-specific or GCC-specific quirk that isn't quite a bug.

Most likely your texture or rendering code is not quite right; you just didn't notice it as much on the old card.

frob.

Thanks for your reply. I understand what you mean when you compare it to relying on MS VC++ behaving exactly like the C++ standard, but the output from the two cards is very different indeed, and I would have thought the OpenGL standardisation (if that's the right word) would be tighter. I mean, if I was running an ancient TNT card or something and went up to a state-of-the-art SLI setup I'd expect some differences, but the 5200 and the 9800 are same-generation boards.

I'm going to play around with it a bit to get it working, and I'm also going to keep the current build to try out on other machines/cards.

Thanks again for replying and any further comments are very welcome.

This sounds like it could be something like having alternate texel centers turned on for one of your cards and not the other. Look in your drivers for something like that; I can't remember if it is in both ATI and NVIDIA drivers or just one of them. Having this turned on makes OpenGL sample from the center of a texel instead of the bottom-left corner, and depending on your wrap mode this could end up making the texture blend with the border on one card and not on the other if you are using bilinear filtering.
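
If the wrap mode is the culprit, something like this should rule it out (just a sketch; `tex` is a hypothetical texture object for the flare, set up once after creating the texture):

/* Sketch only: clamp the flare texture to its own edge so bilinear filtering
 * never pulls in the border colour or the opposite edge of the image.
 * GL_CLAMP_TO_EDGE needs OpenGL 1.2 or later, which both cards have. */
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);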

Other than that I think we will need to see some code and possibly a screenshot.

I think I know what you're talking about and I'll look into the driver settings, but it's the whole square I can see, not just the border. I'm looking into posting a picture (can I upload a pic to GameDev or do I need to find somewhere else to host??) and then I'll post the code too.

Cheers

What is your near/far plane set to? This might be a depth precision problem. Also, you might want to try disabling depth buffer writes when rendering that texture and see if that helps...

glDepthMask(GL_FALSE);
// ... draw the flare quads here ...
glDepthMask(GL_TRUE);

Quote:
Original post by comedypedro
Thanks for your reply. I understand what you mean when you compare it to relying on MS VC++ behaving exactly like the C++ standard, but the output from the two cards is very different indeed, and I would have thought the OpenGL standardisation (if that's the right word) would be tighter.

That has nothing to do with standardization. If you use an API in an incorrect way or rely on undefined behaviour, then the results will be unpredictable. The standard is very clear on what is defined and what is not. In the C example, this is like relying on the value of an uninitialised variable and then complaining about the language being non-standardized when your code crashes.

Either your background alpha is not completely transparent (i.e. zero), and your 5200 allocated an RGBA5551 texture (essentially making it transparent by truncation) while the 9800 allocates an RGBA8 texture.

Or you have done something wrong with the texenv combine pipeline, and some non-zero alpha is leaking in somewhere. Post your code.
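
A quick way to see which internal format the driver actually gave you (a rough sketch; bind the texture first and make sure a context is current):

GLint alphaBits = 0;
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_ALPHA_SIZE, &alphaBits);
printf("texture alpha bits: %d\n", alphaBits);  /* 8 on one card and 5 or 1 on the other points at the truncation above */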

Also, did you enable fullscreen antialiasing on the 9800?

Quote:
Original post by Kalidor
This sounds like it could be something like having alternate texel centers turned on for one of your cards and not the other. Look in your drivers for something like that; I can't remember if it is in both ATI and NVIDIA drivers or just one of them. Having this turned on makes OpenGL sample from the center of a texel instead of the bottom-left corner

OpenGL samples from the center by default.

Quote:
Original post by MARS_999
What is your near/far plane set to? This might be a depth precision problem.

From his problem description, this is most definitely not a depth buffer problem.

Quote:
Original post by Yann L
That has nothing to do with standardization. If you use an API in an incorrect way or rely on undefined behaviour, then the results will be unpredictable. The standard is very clear on what is defined and what is not. In the C example, this is like relying on the value of an uninitialised variable and then complaining about the language being non-standardized when your code crashes.
...
From his problem description, this is most definitely not a depthbuffer problem.


Regarding the first bit of the quote, for the OP: the standard allows three kinds of behavior, namely defined behavior (DB), implementation-defined behavior (IB), and undefined behavior (UB).

Obviously the bug is relying on either IB or UB. And I agree that it isn't a depth buffer problem based on the description. It is almost certainly a texture rasterization issue dealing with the alpha values.

Without seeing code, though, it's just a guess.

My first thought was exactly what you mentioned. It might be from the conversion to the card's internal color format (the 5551 conversion), either through an incorrect internal format or an incorrect source image format. Those conversions and supported formats are implementation-defined. I doubt this is the cause, though, since both cards are great at handling that if the rendering contexts are set up similarly.

My second thought was that it was a driver setting. Most video card drivers allow forcing certain values.

My third thought was that the app is incorrectly enumerating and obtaining the rendering context: perhaps something obtained in the first card's context was not specified as a requirement, so the second card didn't provide it because it didn't have to.
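
On Windows that usually comes down to the pixel format request; here's a rough, hypothetical WGL sketch (`hdc` being the window's device context) of asking for destination alpha explicitly instead of assuming it:

/* Hypothetical sketch: request destination alpha and depth bits explicitly
 * so the second card can't legitimately hand back less than the first did. */
PIXELFORMATDESCRIPTOR pfd = {0};
pfd.nSize      = sizeof(pfd);
pfd.nVersion   = 1;
pfd.dwFlags    = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
pfd.iPixelType = PFD_TYPE_RGBA;
pfd.cColorBits = 24;
pfd.cAlphaBits = 8;   /* only guaranteed if you ask for it */
pfd.cDepthBits = 24;
int format = ChoosePixelFormat(hdc, &pfd);
SetPixelFormat(hdc, format, &pfd);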

My fourth thought was that the programmer might be using the GL in a way that is undefined, giving bad results.

I'm sure I could come up with a bunch more, but without seeing the code, it's anybody's guess.

frob.

Thanks very much for all your replies, I was worried that no one would have a clue what I was on about. I'm now looking on this as a learning experience, and I guess things like this are why PC developers have such large testing and quality assurance teams!! It also highlights a big advantage of consoles, i.e. if it works on one PlayStation it'll work on them all!!

Right, so here are some photos of the problem:

Pic 1

Pic 2

And what I think/hope is the relevant code:

The texture code :

glPixelStorei(GL_UNPACK_ALIGNMENT, 4);
glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);
glPixelStorei(GL_UNPACK_SKIP_ROWS, 0);
glPixelStorei(GL_UNPACK_SKIP_PIXELS, 0);

glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA, iWidth, iHeight, 0, GL_ALPHA, GL_UNSIGNED_BYTE, pImageData);



Probably not needed, but here is the rendering code:

float fAlpha = 1 - fLength / 1280;
fAlpha -= 0.5;

glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_DST_ALPHA);

glDisable(GL_LIGHTING);
glDisable(GL_DEPTH_TEST);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, HaloTex);
glColor4f(1.0f, 0.6f, 0.6f, fAlpha);
pApp->Draw2dOrthoQuad(vPos.x + vScreenCentre.x, vPos.y + vScreenCentre.y, 50, 50, DRAW_PARAM_CENTRE);
glColor4f(0.5f, 1.0f, 0.5f, fAlpha);
pApp->Draw2dOrthoQuad(vPos2.x + vScreenCentre.x, vPos2.y + vScreenCentre.y, 70, 70, DRAW_PARAM_CENTRE);
glColor4f(0.5f, 0.5f, 1.0f, fAlpha - 0.1f);
pApp->Draw2dOrthoQuad(vPos3.x + vScreenCentre.x, vPos3.y + vScreenCentre.y, 150, 150, DRAW_PARAM_CENTRE);
glBindTexture(GL_TEXTURE_2D, SpotTex);
glColor4f(0.5f, 0.5f, 1.0f, fAlpha);
pApp->Draw2dOrthoQuad(vPos4.x + vScreenCentre.x, vPos4.y + vScreenCentre.y, 20, 20, DRAW_PARAM_CENTRE);
glColor4f(0.5f, 1.0f, 0.5f, fAlpha - 0.1f);
pApp->Draw2dOrthoQuad(vPos5.x + vScreenCentre.x, vPos5.y + vScreenCentre.y, 20, 20, DRAW_PARAM_CENTRE);
glColor4f(1.0f, 0.5f, 0.5f, fAlpha);
pApp->Draw2dOrthoQuad(vPos6.x + vScreenCentre.x, vPos6.y + vScreenCentre.y, 20, 20, DRAW_PARAM_CENTRE);

glBindTexture(GL_TEXTURE_2D, SunTex);

fAlpha -= 0.1f;

glPushMatrix();

glTranslatef(vLightPos.x, vLightPos.y, 0);

glColor4f(1.0f, 0.0f, 0.0f, fAlpha);
glRotatef(fLength, 0.0f, 0.0f, 1.0f);
pApp->Draw2dOrthoQuad(0, 0, 600 - fLength, 600 - fLength, DRAW_PARAM_CENTRE);
glColor4f(0.0f, 1.0f, 0.0f, fAlpha);
glRotatef(fLength * 0.5, 0.0f, 0.0f, 1.0f);
pApp->Draw2dOrthoQuad(0, 0, 600 - fLength, 600 - fLength, DRAW_PARAM_CENTRE);
glColor4f(0.0f, 0.0f, 1.0f, fAlpha);
glRotatef(fLength * 0.2, 0.0f, 0.0f, 1.0f);
pApp->Draw2dOrthoQuad(0, 0, 600 - fLength, 600 - fLength, DRAW_PARAM_CENTRE);

glPopMatrix();



Any help or suggestions are very welcome, and thanks again.

Sounds like some messed up pixel centers. Try checking the "use alternate pixel centers" box in the ATI driver control panel...

Hmm ... glBlendFunc(GL_SRC_ALPHA, GL_DST_ALPHA);

Are you sure your framebuffer has an alpha component? What value do you clear it to? (And why not just use GL_ONE instead?)
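
For a lens flare, additive blending never reads destination alpha at all, so it works even when the framebuffer has no alpha bits; a minimal sketch:

glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE);   /* add the flare on top of whatever is already there */
/* ... draw the flare quads ... */
glDisable(GL_BLEND);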

OK, it's been suggested on opengl.org that I replace

glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA, .......

with

glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA8, .....

This forces lazy-ass ATI cards to use 8-bit precision. I have a hunch that this will solve the problem; I'll post again when I've checked it.
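
In full, with the same variables as the texture code I posted above, the call would become something like this (sketch):

glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA8, iWidth, iHeight, 0,
             GL_ALPHA, GL_UNSIGNED_BYTE, pImageData);  // sized internal format: ask for at least 8 bits of alpha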

Quote:
Original post by Fingers_
Hmm ... glBlendFunc(GL_SRC_ALPHA, GL_DST_ALPHA);

Are you sure your framebuffer has an alpha component? What value do you clear it to? (And why not just use GL_ONE instead?)


You're probably right, I added those two lines from memory after I cut and pasted the rest of the code. My brain is pickled from this and from college, sorry!

Quote:
Original post by comedypedro
OK its been suggested on opengl.org that I replace

glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA, .......

with

glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA8, .....

This forces lazy-ass ATI cards to use 8-bit precision. I have a hunch that this will solve the problem; I'll post again when I've checked it.


This is not really 'lazy-ass ATI cards'; this is a case of the driver trying to maximise speed over quality. If you need a certain precision then you should ask for it; don't assume the driver is always going to do what you want. I'm pretty certain that under the spec the driver is allowed to do what it likes with regard to certain texture formats when you aren't explicit about it.

Yeah, I know the driver's doing what it thinks is best, I was just joking!

Although sacrificing quality to get something done quickly could be a definition of laziness..... ;)

You do make an interesting point though; I'll experiment with the quality settings in the driver and see what happens.

Just a suggestion, but try using clamp-to-border when creating your texture. I had a problem similar to this a while back, and that seemed to solve it.
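
Something like this when setting up the texture (a rough sketch; assumes the texture is bound and that GL_CLAMP_TO_BORDER is available, i.e. OpenGL 1.3 or ARB_texture_border_clamp):

const GLfloat transparentBorder[4] = { 0.0f, 0.0f, 0.0f, 0.0f };
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_BORDER);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_BORDER);
glTexParameterfv(GL_TEXTURE_2D, GL_TEXTURE_BORDER_COLOR, transparentBorder);  /* filtering fades to fully transparent at the edges */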

Why are you using the alpha buffer to do a lens flare effect?

Do you realize that the GL_ALPHA internal format of your textures has nothing to do with the GL_DST_ALPHA destination blend factor?

Y.

Quote:
Original post by Ysaneya
Why are you using the alpha buffer to do a lens flare effect?

Do you realize that the GL_ALPHA internal format of your textures has nothing to do with the GL_DST_ALPHA destination blend factor?

Y.


I explained that above: I added that line from memory after I cut and pasted the code. I was stressed at the time and got it wrong; the destination factor should be GL_ONE.

If you load an RGBA texture and store your transparency in the alpha channel, then use this code to do your transparency...


glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);


Granted, turn on your alpha test and blending, and remember to turn them off afterwards...
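
A minimal sketch of that state setup (the alpha test here just discards fully transparent texels; tweak the reference value to taste):

glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glEnable(GL_ALPHA_TEST);
glAlphaFunc(GL_GREATER, 0.0f);
// ... draw the textured quads ...
glDisable(GL_ALPHA_TEST);
glDisable(GL_BLEND);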
