bencelot

OpenGL Whatever happened to GLEE_ARB_imaging?


Hey guys! I've got 2 questions:

1) I'm using GLEE to check for extensions in my OpenGL program. Lately, though, I've noticed a lot of players playing my game are getting problems, and it turns out that it's because GLEE_ARB_imaging is false... even though they have OpenGL version 2.0 or higher. Surely this cannot be! Has this extension stopped being supported or something? If so, what should I be checking for instead? These are the extensions my game uses:

GLEE_ARB_vertex_buffer_object
GLEE_EXT_multi_draw_arrays
GLEE_ARB_multitexture
GLEE_ARB_imaging
GLEE_ARB_shader_objects
GLEE_ARB_fragment_shader
GLEE_ARB_shading_language_100

As mentioned, GLEE_ARB_imaging is often returning false when I suspect it shouldn't be (because those players have OpenGL 2.0 or higher). I need all 7 of those extensions for my special shadow effect, so what would be the best way to check for them? Can I check for something like GLEE_VERSION_2_0 instead? Given those extensions, what is the lowest version of OpenGL I could check against and KNOW it'll support them?

2) Next question!! I've written a shader that works just fine on my computer, but it's not working as expected on others. The shader compiles on both computers, yet shows a different result. In both cases all 7 extensions above were met. I'm doing some research on it now, but thought I'd check in case someone here knows it off the top of their head: is there anything wrong with this code?
  //THE SHADER CODE
  shadowCode = 
    "uniform sampler2D baseTexture;"
    "uniform sampler2D shadowTexture;"
    "uniform sampler1D shadowHeightTexture;"
    "uniform bool mapMode;"
    "uniform bool fadeMode;"
    "uniform float contrast;"

    "void main() {"

    "  vec4 baseColour = texture2D(baseTexture, gl_TexCoord[0].st); "

    "  float shadowIntensity = texture2D(shadowTexture, gl_TexCoord[1].st).a; "
    "  if(mapMode || fadeMode) { "
    "    if(gl_TexCoord[2].s < 0.5) { "
    "      shadowIntensity = min(1.0f, shadowIntensity + (1.0-texture1D(shadowHeightTexture,gl_TexCoord[2].s).r) ); "
    "    } else { "
    "      shadowIntensity = max(shadowIntensity, (1.0-texture1D(shadowHeightTexture,gl_TexCoord[2].s).r) ); "
    "    } "
    "    shadowIntensity *= shadowIntensity; "
    "  } "

    "  gl_FragColor = gl_Color*baseColour; "

    "  float lumin = (gl_FragColor.r*0.3 + gl_FragColor.g*0.59 + gl_FragColor.b*0.11); "
    "  gl_FragColor.gb *= (contrast*((lumin - gl_FragColor.gb)/2.0) + 1); "
    "  gl_FragColor.rgb *= (lumin*contrast + 0.5*(2-contrast) + 0.2*contrast); "


    "  if(mapMode) { "
    "    gl_FragColor.rgb *= (1.0-shadowIntensity/3.0); "
    "  } "
    
    "  if(fadeMode) { "
    "    gl_FragColor.a *= 1.0 - shadowIntensity; "
    "  } "

    "}";

I'm thinking it could be that different versions of GLSL interpret the code differently, and while it'll still compile, it'll behave in a different way. Much like different browsers treat the same HTML/CSS differently. Any ideas? Cheers! Ben.

1)

(a) ARB_imaging is not widely supported; it never was. You will find it in workstation cards (Quadro/FireGL) and that's it.

(b) ARB_imaging was deprecated in OpenGL 3.0 and removed in OpenGL 3.1+. It might be present in a "compatible" OpenGL 3.2 context, but it most likely won't be.

(c) The recommended approach is to duplicate ARB_imaging functionality using supported OpenGL facilities. Most of this extension is trivial to implement. Some parts are not (e.g. convolution filters), but these parts are more or less useless anyway.
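To make the "trivial to implement" point concrete: here is a minimal CPU-side sketch of one such piece, a glColorTable-style lookup applied to a pixel buffer before it is uploaded as a texture. The helper name is hypothetical, not from this thread:

```c
#include <stddef.h>

/* Hypothetical CPU-side replacement for the glColorTable part of
 * ARB_imaging: remap every byte of a pixel buffer through a
 * 256-entry lookup table before uploading it as a texture. */
static void apply_color_table(unsigned char *pixels, size_t count,
                              const unsigned char table[256])
{
    for (size_t i = 0; i < count; ++i)
        pixels[i] = table[pixels[i]];
}
```

Anything the fixed-function imaging subset did per pixel can be moved either to the CPU like this or into a fragment shader.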


2) You are using non-standard GLSL features. For example:

(lumin - gl_FragColor.gb)/2.0) + 1


Nvidia drivers will accept code such as this, but the code itself is not correct (it should be "+ 1.0", not "+ 1").

There is a way to turn on "standards mode" on Nvidia cards - use it. This will force you to write correct GLSL code and will improve compatibility.

Edit: your code above is also incorrect (you are assigning a float to a vec3).

[Edited by - Fiddler on November 8, 2009 11:18:44 AM]

Thanks Fiddler,

Do you know how to enable this standards mode? Google isn't helping me. Do I find it through my NVIDIA control panel or is it something that I have to define up the top of my shader code?

Also, when you say assigning a float to a vec3, are you referring to this line?

gl_FragColor.rgb *= (1.0-shadowIntensity/3.0);

if so how would I write it? Would this work?

gl_FragColor = vec4(gl_FragColor.rgb*(1.0-shadowIntensity/3.0), 1);

Or is it something along the lines of not being able to read from gl_FragColor on some gfx cards? In that case, should I be creating a new vec4 to work with and then just assigning it to gl_FragColor at the very end?

This all compiles and works fine on my computer, so it's hard to test where the bug is :s

Thanks!



Update!

I think I found what you mean by running in standards mode.

One at a time I put:
#version 100\n
#version 110\n
#version 120\n
#version 130\n

up the top of the shader to check for any errors that came up and fixed them all. I also rewrote the shader as such:



  //THE SHADER CODE
  shadowCode =
    "#version 130\n"
    "uniform sampler2D baseTexture;"
    "uniform sampler2D shadowTexture;"
    "uniform sampler1D shadowHeightTexture;"
    "uniform bool mapMode;"
    "uniform bool fadeMode;"
    "uniform float contrast;"

    "void main() {"

    "  vec4 baseColour = texture2D(baseTexture, gl_TexCoord[0].st); "

    "  float shadowIntensity = texture2D(shadowTexture, gl_TexCoord[1].st).a; "
    "  if(mapMode || fadeMode) { "
    "    if(gl_TexCoord[2].s < 0.5) { "
    "      shadowIntensity = min(1.0, shadowIntensity + (1.0-texture1D(shadowHeightTexture,gl_TexCoord[2].s).r) ); "
    "    } else { "
    "      shadowIntensity = max(shadowIntensity, (1.0-texture1D(shadowHeightTexture,gl_TexCoord[2].s).r) ); "
    "    } "
    "    shadowIntensity *= shadowIntensity; "
    "  } "

    "  baseColour *= gl_Color; "

    "  float lumin = (baseColour.r*0.3 + baseColour.g*0.59 + baseColour.b*0.11); "
    "  baseColour.gb *= (contrast*((lumin - baseColour.gb)/2.0) + 1.0); "
    "  baseColour.rgb *= (lumin*contrast + 0.5*(2.0-contrast) + 0.2*contrast); "

    "  if(mapMode) { "
    "    baseColour.rgb *= (1.0-shadowIntensity/3.0); "
    "  } "

    "  if(fadeMode) { "
    "    baseColour.a *= (1.0 - shadowIntensity); "
    "  } "

    "  gl_FragColor = baseColour; "

    "}";




Which I think is better, though I'm yet to test it on an ATI card.

I did get one weird thing however when I tried adding #version 140\n up the top. I got an error message saying:

"0(2) : error C7533: global variable gl_FragColor is deprecated after version 120"

which is kinda weird considering that #version 130 worked. Any ideas why this might be?

Also should I leave the "#version 130\n" in there for release, or is it just used for testing purposes?

Cheers!

[Edited by - bencelot on November 8, 2009 7:49:59 PM]

gl_FragColor is considered deprecated starting with OpenGL 3.0. You are supposed to provide and bind your own output variable(s) - check the specs on how to do this.

In general, I'd suggest using the earliest #version that can compile your shader (if only to improve compatibility). #version 130 and higher require newer graphics cards and drivers to run, and generally change the shader syntax ("in" vs "attribute", explicit output variables, etc.), so if you don't need any of their new features it would be simpler to avoid them for now.
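One way to follow that advice is to keep the #version directive out of the shader body and prepend it at load time, so the same body can be tried against several versions. A minimal C sketch, with a hypothetical helper name (the directive must be the very first line of the source handed to glShaderSource, hence the joining newline):

```c
#include <stdio.h>

/* Sketch: prepend a #version directive to a shader body before it
 * is passed to glShaderSource. Returns 0 on success, -1 if the
 * output buffer was too small. */
static int add_version_directive(char *out, size_t out_size,
                                 const char *version_line,
                                 const char *shader_body)
{
    int n = snprintf(out, out_size, "%s\n%s", version_line, shader_body);
    return (n >= 0 && (size_t)n < out_size) ? 0 : -1;
}
```

This keeps one copy of the shader text and makes it easy to test against "#version 110", "#version 120", and so on, the way the post above describes.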

Ahh ok.

Well, it compiles just fine using "#version 100\n", so I guess I should use that? My only concern is that maybe this older version runs slower, or will one day become deprecated?

If I don't declare any version at all, will the compiler just use the latest version available to it, or will it check older versions until it finds one that's compatible?

Oh, and 1 more question :)

This is in relation to the original question. The reason I was checking for GLEE_ARB_imaging is because I need to use glBlendEquation(GL_MAX). If ARB_imaging is no longer supported what can I do?

1) Can I simply check for glBlendEquation some other way?

or 2) Will I have to duplicate this functionality using supported functionality. If so any ideas?

Cheers

glBlendEquation is supported if OpenGL version is 1.2 or higher. It is also available in forward compatible GL 3.x contexts (i.e. it's not deprecated, even if ARB_imaging is).

In other words, simply check if version >= 1.2. If it is, you can use this function at will.

There shouldn't be a performance difference whether you add "#version 100" or not. I'd suggest checking the OpenGL Shading Language specification to see what this declaration really does.
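Since the string returned by glGetString(GL_VERSION) starts with "major.minor", that version check only takes a few lines of parsing. A sketch with an illustrative helper name (GLee also exposes booleans such as GLEE_VERSION_1_2 that wrap the same check, if its usual API applies here):

```c
#include <stdio.h>

/* Sketch: parse a GL_VERSION string ("major.minor", optionally
 * followed by a release number and vendor text) and report whether
 * it is at least the requested version. glBlendEquation is core
 * from GL 1.2 onward. */
static int version_at_least(const char *version_string, int major, int minor)
{
    int maj = 0, min = 0;
    if (sscanf(version_string, "%d.%d", &maj, &min) != 2)
        return 0;
    return (maj > major) || (maj == major && min >= minor);
}
```

Usage would be along the lines of `if (version_at_least((const char *)glGetString(GL_VERSION), 1, 2)) { /* glBlendEquation(GL_MAX) is safe */ }`.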

