OpenGL Whatever happened to GLEE_ARB_imaging?

bencelot    204
Hey guys! I've got 2 questions:

1) I'm using GLee to check for extensions in my OpenGL program. Lately I've noticed that a lot of players are having problems with my game, and it turns out it's because GLEE_ARB_imaging is false, even though they have OpenGL version 2.0 or higher. Surely this cannot be! Has this extension stopped being supported or something? If so, what should I be checking for instead? These are the extensions my game uses:

GLEE_ARB_vertex_buffer_object
GLEE_EXT_multi_draw_arrays
GLEE_ARB_multitexture
GLEE_ARB_imaging
GLEE_ARB_shader_objects
GLEE_ARB_fragment_shader
GLEE_ARB_shading_language_100

As mentioned, GLEE_ARB_imaging often returns false when I suspect it shouldn't (because the players have OpenGL 2.0 or higher). I need all 7 of those extensions for my special shadow effect, so what would be the best way to check for them? Can I check for something like GLEE_VERSION_2_0 instead? Given those extensions, what is the lowest version of OpenGL I could check against and KNOW they'll all be supported?

2) Next question!! I've written a shader that works just fine on my computer, but it's not working as expected on others. The shader compiles on both computers yet shows a different result, and in both cases all 7 extensions above were met. I'm doing some research on it now, but thought I'd check in case someone here knows it off the top of their head: is there anything wrong with this code?
  //THE SHADER CODE
  shadowCode = 
    "uniform sampler2D baseTexture;"
    "uniform sampler2D shadowTexture;"
    "uniform sampler1D shadowHeightTexture;"
    "uniform bool mapMode;"
    "uniform bool fadeMode;"
    "uniform float contrast;"

    "void main() {"

    "  vec4 baseColour = texture2D(baseTexture, gl_TexCoord[0].st); "

    "  float shadowIntensity = texture2D(shadowTexture, gl_TexCoord[1].st).a; "
    "  if(mapMode || fadeMode) { "
    "    if(gl_TexCoord[2].s < 0.5) { "
    "      shadowIntensity = min(1.0f, shadowIntensity + (1.0-texture1D(shadowHeightTexture,gl_TexCoord[2].s).r) ); "
    "    } else { "
    "      shadowIntensity = max(shadowIntensity, (1.0-texture1D(shadowHeightTexture,gl_TexCoord[2].s).r) ); "
    "    } "
    "    shadowIntensity *= shadowIntensity; "
    "  } "

    "  gl_FragColor = gl_Color*baseColour; "

    "  float lumin = (gl_FragColor.r*0.3 + gl_FragColor.g*0.59 + gl_FragColor.b*0.11); "
    "  gl_FragColor.gb *= (contrast*((lumin - gl_FragColor.gb)/2.0) + 1); "
    "  gl_FragColor.rgb *= (lumin*contrast + 0.5*(2-contrast) + 0.2*contrast); "


    "  if(mapMode) { "
    "    gl_FragColor.rgb *= (1.0-shadowIntensity/3.0); "
    "  } "
    
    "  if(fadeMode) { "
    "    gl_FragColor.a *= 1.0 - shadowIntensity; "
    "  } "

    "}";

I'm thinking it could be that different versions of GLSL interpret the code differently: it still compiles, but behaves in a different way, much like different browsers treat the same HTML/CSS differently. Any ideas? Cheers! Ben.

Fiddler    860
1)

(a) ARB_imaging is not widely supported; it never was. You will find it on workstation cards (Quadro/FireGL) and that's about it.

(b) ARB_imaging was deprecated in OpenGL 3.0 and removed in OpenGL 3.1+. It might be present in a "compatible" OpenGL 3.2 context, but it most likely won't be.

(c) The recommended approach is to duplicate ARB_imaging functionality using supported OpenGL facilities. Most of this extension is trivial to implement. Some parts are not (e.g. convolution filters), but these parts are more or less useless anyway.
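For example, the colour-matrix part of the extension boils down to a single multiply in a fragment shader. A minimal sketch, written in the same string-literal style as your shader (the "colourMatrix" uniform name is made up for illustration, not part of any spec):

  colourMatrixCode =
    "uniform sampler2D baseTexture;"
    "uniform mat4 colourMatrix;"
    "void main() {"
    "  gl_FragColor = colourMatrix * texture2D(baseTexture, gl_TexCoord[0].st);"
    "}";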


2) You are using non-standard GLSL features. For example:

contrast*((lumin - gl_FragColor.gb)/2.0) + 1


Nvidia drivers will accept code such as this, but the code itself is not correct (it should be "+ 1.0", not "+ 1"; strict GLSL does not implicitly convert integers to floats).

There is a way to turn on "standards mode" on Nvidia cards - use it. This will force you to write correct GLSL code and will improve compatibility.

Edit: your code above is also incorrect (you are assigning a float to a vec3).

[Edited by - Fiddler on November 8, 2009 11:18:44 AM]

bencelot    204
Thanks Fiddler,

Do you know how to enable this standards mode? Google isn't helping me. Do I find it in the NVIDIA control panel, or is it something I have to define at the top of my shader code?

Also, when you say assigning a float to a vec3, are you referring to this line?

gl_FragColor.rgb *= (1.0-shadowIntensity/3.0);

If so, how would I write it? Would this work?

gl_FragColor = vec4(gl_FragColor.rgb*(1.0-shadowIntensity/3.0), 1);

Or is it that you can't read from gl_FragColor on some graphics cards, in which case I should create a new vec4 to work with and just assign it to gl_FragColor at the very end?

This all compiles and works fine on my computer, so it's hard to test where the bug is :s

Thanks!



bencelot    204
Update!

I think I found what you meant by running in standards mode.

One at a time I put:
#version 100\n
#version 110\n
#version 120\n
#version 130\n

at the top of the shader to check for any errors that came up, and fixed them all. I also rewrote the shader as such:



  //THE SHADER CODE
  shadowCode =
    "#version 130\n"
    "uniform sampler2D baseTexture;"
    "uniform sampler2D shadowTexture;"
    "uniform sampler1D shadowHeightTexture;"
    "uniform bool mapMode;"
    "uniform bool fadeMode;"
    "uniform float contrast;"

    "void main() {"

    "  vec4 baseColour = texture2D(baseTexture, gl_TexCoord[0].st); "

    "  float shadowIntensity = texture2D(shadowTexture, gl_TexCoord[1].st).a; "
    "  if(mapMode || fadeMode) { "
    "    if(gl_TexCoord[2].s < 0.5) { "
    "      shadowIntensity = min(1.0, shadowIntensity + (1.0-texture1D(shadowHeightTexture,gl_TexCoord[2].s).r) ); "
    "    } else { "
    "      shadowIntensity = max(shadowIntensity, (1.0-texture1D(shadowHeightTexture,gl_TexCoord[2].s).r) ); "
    "    } "
    "    shadowIntensity *= shadowIntensity; "
    "  } "

    "  baseColour *= gl_Color; "

    "  float lumin = (baseColour.r*0.3 + baseColour.g*0.59 + baseColour.b*0.11); "
    "  baseColour.gb *= (contrast*((lumin - baseColour.gb)/2.0) + 1.0); "
    "  baseColour.rgb *= (lumin*contrast + 0.5*(2.0-contrast) + 0.2*contrast); "

    "  if(mapMode) { "
    "    baseColour.rgb *= (1.0-shadowIntensity/3.0); "
    "  } "

    "  if(fadeMode) { "
    "    baseColour.a *= (1.0 - shadowIntensity); "
    "  } "

    "  gl_FragColor = baseColour; "

    "}";




Which I think is better, though I've yet to test it on an ATI card.

I did get one weird thing, however, when I tried adding #version 140\n at the top. I got an error message saying:

"0(2) : error C7533: global variable gl_FragColor is deprecated after version 120"

which is kinda weird considering that #version 130 worked. Any ideas why this might be?

Also should I leave the "#version 130\n" in there for release, or is it just used for testing purposes?

Cheers!

[Edited by - bencelot on November 8, 2009 7:49:59 PM]

Fiddler    860
gl_FragColor is considered deprecated starting with OpenGL 3.0. You are supposed to provide and bind your own output variable(s) - check the specs on how to do this.
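A minimal sketch of that style, assuming a #version 130 shader and a GL 3.0 context ("fragColour" and "program" are illustrative names, not from your code):

  //GLSL side: declare your own output instead of gl_FragColor
  code =
    "#version 130\n"
    "out vec4 fragColour;"
    "void main() { fragColour = vec4(1.0, 1.0, 1.0, 1.0); }";

  //C side: bind the output to colour number 0, then (re)link the program
  glBindFragDataLocation(program, 0, "fragColour");
  glLinkProgram(program);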

In general, I'd suggest using the earliest #version that can compile your shader (if only to improve compatibility). #version 130 and higher require newer graphics cards and drivers, and they generally change the shader syntax ("in" vs "attribute", explicit output variables, etc.), so if you don't need any of their new features it would be simpler to avoid them for now.

bencelot    204
Ahh ok.

Well, it compiles just fine using "#version 100\n", so I guess I should use that? My only concern is that maybe this older version runs slower, or will one day become deprecated?

If I don't declare any version at all, will the compiler just use the latest version available to it, or will it check older versions until it finds one that's compatible?

bencelot    204
Oh, and 1 more question :)

This is in relation to the original question. The reason I was checking for GLEE_ARB_imaging is that I need to use glBlendEquation(GL_MAX). If ARB_imaging is no longer supported, what can I do?

1) Can I simply check for glBlendEquation some other way?

or 2) Will I have to duplicate this functionality using supported features? If so, any ideas?

Cheers

Fiddler    860
glBlendEquation is supported if the OpenGL version is 1.2 or higher. It is also available in forward-compatible GL 3.x contexts (i.e. it's not deprecated, even though ARB_imaging is).

In other words, simply check if version >= 1.2. If it is, you can use this function at will.
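A sketch of how that check looks with GLee's version flags (assuming GLee also exposes the older EXT_blend_minmax extension, which can serve as a pre-1.2 fallback):

  if (GLEE_VERSION_1_2) {
    glBlendEquation(GL_MAX);         //entry point shipped with GL 1.2 and later
  } else if (GLEE_EXT_blend_minmax) {
    glBlendEquationEXT(GL_MAX_EXT);  //fallback via the original extension
  }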

There shouldn't be a performance difference whether you add "#version 100" or not. I'd suggest checking the OpenGL Shading Language specification to see what this declaration really does.


