OpenGL Unrecognized OpenGL version

stella1016    122
I am using Qt (+ OpenGL) to implement a program, and I need to test it on different operating systems. It has problems on Fedora 12 and Ubuntu 9.10: the terminal shows "Unrecognized OpenGL version" and the display is frozen.

I checked the OpenGL version on the different systems. The ones that work have a lower OpenGL version, for example:

OpenSUSE 10.3 32bit:
server glx version string: 1.2
client glx version string: 1.4
OpenGL version string: 1.4 (2.1 Mesa 7.0.1)

The ones that don't work:

Ubuntu 9.10 32bit:
server glx version string: 1.2
client glx version string: 1.4
OpenGL version string: 2.1 ( Mesa 7.6)
OpenGL shading language version string: 1.20

Fedora 12 32bit:
server glx version string: 1.2
client glx version string: 1.4
OpenGL version string: 2.1 Mesa 7.7-devel
OpenGL shading language version string: 1.20

I am thinking the problem might be that the higher OpenGL version doesn't support my program. Am I right? Does anyone know a solution? Thanks in advance.

stella1016    122
I found a hint on the OpenGL website:
--------------
23.040 How can I code for different versions of OpenGL?

Because a feature or extension is available on the OpenGL development environment you use for building your app, it doesn't mean it will be available for use on your end user's system. Your code must avoid making feature or extension calls when those features and extensions aren't available.

When your program initializes, it must query the OpenGL library for information on the OpenGL version and available extensions, and surround version- and extension-specific code with the appropriate conditionals based on the results of that query. For example:

#include <stdlib.h>
...
int gl12Supported;
gl12Supported = atof(glGetString(GL_VERSION)) >= 1.2;
...
if (gl12Supported) {
    // Use OpenGL 1.2 functionality
}
--------------
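For what it's worth, a slightly more robust variant of that check (a minimal sketch, not from the FAQ) parses the leading major.minor numbers explicitly instead of relying on atof(), which is locale-dependent; it also copes with Mesa-style strings such as "2.1 Mesa 7.7-devel" or "1.4 (2.1 Mesa 7.0.1)":

// Minimal sketch: parse the leading "major.minor" out of GL_VERSION.
// Requires a current GL context.
#include <cstdio>
#include <GL/gl.h>

bool glVersionAtLeast(int wantMajor, int wantMinor)
{
    const char *version = reinterpret_cast<const char *>(glGetString(GL_VERSION));
    if (!version)
        return false;                          // no current context

    int major = 0, minor = 0;
    if (std::sscanf(version, "%d.%d", &major, &minor) != 2)
        return false;                          // unexpected format

    return major > wantMajor || (major == wantMajor && minor >= wantMinor);
}

// Usage: if (glVersionAtLeast(1, 2)) { /* use OpenGL 1.2 functionality */ }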

What is the difference between OpenGL version 1.4 and 2.1?

The main difference between them is that OpenGL 2.1 has buffer objects and shaders and is much more up to date. It is also what most of the hardware in the wild supports.

It is fully backwards compatible with version 1.4, unlike the more recent versions such as 3.3 and 4.0. Those add features that are only found in more recent GPUs and drop functionality that is no longer supported in hardware.
Versions 3.0 and 3.1 are "in between": they still support the old features but consider them deprecated.

stella1016    122
But I still have a problem with my application.

I compiled my application on OpenSUSE 10.3 and executed it on Ubuntu 9.10. The problem only appears on Ubuntu 9.10, and the only difference I could find is the OpenGL version. If OpenGL 2.1 is backwards compatible, I don't understand why the problem occurs...

HuntsMan    368
That message means Qt couldn't parse the version string from OpenGL (glGetString(GL_VERSION)), probably because it's malformed, and that would be a driver bug.

You can check how that message is triggered here
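If you want to reproduce that check yourself, here is a minimal sketch (assuming Qt 4's QtOpenGL module, i.e. QGLWidget and QGLFormat) that creates a context and prints both the raw version string and what Qt managed to parse from it:

// Sketch: compare the raw GL version string with Qt's parsed version flags.
// Build with QT += opengl.
#include <QApplication>
#include <QGLWidget>
#include <QDebug>

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);

    QGLWidget widget;
    widget.makeCurrent();   // openGLVersionFlags() needs a current context

    qDebug() << "GL_VERSION:"
             << reinterpret_cast<const char *>(glGetString(GL_VERSION));

    QGLFormat::OpenGLVersionFlags flags = QGLFormat::openGLVersionFlags();
    qDebug() << "Qt sees GL >= 1.4:" << bool(flags & QGLFormat::OpenGL_Version_1_4);
    qDebug() << "Qt sees GL >= 2.1:" << bool(flags & QGLFormat::OpenGL_Version_2_1);

    return 0;
}

If the raw string looks sane but all the flags come out false, then the parsing in Qt (or the string produced by the driver) is the problem.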

Kambiz    758
Are the appropriate drivers installed? What is the vendor string? Mesa appearing in the output you posted makes me believe you are using the software rasterizer. I use Qt for my OpenGL applications on Ubuntu 9.04 and 9.10 and had no problems with the NVidia drivers installed.

Qt seems to expect the version string to look like, for example, "1.4", not like "1.4 (2.1 Mesa 7.0.1)". The NVidia and ATI drivers do return it in the expected format.
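To see which implementation you are actually getting, a quick diagnostic (a sketch; it needs a current GL context) is to dump the identification strings next to the version:

// Diagnostic sketch: "Mesa" or "Software Rasterizer" in the vendor/renderer
// strings usually means no hardware acceleration is in use.
#include <cstdio>
#include <GL/gl.h>

void printGLInfo()
{
    std::printf("GL_VENDOR:   %s\n", reinterpret_cast<const char *>(glGetString(GL_VENDOR)));
    std::printf("GL_RENDERER: %s\n", reinterpret_cast<const char *>(glGetString(GL_RENDERER)));
    std::printf("GL_VERSION:  %s\n", reinterpret_cast<const char *>(glGetString(GL_VERSION)));
}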

stella1016    122
OK, something I forgot to mention: I am running the OS in a VM.

But I tested a 'real' installation of Ubuntu 9.10 as well. There I got an even stranger result: under one account the OpenGL version is 1.4 (my app works), under another it is 2.1 (my app doesn't work). The hardware is the same. How could this happen? Do I need to install additional libs?

Kambiz    758
Quote:
Original post by stella1016
With the NVidia drivers installed, my colleague's machine also works.

So a driver is a must for my app? What about in a VM?


VMs have limited support for hardware-accelerated graphics; VirtualBox, for example, has some DX/GL drivers for Windows (as guest OS).
OpenGL is an API for making use of the GPU, so when no hardware acceleration is available, why would you want to use OpenGL at all? Well, if needed there is Mesa, a software implementation (much slower than hardware). You could try making Qt parse the GL version string that Mesa returns.

stella1016    122
But why does this differ between versions of Ubuntu (all running in a VM)? The other OSes don't have real hardware support either, yet they work; only this Ubuntu 9.10 doesn't.

I am thinking it might be some initialization error in my app...

Kambiz    758
Quote:
Original post by stella1016
But why does this differ between versions of Ubuntu (all running in a VM)?

Maybe different versions of Mesa. What does the version string look like when it works?

V-man    813
If Qt has some issue, then update it or don't use it. I use SDL myself and check the GL version myself. GL 1.4 didn't have VBOs; VBOs were added in 1.5, 2.0 got shaders, and 2.1 got PBOs. I am citing that from memory, so check the docs if you want.

Why your code doesn't work on some Ubuntu with 2.1... I don't know. You have the source code.
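Checking the extension string is another common way to gate features independently of the version number; here is a minimal sketch using the pre-GL-3.0 extension string (with a current context):

// Sketch: look for an extension name in the (pre-3.0) extension string.
// Note: strstr() can give false positives when one extension name is a
// prefix of another; good enough for a quick check.
#include <cstring>
#include <GL/gl.h>

bool hasExtension(const char *name)
{
    const char *ext = reinterpret_cast<const char *>(glGetString(GL_EXTENSIONS));
    return ext && std::strstr(ext, name) != 0;
}

// Usage: if (hasExtension("GL_ARB_vertex_buffer_object")) { /* VBOs available */ }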

EngineCoder    246
Quote:
Original post by stella1016
But I tested a 'real' installation of Ubuntu 9.10 as well. There I got an even stranger result: under one account the OpenGL version is 1.4 (my app works), under another it is 2.1


Make sure both accounts are in the "video" group.

stella1016    122
Quote:
Original post by EngineCoder
Quote:
Original post by stella1016
But I tested a 'real' installation of Ubuntu 9.10 as well. There I got an even stranger result: under one account the OpenGL version is 1.4 (my app works), under another it is 2.1


Make sure both accounts are in the "video" group.


Sorry it has taken me so long to reply to those of you who kindly tried to help.

What do you mean by the "video" group? How can I check that?

stella1016    122
Here are some more details about my app.

My application uses the concept of Qt modules, which means it loads detected *.dll files automatically.

Among them I have two modules: module1 uses QGraphicsView (no direct GL calls), and module2 uses QGLWidget (direct use of GL functions). A flag in module1 is read from a file to decide whether OpenGL is used to accelerate rendering (I'm not sure about the details; that part was done by other people). Module2 is the module that has the problem.

On a normal OS, the display works no matter whether the flag is 0 or 1.

On Ubuntu 9.10, if the flag is 0, module1 works and module2 doesn't; if the flag is 1, module1 doesn't work and module2 works.

The only difference in module1 between the flag being on and off is this one call:

if (b_opengl_monitor)
{
    this->setViewport(new QGLWidget());
}

"this" inherits from QGraphicsView.

Now I am confused: why does this flag affect my module2?
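One thing worth checking here (a sketch, assuming Qt 4's QGLFormat/QGLWidget API): the QGLWidget that module1 installs as the viewport brings its own GL context in addition to module2's QGLWidget, so it may help to verify that each GL widget actually got a valid context before any GL calls are made:

// Sketch: sanity-check GL availability and context validity (Qt 4).
#include <QGLWidget>
#include <QDebug>

bool checkGLWidget(QGLWidget *w, const char *label)
{
    if (!QGLFormat::hasOpenGL()) {
        qWarning() << label << ": Qt reports no OpenGL support on this system";
        return false;
    }
    if (!w->isValid()) {                 // context creation failed
        qWarning() << label << ": QGLWidget has no valid GL context";
        return false;
    }
    w->makeCurrent();
    qDebug() << label << "GL_VERSION:"
             << reinterpret_cast<const char *>(glGetString(GL_VERSION));
    return true;
}

Calling this for the viewport widget created in module1 and for module2's widget (the names here are hypothetical) would show whether one of the two contexts fails only on Ubuntu 9.10.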


