OpenGL Conundrum

This topic is 1648 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.

If you intended to correct an error in the post then please contact us.


Hello,

 

I'm looking for some opinions on a conundrum I came across today.  This applies to a project whose parameters I'm in no position to change, so please don't offer suggestions along those lines.

 

The project, which has been in development for about a year, uses OpenGL 2.1 with GLSL 1.20.  Today, it was discovered that it will not run on Win 8 machines which do not have an OpenGL driver (this can happen for numerous reasons, e.g. a user installs Win 8 over Win 7 without checking for newer drivers).  In this case, Win 8 will attempt to emulate OpenGL version 1.1 through GDI.  The project specification states that it must work under this scenario.
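For what it's worth, that fallback can at least be detected at startup.  Microsoft's GDI software implementation reports GL_RENDERER as "GDI Generic" and a 1.1.x GL_VERSION, so a check on the strings returned by glGetString is enough to know you're in the emulated path.  A minimal sketch (the helper name and the decision rules are mine, not from the thread):

```cpp
#include <cassert>
#include <cstdio>
#include <string>

// Hypothetical helper: decide whether we've fallen back to Windows'
// GDI software implementation of OpenGL 1.1.  The two strings would
// normally come from glGetString(GL_RENDERER) and glGetString(GL_VERSION)
// after context creation.
bool IsSoftwareGL(const std::string& renderer, const std::string& version) {
    // Microsoft's software implementation identifies itself as "GDI Generic".
    if (renderer.find("GDI Generic") != std::string::npos)
        return true;
    // GL_VERSION begins with "<major>.<minor>"; anything below 2.0 cannot
    // run this project's GLSL 1.20 shaders.
    int major = 0, minor = 0;
    if (std::sscanf(version.c_str(), "%d.%d", &major, &minor) == 2)
        return major < 2;
    return true; // unparsable version string: assume the worst
}
```

At startup you'd call this once and pick the render path (or show a message) accordingly.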

 

So, it seems that I'm left with two options:  either create a branch that uses GL 1.1 (currently, my rendering is done in shaders, so this seems substantial) or create a DirectX branch.  Neither seems appealing, but which would be better (least evil)?  Going GL 1.1 seems faster to develop, but the results will look pretty ugly.  Is there another route that I'm missing?

 

Regards!


If the user's machine doesn't meet the requirements, why not tell them so? Being asked to download current drivers when yours are out of date is entirely expected.


You may also encounter problems with "switchable graphics" laptops, particularly those with AMD GPUs, where driver support is quite poor.

 

I have to disagree with this. I've got a laptop with a Mobility Radeon HD 5470, and the driver fully supports OpenGL 4.2 and some further extensions. Of course, the performance of e.g. tessellation is incomparable to the Radeon 6870 that I've got in my desktop, but...

 

Furthermore, the Linux drivers (I'm mainly a Linux user, but run both major OSes on my computers) support the same version of OpenGL and the same extensions.

 

EDIT: Although I can't speak for older AMD GPUs...

Edited by Vilem Otte


Thank you for the info, mhagain.

 

The spec specifically states that we cannot require changes to the user's machine to use the software, so requiring a driver update is out of the question (FWIW, we can't even recommend it).

 

Unfortunately, this project must also be portable (including GL ES), which is why I didn't want to go the D3D route.   Maintaining two code paths sounds horrible, but I think I can do the D3D "emulation" layer with a lot less effort.  Cheers!


Unfortunately, this project must also be portable (including GL ES), which is why I didn't want to go the D3D route.   Maintaining two code paths sounds horrible, but I think I can do the D3D "emulation" layer with a lot less effort.  Cheers!

 

Looks like ANGLE is the solution you want then; just use your ES2.0 path on Windows and send it through ANGLE to handle the emulation.

 

I have to disagree with this. I've got a laptop with a Mobility Radeon HD 5470, and the driver fully supports OpenGL 4.2 and some further extensions. Of course, the performance of e.g. tessellation is incomparable to the Radeon 6870 that I've got in my desktop, but...

 

Furthermore, the Linux drivers (I'm mainly a Linux user, but run both major OSes on my computers) support the same version of OpenGL and the same extensions.

 

I wouldn't dispute that it works fine for you, but it is a fact that AMD switchable-graphics drivers on Windows have historically been a massive PITA for many users, with AMD themselves not even supporting updates via their regular driver packages (see e.g. http://support.amd.com/us/kbarticles/Pages/AMDCatalystSoftwareSuiteVersion126ReleaseNotes.aspx and note the section headed "The following notebooks are not compatible with this release").  See here for more discussion.

 

It's all irrelevant anyway, as the OP has made it clear that making no changes to the user's PC is part of the requirements.


I'm unable to use ANGLE, although it looks really interesting.  If I do go the D3D emulation route, I'd have to incorporate it into our libraries.  So, I'll wait a bit longer to see if another suggestion comes up.


Sounds like your core issue is not having your render code abstracted enough.

 

Off the top of my head something like:

class IVertexBuffer { // Abstract base class
public:
    virtual ~IVertexBuffer() {}
    virtual void Bind() = 0;
    virtual void Unbind() = 0;
};

class GL2xVertexBuffer : public IVertexBuffer {
    // Contains a VAO/VBO
};

class GL1xVertexBuffer : public IVertexBuffer {
    // Contains a client-side list of vertices
};

class IIndexBuffer { // Again abstract
public:
    virtual ~IIndexBuffer() {}
    virtual void DrawCall() = 0;
};

// Subclass that draws the GL 2.x path, subclass that draws the GL 1.x path

class IMaterial { // Still abstract
public:
    bool m_bIsLit;
    Color m_colorAmbient;
    Color m_colorDiffuse;
    Color m_colorSpecular;

    virtual ~IMaterial() {}
    virtual void SetState() = 0;
    virtual void ClearState();
};

class GL2xMaterial : public IMaterial {
public:
    void SetState(); // Bind shader, bind shader properties
};

class GL1xMaterial : public IMaterial {
public:
    void SetState(); // Configure fixed-function states
};

void Render(IVertexBuffer& vertexBuffer, IIndexBuffer& indexBuffer, IMaterial& material) {
    material.SetState();
    vertexBuffer.Bind();
    indexBuffer.DrawCall();
    vertexBuffer.Unbind();
    material.ClearState();
}

This sounds like what your assignment really wants. Then you just need some utility code:

IVertexBuffer* CreateVertexBuffer() {
    if (g_bSystemSupportsGl2X)
         return new GL2xVertexBuffer();
    return new GL1xVertexBuffer();
}
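To make the idea concrete, here's a minimal, GL-free version of the same pattern: the "GL" classes are stubs that just record the calls they receive, so you can verify the factory dispatches to the right path without needing a rendering context. Everything here (the call log, the stub bodies) is illustrative, not part of any real renderer:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Records which stub methods were invoked, in order.
std::vector<std::string> g_calls;

struct IVertexBuffer {
    virtual ~IVertexBuffer() {}
    virtual void Bind() = 0;
    virtual void Unbind() = 0;
};

struct GL2xVertexBuffer : IVertexBuffer {
    void Bind()   { g_calls.push_back("gl2x-bind"); }   // would bind a VBO/VAO here
    void Unbind() { g_calls.push_back("gl2x-unbind"); }
};

struct GL1xVertexBuffer : IVertexBuffer {
    void Bind()   { g_calls.push_back("gl1x-bind"); }   // would set glVertexPointer etc.
    void Unbind() { g_calls.push_back("gl1x-unbind"); }
};

bool g_bSystemSupportsGl2X = false; // decided once at startup from a capability check

IVertexBuffer* CreateVertexBuffer() {
    return g_bSystemSupportsGl2X
        ? static_cast<IVertexBuffer*>(new GL2xVertexBuffer())
        : static_cast<IVertexBuffer*>(new GL1xVertexBuffer());
}
```

The point of the design is that the calling code never mentions a GL version; only the factory and the concrete classes know which path is active.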


 

Sounds like your core issue is not having your render code abstracted enough.

 

That's not going to solve the problem of a Windows PC without a proper OpenGL driver going through software emulation and getting 1 fps, though.

 

Yes, one could use such an abstraction to include a D3D layer relatively easily, but there are complexities that - if you come at it primarily with a GL hat on - are going to cause you trouble:

  • No requirement for "bind to modify" in D3D.
  • Shader uniforms are per-program object in GL, global-and-shared in D3D.
  • Differences between D3D "streams" and GL "attribs".
  • Hardware capabilities that D3D enforces but GL emulates in software if absent.
  • Etc etc etc.

@OP - is the reason for not being able to use ANGLE the BSD license?  TitaniumGL is another option that's freeware but only supports GL 1.4; no shaders, but at least better than software-emulated 1.1 (I've little experience of it so can't vouch for its quality).


Nothing is going to solve the problem of not having proper drivers. Nothing.

 

From the original post I gather that the game must work in an emulated OpenGL 1.1 environment (hence no ANGLE). There was no mention of it needing to be performant.

 

 

It honestly sounds to me like the idea of the assignment is to implement multiple render paths, which is what the poster seems to want to avoid.

 

Could be wrong though...

Mesa (http://www.mesa3d.org/) should provide horribly slow but advanced software rendering, allowing you to meet the UNREASONABLE requirement of not installing drivers without abandoning OpenGL 2.1 (which is already highly obsolete).

This advice is only a fallback from my real advice, which is running away from such a project/team/company and letting your (former) manager deal with self-imposed trouble.


@OP - is the reason for not being able to use ANGLE the BSD license?  TitaniumGL is another option that's freeware but only supports GL 1.4; no shaders, but at least better than software-emulated 1.1 (I've little experience of it so can't vouch for its quality).

 

 

It has to do with more requirements, but that may be based on legal; I really don't know.  Thanks for the suggestions though!


 


@OP - is the reason for not being able to use ANGLE the BSD license?  TitaniumGL is another option that's freeware but only supports GL 1.4; no shaders, but at least better than software-emulated 1.1 (I've little experience of it so can't vouch for its quality).

 

 

It has to do with more requirements, but that may be based on legal; I really don't know.  Thanks for the suggestions though!

 

 

Couldn't you statically link to ANGLE for your Windows build?


Just an update for anyone reading this in the future:  I implemented a GL 1.1 render path, and the results were ~10 fps @ 1080p and ~35 fps @ 1280x1024, compared to ~80 fps and ~100 fps when rendering with non-emulated GL 2.1.

That's bad, but not terrible. It is a shame the parameters of the project are so strict. There are easier ways...


