Universal OpenGL Version

I'm developing a game with OpenGL. Which OpenGL version would have no problem running on most computers, in terms of both hardware and platform (PC/Mac/Linux)?

Some say 2.x, because it's older and everyone can run older versions with whatever hardware they might have. But wouldn't 3.x be much better in terms of performance, better tools, and cool effects with the programmable pipeline?

I'm a bit lost on this subject. I've heard that Mac can't even go beyond 3.2, and what about Linux?
Any feedback would be helpful, thanks :)

There is a tradeoff between features and audience size. Increasing the minimum system requirements gives you greater abilities but potentially decreases your audience size. What is more important to you, graphics fidelity or the broadest possible audience? If it's the latter, go with OpenGL 1.1; if it's the former, go with OpenGL 4.3; if it's somewhere in between... nobody can tell you what's best for your game. Are you making a FarmVille or are you making a Crysis? What features do you feel you need to reach your artistic goals? Picking the minimum spec that gives you what you need is probably the best option.

Edited by Chris_F

I've seen quite a few projects start like this. "Which is the best OGL for compatibility?" And they land on OpenGL 2.1.

 

It's true that pretty much anything you grab will support OGL 2.1 (grab a can off the street, it supports 2.1). The thing is, I've seen projects like this go on for years. OGL 2.1 was a good idea compatibility-wise 3 or 4 years ago, but today? By the time their game hits the street? Not so much.

 

So I'd pick something from OpenGL 3.0 upwards.

I've worked on a commercial OpenGL game for several years, and most of my work was in the graphics part of the code.  Speaking from experience, most of the problems we ran into were due to people not having up-to-date OpenGL drivers installed.

 

Most people (not most hard-core gamers, but most casual and non-gamers) have integrated graphics solutions (integrated Intel or mobile AMD/NVidia in a laptop) and rarely or never update their drivers after they first get their machine.  It works well enough for them to surf the web, e-mail, and do their work (editing Word/Excel/PowerPoint docs), so they never have an urgent need to update their video drivers.  Also, many of them feel that updating drivers is a difficult thing to do (too technical for them) and are afraid that they will mess up their system.

 

In addition, the OpenGL support of integrated video chipsets is not necessarily the best to begin with.  And Intel/AMD/NVidia do not provide updates for their older integrated video chipsets, which are still in use by many people.  So some of these people were stuck with older drivers with known OpenGL bugs.

 

In reality, there are a lot more games that use DirectX than use OpenGL (easily 10 to 1 ratio).  So, Intel/AMD/NVidia have not had too much incentive to keep the quality of their OpenGL drivers on par with the quality of their DirectX driver.  But, the quality of the OpenGL drivers in the past few years has greatly improved.

 

So, the good news is that the quality of OpenGL drivers is improving.  The bad news is that a lot of people are still using (or stuck with) older, buggy OpenGL drivers.

Also, many of them feel that updating drivers is a difficult thing to do (too technical for them) and are afraid that they will mess up their system.

Honestly, this has mainly to do with one of the most common recommendations to install drivers, which is to go to safe mode, uninstall the old driver and install the new one. You don't have to, but it isn't hard to see where the issue lies. Besides, people think that if it already works as-is it's probably fine, not realizing that old drivers may be leaving features unused (e.g. the drivers bundled with the GeForce 7 use OpenGL 2.0, but the newest drivers provide OpenGL 2.1).
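
As a rough illustration of the kind of startup check this implies, here is a minimal sketch (my addition, assuming a GL context has already been created); glGetString() reports whatever the installed driver provides, so this is a cheap way to tell users "update your drivers" instead of failing mysteriously later:

// Minimal sketch, assuming a GL context is current. Only GL 1.1 calls are used.
#include <cstdio>
#include <cstdlib>
#include <GL/gl.h>   // on Windows, include <windows.h> before this header

void RequireGLVersion(int needMajor, int needMinor)
{
    const char* version  = reinterpret_cast<const char*>(glGetString(GL_VERSION));   // e.g. "2.1.2 NVIDIA 337.88"
    const char* renderer = reinterpret_cast<const char*>(glGetString(GL_RENDERER));
    int major = 0, minor = 0;

    if (!version || std::sscanf(version, "%d.%d", &major, &minor) != 2) {
        std::fprintf(stderr, "Could not query the OpenGL version.\n");
        std::exit(EXIT_FAILURE);
    }
    if (major < needMajor || (major == needMajor && minor < needMinor)) {
        std::fprintf(stderr,
            "This game needs OpenGL %d.%d, but '%s' only provides %s.\n"
            "Please update your graphics drivers.\n",
            needMajor, needMinor, renderer ? renderer : "unknown", version);
        std::exit(EXIT_FAILURE);
    }
}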

 

 

In reality, there are a lot more games that use DirectX than use OpenGL (easily 10 to 1 ratio).  So, Intel/AMD/NVidia have not had too much incentive to keep the quality of their OpenGL drivers on par with the quality of their DirectX driver.  But, the quality of the OpenGL drivers in the past few years has greatly improved.

It's a chicken-and-egg situation: if nobody uses OpenGL, there's no incentive to improve its support, which in turn means nobody wants to use it, and... well, it's a self-feedback loop. I think id is pretty much the only reason it didn't die completely. At least OpenGL 3 seems to have gotten all vendors back into OpenGL, just because apparently it had enough of a reputation to make lack of support look stupid (maybe the backlash when Vista was implied to lack OpenGL support was a hint, even if that turned out to be false later).

 

 

The bad news is that a lot of people are still using (or stuck with) older, buggy OpenGL drivers.

I wouldn't expect those to care about gaming anyway ^^; (or to have something that supports anything newer than 1.1 for that very reason...)

 

EDIT: basically, if you care about people with old systems (especially people in e.g. developing countries, where hardware can be quite expensive), OpenGL 2 may be a good compromise. If you expect some decent hardware, OpenGL 3 would be better. I'd say OpenGL 4 is better treated as optional for now, unless you really need the most powerful hardware (i.e. support it if you want, but don't assume it'll be very common).

 

If somebody is stuck with OpenGL 1, they're most likely not the kind of customer you'd want to bother targeting anyway... Either their hardware is so weak that it will bog down without much effort, or they're the kind of people who'd rather stick to browser games (if they play games at all).

Edited by Sik_the_hedgehog

The bad news is that a lot of people are still using (or stuck with) older, buggy OpenGL drivers.

I wouldn't expect those to care about gaming anyway ^^; (or to have something that supports anything newer than 1.1 for that very reason...)

 

You would be surprised.  Check out the Steam or Bethesda forums for Rage - there were an awful lot of so-called "hardcore gamers" who had issues because they never bothered updating their drivers, not to mention an awful lot more who had issues because they were randomly copying individual DLL files all over their systems without any clear knowledge of what they were doing.  (It's also a good example of how poor driver support can mess up a game.)

Many thanks for the feedback everyone.

Turns out this is the ugly part of game dev. Hopefully pumping up the system requirements and adding some proper error handling will make people aware of what they need.

I'm targeting people with decent computers, something that can render 3D graphics with post-processing at a playable FPS. I really REALLY want to avoid the old pipeline; it just seems dirty. Do any newer AAA games even use the old pipeline these days?

 

For example, I'm interested to know which versions of OpenGL Valve uses for their games on Mac.

 

And I'll probably just end up going with 3.2; it seems to be the better choice.

Theoretically you could also program in a relatively modern style with VBOs and shaders even with 2.1, if you accept a few quirks and don't need all the new features (see the sketch after this post).

If you can accept that people with weak onboard chips won't get to play, then 3.x should be fine.
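
A minimal sketch of what that 2.1-era "modern style" looks like (my addition; it assumes a 2.1 context and some loader such as GLEW to expose the GL 2.0 entry points, with error checking trimmed for brevity):

// "Modern style" rendering that stays within OpenGL 2.1: a VBO for the geometry
// and a GLSL 1.20 shader pair, no fixed-function state.
#include <GL/glew.h>

static const char* kVS =
    "#version 120\n"
    "attribute vec2 a_pos;\n"
    "void main() { gl_Position = vec4(a_pos, 0.0, 1.0); }\n";

static const char* kFS =
    "#version 120\n"
    "void main() { gl_FragColor = vec4(1.0, 0.5, 0.2, 1.0); }\n";

static GLuint Compile(GLenum type, const char* src)
{
    GLuint shader = glCreateShader(type);
    glShaderSource(shader, 1, &src, 0);
    glCompileShader(shader);            // real code would check GL_COMPILE_STATUS
    return shader;
}

// Call once at load time; returns the program and writes the VBO name.
GLuint CreateTriangle(GLuint* outVbo)
{
    static const GLfloat verts[] = { -0.5f, -0.5f,  0.5f, -0.5f,  0.0f, 0.5f };

    GLuint program = glCreateProgram();
    glAttachShader(program, Compile(GL_VERTEX_SHADER, kVS));
    glAttachShader(program, Compile(GL_FRAGMENT_SHADER, kFS));
    glLinkProgram(program);             // real code would check GL_LINK_STATUS

    glGenBuffers(1, outVbo);            // VBOs have been core since GL 1.5
    glBindBuffer(GL_ARRAY_BUFFER, *outVbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);
    return program;
}

// Call every frame.
void DrawTriangle(GLuint program, GLuint vbo)
{
    glUseProgram(program);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    GLint pos = glGetAttribLocation(program, "a_pos");
    glEnableVertexAttribArray(pos);
    glVertexAttribPointer(pos, 2, GL_FLOAT, GL_FALSE, 0, 0);
    glDrawArrays(GL_TRIANGLES, 0, 3);
}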

I'm targeting people with decent computers, something that can render 3D graphics with post-processing at a playable FPS....

 

In that case go for 3.x - it's all achievable with earlier versions for sure, but you'll have a much nicer time using 3.x.

 

On one project I was involved in until about this time last year (where I initially thought I was being brought in just to optimize the renderer), one of the leads was absolutely insistent on the "what about older hardware?" line, yet was also pushing very heavily for lots of post-processing, lots of complex geometry, lots of real-time dynamic lighting, etc.  I ended up with an insane mixture of core GL 1.4 with a software wrapper around VBOs, ARB assembly programs, glCopyTexSubImage2D, multiple codepaths for everything, and an edifice so fragile that I was terrified of even bugfixing it (the fact that it was built on an originally GL 1.1 codebase that had been fairly crankily and inflexibly maintained up to that point didn't help).  It was a nightmare - I walked out one day without saying a word and just didn't come back.

 

It's just not worth going down that route - you'll only burn yourself out.  So either dial back the ambitions and use an earlier version, or else keep the ambitions and use the most recent sane version.  But don't try to mix the two.

Edited by mhagain

Hi,

 

I have what I believe is a relevant question here, which I'm actually dealing with in a job project: making a 2D game with jMonkey that can run through OpenGL on WinXP or higher.

 

OpenGL 2.1, which my jMonkey installation uses, is my heavy favorite for WinXP-or-higher compatibility.  I don't need any advanced OpenGL features.  Am I on the right track?

 

Where can I get information on what version of OpenGL ships with WinXP, Vista, Win7, and Win8?  (Really I'm only interested in WinXP, to meet the minimum OpenGL requirements.)

 

:)

Edited by 3Ddreamer

All versions of Windows ship with OpenGL 1.1 (with a small handful of extensions), but this is a software-emulated OpenGL.  The key thing here is that OpenGL isn't part of the operating system, so it doesn't really make sense to talk about "what version of OpenGL ships with Windows".  OpenGL is implemented in your 3D card's driver, so it's shipped by the 3D hardware vendor.
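
To make that concrete, here is a small sketch (my addition, not from the post) of how a game can detect that it ended up on Windows' built-in software implementation rather than a vendor driver; the "GDI Generic" / "Microsoft Corporation" strings are what that fallback reports:

// Sketch: detect Windows' built-in software OpenGL 1.1 fallback.
// Assumes a context is current; only GL 1.1 calls are used.
#include <cstring>
#include <GL/gl.h>   // on Windows, include <windows.h> before this header

bool UsingSoftwareOpenGL()
{
    const char* vendor   = reinterpret_cast<const char*>(glGetString(GL_VENDOR));
    const char* renderer = reinterpret_cast<const char*>(glGetString(GL_RENDERER));

    if (!vendor || !renderer)
        return true;   // no usable context; treat as unsupported

    // With no vendor driver installed, Windows reports its own GL 1.1 rasterizer.
    return std::strcmp(vendor, "Microsoft Corporation") == 0 &&
           std::strstr(renderer, "GDI Generic") != 0;
}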

Share this post


Link to post
Share on other sites

I recommend OpenGL 3.3 (or 3.2 if you want to target Mac as well) for two reasons. The first reason is a technical one, the second is an economic one.

 

OpenGL 2.x quickly becomes a real nightmare unless you only ever do the most puny stuff. You have barely any guarantees of what is supported, and many things must be implemented using extensions. Sometimes there are different ARB and EXT extensions (and vendor extensions), all of which you must consider, because none of them is supported everywhere. Usually they have some kind of "common functionality" that you can figure out, but sometimes they behave considerably differently, so you must write entirely different code paths for each. Some functionality in the spec (and in extensions) is deliberately worded in a misleading way, too. For example, you can have a card that supports multiple render targets, with a maximum of one. You can have a card that supports vertex texture fetch with zero fetches.

Most things are obscure or loosely defined; for example, you have no guarantee that textures larger than 256x256 are supported at all (you must query to be sure, but what do you do in the worst case?).
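
To illustrate the kind of querying this forces on you, a small sketch (my addition, assuming a GL 2.x context and a loader that exposes the 2.0 enums):

// Sketch: on GL 2.x these queries are the only way to learn what you really got --
// and "one render target" or "zero vertex texture units" are legal answers.
#include <cstdio>
#include <GL/glew.h>

void PrintGL2Limits()
{
    GLint maxDrawBuffers = 1, maxVertexTextures = 0, maxTextureSize = 0;

    glGetIntegerv(GL_MAX_DRAW_BUFFERS, &maxDrawBuffers);                    // MRT count, may be 1
    glGetIntegerv(GL_MAX_VERTEX_TEXTURE_IMAGE_UNITS, &maxVertexTextures);   // may legally be 0
    glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxTextureSize);                    // guaranteed minimum is small

    std::printf("MRT targets: %d, vertex texture units: %d, max texture size: %d\n",
                maxDrawBuffers, maxVertexTextures, maxTextureSize);
}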

 

Contrast that with OpenGL 3.3, where almost everything you will likely need (except tessellation and atomic counters, really) is guaranteed as core functionality. You have guaranteed minimum specs that must be supported. For almost everyone, these guaranteed minimums are good enough that you never have to think about them. A shader language that just works. No guessing.
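
For contrast, requesting a 3.3 core context up front gives you a hard yes/no answer at startup. A sketch follows, using GLFW 3 as the windowing library (my choice for illustration, not the poster's):

// Sketch: ask for a 3.3 core profile context; if the driver can't provide it,
// window creation fails and the user can be told to update immediately.
#include <cstdio>
#include <GLFW/glfw3.h>

GLFWwindow* CreateGL33Window()
{
    if (!glfwInit())
        return 0;

    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
    // A Mac build would request 3.2 instead and set GLFW_OPENGL_FORWARD_COMPAT.

    GLFWwindow* window = glfwCreateWindow(1280, 720, "Game", 0, 0);
    if (!window)
        std::fprintf(stderr, "OpenGL 3.3 core profile not available -- "
                             "please update your graphics drivers.\n");
    return window;
}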

 

The economic reason why I would not go anywhere below GL3 is that it rules out people that you likely do not want as customers (I wouldn't want them anyway!). GL3-compatible cards have been around $20 for about 5 years, and integrated chips support GL3 in the meantime as well (of course, Intel was never a role model in OpenGL support, but most stuff kind of works most of the time now). If someone cannot or does not want to spend $20 on a graphics card, it's unlikely they will pay you either. They'll probably only pirate your stuff. Why should you burden yourself running after someone who you know isn't going to pay you?

 

About outdated drivers, my stance is that typing "nvidia driver" into Google and clicking "yes, install please" is not too much of a technical challenge. My mother can do that. People who are unable (or unwilling) to do this are likely also people that you do not want as customers. Dealing with people who cannot type two words and do two mouse clicks is a customer-service nightmare. They cannot possibly pay you enough money to make up for that.

I think the sweet spot is 3.3, but since Mac only supports up to 3.2 it leaves them out.  3.3 is akin to 4.x but targets DX10-class cards, so it's a modern version that still covers legacy hardware.

Well, Macs are about 1/4 of the target market according to the research in my case.  So it seems that implementing up to 3.2, plus a notification message telling the user to update OpenGL if needed, will be in order.

 

I have no idea yet how to jump from the default 2.1 to 3.2 with jMonkey, but I'm sure the community there has a method, likely done at the tool level (having the development software updated for OpenGL 3.2).

 

Thanks! :)

You could always take a look at the Steam Hardware Survey. They don't directly check OpenGL support (not sure why not), but you can at least get an idea of which cards gamers are using by checking the video card results. That might give you a bit of insight anyway.

For example, you can have a card that supports multiple render targets, with a maximum of one. You can have a card that supports vertex texture fetch with zero fetches.

ATI pulled that one on D3D9c as well -- to be D3D9c compliant, you have to support MRT and VTF, so ATI returned true for those caps, but then also returned 1 for the MRT max count, and returned false for every texture format when querying if it was a VTF-capable format... :(
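
For reference, a sketch (mine, not Hodgman's code) of roughly what those D3D9 checks look like; the caps bit alone isn't enough, you have to look at NumSimultaneousRTs and query each format for vertex-texture usage:

// Sketch: the D3D9 caps can say "supported" while the numbers say otherwise.
#include <d3d9.h>

bool MrtAndVtfUsable(IDirect3D9* d3d, IDirect3DDevice9* device)
{
    D3DCAPS9 caps;
    device->GetDeviceCaps(&caps);

    const bool mrtUsable = caps.NumSimultaneousRTs > 1;   // can legally be 1

    // Can R32F be sampled from the vertex shader? On some hardware every
    // format answers "no" even though the device claims support.
    const bool vtfUsable = SUCCEEDED(d3d->CheckDeviceFormat(
        D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, D3DFMT_X8R8G8B8,
        D3DUSAGE_QUERY_VERTEXTEXTURE, D3DRTYPE_TEXTURE, D3DFMT_R32F));

    return mrtUsable && vtfUsable;
}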

For example, you can have a card that supports multiple render targets, with a maximum of one. You can have a card that supports vertex texture fetch with zero fetches.

ATI pulled that one on D3D9c as well -- to be D3D9c compliant, you have to support MRT and VTF, so ATI returned true for those caps, but then also returned 1 for the MRT max count, and returned false for every texture format when querying if it was a VTF-capable format... :(

 

GL_ARB_occlusion_query allows the query counter bits to be 0 - what's worse is that this was a deliberate decision by the ARB, made to allow vendors that don't support occlusion queries to claim GL 1.5 support; see http://www.opengl.org/archives/about/arb/meeting_notes/notes/meeting_note_2003-06-10.html for more info on that one.
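
In code form, the check that implies looks something like this (a sketch of mine, assuming GL 1.5 or the ARB extension is available through your loader):

// Sketch: GL 1.5 permits an implementation to report zero counter bits for
// occlusion queries, i.e. "supported" on paper but useless in practice.
#include <GL/glew.h>

bool OcclusionQueriesUsable()
{
    GLint counterBits = 0;
    glGetQueryiv(GL_SAMPLES_PASSED, GL_QUERY_COUNTER_BITS, &counterBits);
    return counterBits > 0;
}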

I remember reading an nVidia employee's response on OpenGL.org to a poster's annoyance that the noise function was always returning 0. The response was (and I'm paraphrasing) "the specs state to return a number in the range [0,1], therefore returning 0 conforms to the spec".

Sorry if this is somewhat off-topic, but the three posts above mine (especially GeneralQuery's mention of NVIDIA) remind me of the time when I was porting some code from Direct3D 8 to Direct3D 9 from an older version of the NVSDK (5.21).  I was particularly interested in the bump-refraction demo submitted from Japan, which used a proprietary texture format, "NVHS".  I never found anywhere in NVIDIA's documentation that this texture format was only supported on the GeForce 3 and 4 Ti series GPUs, so I was getting upset that I couldn't get the feature to work on my 8400 GS M.  I assumed it was just a problem with my drivers, but to be sure I asked some other people to verify whether it worked on their machines. It turns out that when checking the device caps, the driver claims the texture format is supported on all NVIDIA cards, but creating a texture with that format would always fail unless your GPU was from the NV2x series.

 

I tried to warn NVIDIA of this driver bug, but to no avail.  It's not too relevant now since no one used that format (Q8W8V8U8 was more compatible anyway), and DirectX 9 is slowly but surely dying either way.
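
The practical takeaway, sketched below (my addition, with the format left as a parameter since the exact format details aren't in the post), is to not trust the caps alone but to actually attempt the resource creation and fall back if it fails:

// Sketch: the caps/format query may say yes, so also attempt the real creation
// and fall back to a more widely supported format if it fails.
#include <d3d9.h>

IDirect3DTexture9* CreateTextureOrFallback(IDirect3DDevice9* device,
                                           UINT width, UINT height,
                                           D3DFORMAT preferred, D3DFORMAT fallback)
{
    IDirect3DTexture9* texture = 0;
    if (FAILED(device->CreateTexture(width, height, 1, 0, preferred,
                                     D3DPOOL_MANAGED, &texture, 0)))
    {
        // The driver advertised the format but can't actually create it.
        if (FAILED(device->CreateTexture(width, height, 1, 0, fallback,
                                         D3DPOOL_MANAGED, &texture, 0)))
            return 0;
    }
    return texture;
}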

It may be veering off-topic, but all of this does serve to highlight one key point that is relevant to the original post.  That is: vendor shenanigans are quite widespread, and no matter which API (or which version of an API) you choose, you do still have to tread a little carefully.

Edited by mhagain
