OpenGL Should I wait for OpenGL NG?


I have previous experience with writing shaders and rendering but I never really learned OpenGL. I have recently decided to start working on my own graphics project, hopefully being able to expand that into something bigger if my ideas work out, but I'm not sure if it'd be worth it to start right now.

 

Just to give some info on what I'd like to do: I would like to generate most of my content on the GPU, which means I'd be using GPGPU techniques and probably compute shaders, with very modern (4.3+) OpenGL at the minimum. I'm targeting high framerates with very low latency because I'd like to get things working well on an Oculus Rift at some point, so I'm assuming that modern topics on reducing driver overhead and latency would also be relevant.

 

My dilemma right now is that I'm not sure it would be wise, or even worth the effort (and pain), to learn the 800+ page core specification, dig around to figure out the current best practices for AZDO and GPGPU (and cross my fingers that any of it is actually supported properly), all while there's a completely new specification on the horizon that is supposed to address these things.

 

Any thoughts and resources would be helpful.


What exactly does OpenGL NG offer that you are going to take advantage of that is not currently available in the latest version (4.5 as of this post)? There is always going to be driver overhead unless you have direct access to the hardware, and most of the techniques to lower driver overhead and whatnot are available in the latest GL version. My silly advice to you is to learn the API first instead of waiting for features that you may or may not use.


I'll qualify this post first by stating that I know very little of OpenGL NG.  That said, I think investing in OpenGL 4.3+ at this time is still a good move. 

 

 

First, it sounds like GLSL syntax may not change much. We're getting a new standardized intermediate format, which is long overdue, but your shaders may require little (if any) modification for the new version.

 

Second, learning how to reduce OpenGL driver overhead means learning about how the hardware works.  This will serve you well regardless of API, and using these techniques you should be able to get performance pretty close to what a lower level API would get you.  Plus you should have the advantage of wider hardware compatibility.

 

Third, you know that claiming you would need to learn an 800+ page specification is a gross exaggeration, right?  There's a ton of stuff you won't need for your application, even in core profile.  Also, I know it looks like there are a lot of functions, but remember that a ton of those are OpenGL's version of function overloads.  The "real" API isn't that scary.

 

For my use I have my GL code wrapped up in a (simplified) D3D11-style device/context interface so they look identical to my renderer.  This helps break the API usage into nice bite-sized chunks that are easy to understand.  I'm also using hard-coded shaders to avoid HLSL/GLSL translation issues at the cost of some flexibility.

 

Finally, I noticed UE4 basically wraps iOS 8's low-level Metal API in a higher-level manager that makes it behave a bit more like D3D11, sitting just under the RHI, and they're still getting huge performance gains.


What exactly does OpenGL NG offer that you are going to take advantage of that is not currently available in the latest version (4.5 as of this post)? There is always going to be driver overhead unless you have direct access to the hardware, and most of the techniques to lower driver overhead and whatnot are available in the latest GL version. My silly advice to you is to learn the API first instead of waiting for features that you may or may not use.

 

There really isn't a lot of information out right now, but the next generation OpenGL initiative is a ground-up redesign that [paraphrasing] aims to streamline the API for easier use and implementation, act to unify and bring modern GPU access to all platforms, provide a standard intermediate shader format for greater portability of shader programs, provide enhanced conformance testing methodology, provide explicit application control over GPU and CPU workloads, be multithreading friendly, greatly reduce CPU overhead, and provide full support for tiled and direct renderers.

 

It will not be backwards compatible with the old specification, which has been bulking up for over 20 years, and that's really what I'm trying to ask - is it worth learning at this point? I've read a lot of criticisms of the current specification (see: http://www.joshbarczak.com/blog/?p=154 and http://richg42.blogspot.com/2014/05/things-that-drive-me-nuts-about-opengl.html ), and they're not alone it seems.

 

As someone who wants to work primarily with the modern parts of OpenGL (for both computing and managing lots of data asynchronously), it's difficult for me to parse what's an old part of the specification, what's newer and a better practice, which examples use pure legacy code and should be avoided, and which extensions are available and appropriate to use. The books that I can find only go up to OpenGL 4.3, but there have been some major additions to the core profile in the past few years (direct state access, buffer placement control, efficient asynchronous queries, efficient multiple object binding, flush control...) that I could see causing significant changes to how someone manages their project. I've been reading through the sixth edition of the OpenGL SuperBible, one of the most up-to-date references I could find for getting started (published last year), and it's still hard to trust the format that's provided.

 

I understand that I wouldn't _have_ to read the entire 800+ page specification, but at the same time I sort of would. At the end of the day you don't know what you don't know, and with something that's been around as long as OpenGL has, and that is as fast-moving a target, it's difficult to trust what you're doing without going through the entire spec first. If anyone knows a trustworthy, up-to-date, and well-informed source for learning, including the current best practices, that would be super helpful.

GLNG is a complete unknown at this point; it could be out next year, or in five years.
If you're ok with attaching your project to that kind of waiting game, then sure, wait...

You say you don't know GL - do you know any other graphics APIs?
Graphics APIs are a lot like programming languages - learning your first is hard, but learning new ones after that is easy.
If you haven't learned one before, then jump into GL or D3D11 now, so that when GLNG actually exists you'll be able to pick it up quickly.


It will not be backwards compatible with the old specification, which has been bulking up for over 20 years, and that's really what I'm trying to ask - is it worth learning at this point?

You may not have been around the last time they tried to do this, but [spoiler alert] it failed spectacularly.

 

I have slightly higher hopes this time around, but I wouldn't exactly hold my breath...


@Hodgman No, I don't know any graphics API at the moment; my experience is mostly limited to shading languages like GLSL, some math and theory behind light and rendering, reading papers, and things like that.

 

I'd love an API to mess with and try implementing ideas; I just wanted some guidance before digging too deep and regretting it. But I think you're right about getting over the first one and the future ones being easier to pick up.

 

If that's the case, though, I'm not sure I should necessarily start with the current OpenGL. What would be a great "baby's first modern graphics API"?

 

edit: @swiftcoder

 

This is different from the 3.0 stuff, which had legacy support. I believe the new API will represent a clean break from the current OpenGL, and they appear to have the major companies and hardware support behind it.

Edited by tetron


If that's the case, though, I'm not sure I should necessarily start with the current OpenGL. What would be a great "baby's first modern graphics API"?

I'd personally recommend D3D11 over GL (and "Practical Rendering and Computation with Direct3D 11" as a reference book)... but if you already have the GL SuperBible then you may as well give it a shot.
 

This is different from the 3.0 stuff, which had legacy support. I believe the new API will represent a clean break from the current OpenGL, and they appear to have the major companies and hardware support behind it.

GL 3 (aka GL Longs Peak) was supposed to be a clean break from OpenGL, throwing out backwards compatibility and a decade of cruft so that the new API would correctly map to the GPUs of the time... but they did a backflip late in the process and released GL 3 instead...
So going by history, GL NG might still end up being cancelled and just be released as GL 5.x

 

It's unlikely for them to backflip again, as D3D12 / Mantle are already going down this same path (clean break, new API based around modern GPU concepts, minimal abstraction) creating a good bit of pressure on Khronos to actually succeed this time.


You have the choice between, on the one hand, an API that is readily available and works (with some annoying quirks, but it does work) just fine for thousands of applications, including high-performance ones, and on the other, an API that may be finalized in 6 months or a year, that may still change, and that will not be well supported until a long time in the future.

 

OpenGL as it is may not be perfect, but it is still very capable and (some quirks and personal feelings aside) entirely sufficient for what 95% of all people are doing.

 

So it is somewhat nonsensical to even think about waiting for NG. Produce now, on an API that works - not maybe in a year, on something that may not even exist.


OpenGL has a past record of two failed redesigns: OpenGL 2.0, and Longs Peak, which was replaced by OpenGL 3.0.

 

Even if the OpenGL redesign succeeds this time, we don't know for sure how much time it's going to take - a year, or maybe more.

 

And yes, it is worth learning OpenGL 4.3+ this time, because I believe even the redesigned API won't completely abandon modern OpenGL. When you migrate to OpenGL NG in the future it will be beneficial for you, and I suppose the migration will also be facilitated, because there are lots of projects running on OpenGL 4.3+.

Edited by ammar26
