OpenGL 4.0

MilfredCubicleX

It looks like the spec for OpenGL 4.0 has come out: http://www.opengl.org/registry/. Reading through it, the most notable change appears to be the addition of a "tessellation control processor" and a "tessellation evaluation processor". From the GLSL spec:
Quote:
The tessellation control processor is a programmable unit that operates on a patch of incoming vertices and their associated data, emitting a new output patch. Compilation units written in the OpenGL Shading Language to run on this processor are called tessellation control shaders. When a complete set of tessellation control shaders are compiled and linked, they result in a tessellation control shader executable that runs on the tessellation control processor. The tessellation control shader is invoked for each vertex of the output patch. Each invocation can read the attributes of any vertex in the input or output patches, but can only write per-vertex attributes for the corresponding output patch vertex. The shader invocations collectively produce a set of per-patch attributes for the output patch. After all tessellation control shader invocations have completed, the output vertices and per-patch attributes are assembled to form a patch to be used by subsequent pipeline stages. Tessellation control shader invocations run mostly independently, with undefined relative execution order. However, the built-in function barrier() can be used to control execution order by synchronizing invocations, effectively dividing tessellation control shader execution into a set of phases. Tessellation control shaders will get undefined results if one invocation reads a per-vertex or per-patch attribute written by another invocation at any point during the same phase, or if two invocations attempt to write different values to the same per-patch output in a single phase.

The tessellation evaluation processor is a programmable unit that evaluates the position and other attributes of a vertex generated by the tessellation primitive generator, using a patch of incoming vertices and their associated data. Compilation units written in the OpenGL Shading Language to run on this processor are called tessellation evaluation shaders. When a complete set of tessellation evaluation shaders are compiled and linked, they result in a tessellation evaluation shader executable that runs on the tessellation evaluation processor. Each invocation of the tessellation evaluation executable computes the position and attributes of a single vertex generated by the tessellation primitive generator. The executable can read the attributes of any vertex in the input patch, plus the tessellation coordinate, which is the relative location of the vertex in the primitive being tessellated. The executable writes the position and other attributes of the vertex.
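For concreteness, driving these new stages from the API side looks roughly like this (a sketch only; program, vao, and vertexCount are placeholder names for state assumed to be set up elsewhere):

    /* Sketch: feeding the GL 4.0 tessellation stages. The program is
     * assumed to link vertex, tessellation control, tessellation
     * evaluation, and fragment shaders. */
    glUseProgram(program);
    glPatchParameteri(GL_PATCH_VERTICES, 3);   /* 3 control points per patch */
    glBindVertexArray(vao);
    glDrawArrays(GL_PATCHES, 0, vertexCount);  /* patches, not triangles */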
I'm not sure that I see how this would be very useful. Any thoughts?

swiftcoder
Most of the improvements in GL 4.0 are incremental. Apart from the obvious inclusion of tessellation support (which is perhaps the most important DX 11 feature), it includes:

- Transform Feedback Objects
- Sampler Objects (see the sketch below)
- Cube Map Texture Arrays
- Block Sampling (textureGather)
- Multisample support in fragment shaders

And undoubtedly other features I have missed on a cursory inspection.
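For the curious, sampler objects decouple filtering and wrapping state from the texture itself, so the same texture can be sampled in different ways on different units. A minimal sketch (texture creation omitted; linearSampler is just an illustrative name):

    /* Create a sampler object and give it its own filtering state. */
    GLuint linearSampler;
    glGenSamplers(1, &linearSampler);
    glSamplerParameteri(linearSampler, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    glSamplerParameteri(linearSampler, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glSamplerParameteri(linearSampler, GL_TEXTURE_WRAP_S, GL_REPEAT);
    /* Overrides whatever sampling state the texture bound to unit 0 carries: */
    glBindSampler(0, linearSampler);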

karwosts
Quote:
I'm not sure that I see how this would be very useful. Any thoughts?

Tessellation is a really cool feature: it allows for dynamic LOD using only one source mesh and one displacement map.

Check out this video around 1:30; it's an NVIDIA guy describing exactly what you can get with tessellation, with a good visual explanation. This is one of the big features of DX11.

YouTube Tessellation Video
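To make the displacement idea concrete, it boils down to a tessellation evaluation shader that pushes each generated vertex along its normal by a height sampled from the displacement map. A rough sketch of such a shader, as it might be embedded in C (all names here are illustrative, not taken from the video):

    /* Hypothetical GL 4.0 tessellation evaluation shader: displace
     * tessellated vertices along the interpolated normal by a height map. */
    static const char *tes_source =
        "#version 400 core\n"
        "layout(triangles, equal_spacing, ccw) in;\n"
        "uniform sampler2D heightMap;\n"
        "uniform mat4 mvp;\n"
        "uniform float displaceScale;\n"
        "in vec3 tcNormal[];\n"
        "in vec2 tcTexCoord[];\n"
        "void main() {\n"
        "    /* gl_TessCoord holds barycentric weights for the triangle patch */\n"
        "    vec4 pos = gl_TessCoord.x * gl_in[0].gl_Position\n"
        "             + gl_TessCoord.y * gl_in[1].gl_Position\n"
        "             + gl_TessCoord.z * gl_in[2].gl_Position;\n"
        "    vec3 n  = normalize(gl_TessCoord.x * tcNormal[0]\n"
        "            + gl_TessCoord.y * tcNormal[1]\n"
        "            + gl_TessCoord.z * tcNormal[2]);\n"
        "    vec2 uv = gl_TessCoord.x * tcTexCoord[0]\n"
        "            + gl_TessCoord.y * tcTexCoord[1]\n"
        "            + gl_TessCoord.z * tcTexCoord[2];\n"
        "    pos.xyz += n * texture(heightMap, uv).r * displaceScale;\n"
        "    gl_Position = mvp * pos;\n"
        "}\n";

Because the control shader can pick tessellation levels per patch (e.g. based on distance to the camera), the same low-poly mesh yields more triangles up close and fewer far away.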

othello
Wow, I'm still on version 2.1 :p. Doing tessellation on the GPU sounds pretty cool; I'm sure it can be adapted for several interesting uses, like (as mentioned above) LODs on meshes. I guess it's time for me to read up on tessellation, then.

Toji
While this is nice and all, does anyone get the feeling that Khronos has officially given up any pretense of introducing new features and is now content to just follow DirectX around? Kinda sad, considering it wasn't that long ago that they were the pack leader.

Beyond that, they're missing one big feature that (for me anyway) is the primary draw of DirectX 11: Multi-threading. The whole API has been designed around segregating thread-safe and non-thread-safe calls, which is a huge plus even if you don't intend to target DX11 level hardware. As such, just adding a few new features again is a bit of a letdown. [Insert "Should have been called OpenGL 2.x" joke here.]

Still, at the very least this exposes the current hardware capabilities to non-Microsoft platforms, so I guess I can't gripe too much...

Ravyne
It's not the place of OpenGL to "introduce new features", though -- that's what extensions are for. OpenGL only *exposes* commonality between the graphics vendors; it does not seek to *impose* commonality in the way that Direct3D does -- which, even so, is essentially decided by a committee of hardware and software vendors.

I do agree on the multithreading issue, but being cross-platform makes this a significantly more complicated problem than if they only had Windows to deal with. I would like threading to be a major focus for OpenGL 5.x, though.

You've gotta hand it to Khronos for keeping things moving along though, in comparison to the OpenGL releases of old.

rbarris
I think a more interesting question is, "given a list of what the current crop of GPUs are capable of, where is OpenGL not exposing that functionality?" That is, in some sense, what has driven the feature set for 3.0, 3.1, 3.2, and 3.3/4.0 - there have been basically two GL updates per year for the last two years straight.

It was only a couple of years ago that DX10 was shipping and GL 3.0 was not. DX10 was doing a better job of exposing the silicon features on a timely basis, and GL was not, IMO. If you see parallels between them now, that's not unreasonable; they sit on top of the same hardware, after all.

GL can't dream up features that the hardware doesn't have (aside from API-streamlining moves); in the best case, GL implementations arrive closer to hardware availability in the market. NVIDIA is claiming they will have GL4 available on Fermi at ship, which is good.

When you're behind, you can either close the gap, maintain position, or give up. The fact that Khronos and the OpenGL working group are making an ongoing effort to close the gap - whether you feel that gap was "vs. DX" or "vs. hardware capability" - isn't bad news.

A position I have been vocal about: as long as there is a feature gap against hardware, don't try to reboot the whole API. Timely hardware exposure is Job 1, and this is reflected in the last four releases.

We could be getting close to a corner where the immense pressure to catch up on hardware exposure eases, and energy can go into other areas, or simply into covering issues that have been missed to date. One example of that in GL 3.3 is the separation of sampler state from texture objects - it was an old feature request from DX-savvy developers, and there was finally enough breathing room in the schedule to do it, so it's in there.

The multithreading discussion is interesting. There are use cases where async operations have always been possible in GL using a secondary shared context, and I'm sure there are new use cases worth looking at, which could affect future GL revs, where a secondary context doesn't cut it. Look closely at the new ARB_sync stuff; it goes beyond fences and flushes.
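For reference, the basic ARB_sync fence pattern he's alluding to looks roughly like this (a sketch, assuming a worker thread uploading a texture on a second shared context that the main thread wants to wait on):

    /* Worker thread (shared context), after issuing the upload: */
    GLsync fence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
    glFlush();  /* ensure the fence command is submitted to the GPU */

    /* Main thread, before using the texture: */
    GLenum status = glClientWaitSync(fence, GL_SYNC_FLUSH_COMMANDS_BIT,
                                     16 * 1000 * 1000 /* 16 ms, in ns */);
    if (status == GL_ALREADY_SIGNALED || status == GL_CONDITION_SATISFIED) {
        /* safe to sample the texture now */
    }
    glDeleteSync(fence);

There is also glWaitSync() for making the GPU itself wait on a fence without stalling the CPU.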

There is a feedback thread on the OpenGL.org forums w.r.t. 4.0 - just as was done for 3.0/3.1/3.2 - and it's a good place to express needs to the working group. The feedback loop has been much more effective with the last couple of releases in particular, and it always helps to be specific about what you want and how it would help your app.

http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=postlist&Board=12&page=1



In summary, 2010 != 2007.

Enalis
Hah, all the posts make so much sense now, thank you! I kinda feel dumb, as I've been tinkering with OpenGL for far too long not to know this. Especially since I've used this plenty of times. I guess GLEW always hid the extension loading from me.
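For anyone else who hadn't seen it: without GLEW, every post-1.1 entry point has to be fetched by hand. A sketch of what that looks like on Windows (this is essentially what glewInit() does for hundreds of functions):

    /* Manual extension loading on Windows; glext.h provides the typedef. */
    PFNGLPATCHPARAMETERIPROC glPatchParameteri =
        (PFNGLPATCHPARAMETERIPROC)wglGetProcAddress("glPatchParameteri");
    if (glPatchParameteri == NULL) {
        /* driver doesn't expose GL 4.0 / ARB_tessellation_shader */
    }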

