
OpenGL What happened to Longs Peak?


TapewormTuna    253

For the uninformed, back in ~2007 the Khronos Group was planning a major revision to OpenGL codenamed "Longs Peak", but killed it off in favor of what is now OpenGL 3.0. I read about it a while back and forgot about it, but I was recently reminded of it and I'm curious why it was killed off.

There are still quite a few interesting articles and documents explaining the changes, such as this slide deck from GDC 2007: https://www.khronos.org/assets/uploads/developers/library/gdc_2007/OpenGL/A-peek-inside-OpenGL-Longs-Peak.pdf. What I found really interesting was that one of the reasons given for changing the API was that OpenGL hadn't caught up with the hardware that was current at the time (2007 "current").

Isn't that why Vulkan was created? I know the two APIs are very different, but how would Longs Peak have compared to the newer APIs? It seems like they had an idea for this wonderful new API but killed it before it ever got released. Why?

frob    44916

Looking through that slide deck, it looks like most of those goals have since become part of the various updates.

OpenGL 3.0 deprecated many of the features the paper said should be eliminated, like fixed-function vertex and fragment processing, client-side arrays, and unusual/ancient pixel formats. It also brought in a bunch of the features the paper recommended.

3.1 added more buffer object functionality and buffer textures, 3.2 brought in more of the shader functionality the paper talked about, and so on.

Pulling the paper's goals into a list, it looks like all of them (or nearly all of them) are part of the standard by now. Many of the features have been added and adjusted multiple times since that paper came out (a sketch of the first couple of items follows the list):

  • Arrays and buffers on card, not client (3.0, 3.1, 4.0, 4.3)
  • Geometry storing on card (3.2, 4.0, 4.3)
  • Eliminate fixed-function vertex processing, hardware support for everything (3.0, 3.2, 3.3, 4.3)
  • Eliminate fixed-function fragment shading, hardware support for everything (3.0, 3.1, 3.3, 4.3)
  • All buffers on the card, not client (3.0, 3.1, 4.3, 4.4)
  • Allow editing of objects on card (3.1, 3.2, 4.2, 4.4)
  • Allow instancing (3.1, 3.3, 4.0)
  • Allow reusable state objects (3.3, 4.0, 4.1, 4.5)
  • Buffers for images, sync objects, query objects, VBO/PBO objects (3.0, 3.1, 3.3, 4.0, 4.1)
  • etc.
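
As a concrete sketch of the first couple of items in that list (vertex data living in buffer objects on the GPU rather than in client memory), here is roughly what the core-profile path looks like today. This is a minimal illustration only; it assumes a GL 3.3+ core context and an already-initialized loader (glad is just one assumption, any loader works), and the data and names are invented for the example.

    #include <glad/glad.h>   /* assumed loader; GLEW etc. would work the same */

    static const float positions[] = {
        -0.5f, -0.5f, 0.0f,
         0.5f, -0.5f, 0.0f,
         0.0f,  0.5f, 0.0f,
    };

    static GLuint vao, vbo;

    void upload_triangle(void)
    {
        /* The vertex data is copied into a buffer object owned by the GL,
           not read from client memory at draw time. */
        glGenVertexArrays(1, &vao);
        glBindVertexArray(vao);

        glGenBuffers(1, &vbo);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, sizeof(positions), positions, GL_STATIC_DRAW);

        /* Attribute 0 sources from the bound buffer at byte offset 0;
           no client-side pointer is involved. */
        glEnableVertexAttribArray(0);
        glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), (const void *)0);
    }

    void draw_triangle(void)
    {
        glBindVertexArray(vao);
        glDrawArrays(GL_TRIANGLES, 0, 3);
    }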

mhagain    13430

Just to clarify one thing: buffer objects are frequently referred to as though they were a new GL feature, but in fact they're not; they date all the way back to OpenGL 1.5 and the GL_ARB_vertex_buffer_object extension. That is well before the Longs Peak plans; what GL 3.x did that was new was make their use mandatory in core contexts.
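
To make that concrete, here is a hedged sketch of the same legacy vertex-array entry point used both ways. The data is invented for the example, a loader is assumed for anything newer than GL 1.1, and a compatibility (non-core) context is assumed; a core profile removes glVertexPointer entirely and requires generic vertex attributes to be sourced from buffer objects.

    #include <GL/gl.h>   /* GL 1.5 entry points assumed to be loaded elsewhere */

    static const float verts[] = { 0.0f, 0.0f,  1.0f, 0.0f,  0.0f, 1.0f };

    /* GL 1.1 style: the pointer refers to client memory, read at draw time. */
    void client_side_array(void)
    {
        glVertexPointer(2, GL_FLOAT, 0, verts);
    }

    /* GL 1.5 / GL_ARB_vertex_buffer_object style: the data lives in a buffer
       object, and the last argument becomes a byte offset into whatever is
       bound to GL_ARRAY_BUFFER rather than a client pointer. */
    void buffer_object_array(void)
    {
        GLuint vbo;
        glGenBuffers(1, &vbo);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);
        glVertexPointer(2, GL_FLOAT, 0, (const void *)0);
    }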

The major missing feature from Longs Peak remains the object model, or at least part of it. Much of it is there, but GL still suffers from atrocious type safety, which a well-specified object model would have fixed.
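
For anyone who hasn't been bitten by this, the type-safety problem is easy to demonstrate. A minimal sketch (names invented, any GL context and loader assumed): every object name is a bare GLuint, so the compiler can't tell a texture from a buffer, and mistakes only surface at run time.

    void type_safety_example(void)
    {
        GLuint tex, buf;
        glGenTextures(1, &tex);
        glGenBuffers(1, &buf);

        /* Both calls compile cleanly; at best they fail at run time with
           GL_INVALID_OPERATION, at worst they quietly do the wrong thing. */
        glBindBuffer(GL_ARRAY_BUFFER, tex);   /* texture name passed as a buffer */
        glBindTexture(GL_TEXTURE_2D, buf);    /* buffer name passed as a texture */
    }

A well-specified object model with a distinct handle type per object class would turn mix-ups like these into compile-time errors, which is the kind of fix being referred to above.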

TapewormTuna    253

While it's good to see that many of these features have since been added to OpenGL, why did they scrap the new spec and wait to implement them later?

 

mhagain said:

The major missing feature from Longs Peak remains the object model, or at least part of it. Much of it is there, but GL still suffers from atrocious type safety, which a well-specified object model would have fixed.

The object model would have been a major change for the better. I don't understand why they thought it was a good idea not to include it. OpenGL's current object system feels like a bunch of hacks thrown on top of a 25-year-old API.

Hodgman    51234

The biggest "feature" of Longs Peak is that it would have been a fresh API that broke backwards compatibility with existing GL code.

GL3 half-assed that by deprecating old API interfaces, but Windows drivers kept supporting them anyway (Mac actually killed them off, yay!).

And yes, Vulkan has actually achieved this goal by coming up with a new API from scratch (well, from Mantle), so there's not three decades of GL cruft hanging off of it.

mhagain    13430

TapewormTuna asked:

...why did they scrap the new spec and wait to implement them later?

 

There's never been an official statement about this.

If you were there at the time, the way it played out was that the ARB were making all the right noises about Longs Peak and everybody was excited, keen and supportive.  Then they announced "some unresolved issues that we want addressed before we feel comfortable releasing a specification", went into a total media blackout for some time, and eventually emerged with the OpenGL 3.0 we know of today.

The closest to an answer you'll get is this post on the OpenGL forums:

What happened to Longs Peak?

In January 2008 the ARB decided to change directions. At that point it had become clear that doing Longs Peak, although a great effort, wasn't going to happen. We ran into details that we couldn't resolve cleanly in a timely manner. For example, state objects. The idea there is that all state is immutable. But when we were deciding where to put some of the sample ops state, we ran into issues. If the alpha test is immutable, is the alpha ref value also? If we do so, what does this mean to a developer? How many (100s?) of objects does a developer need to manage? Should we split sample ops state into more than one object? Those kinds of issues were taking a lot of time to decide.

Furthermore, the "opt in" method in Longs Peak to move an existing application forward has its pros and cons. The model of creating another context to write Longs Peak code in is very clean. It'll work great for anyone who doesn't have a large code base that they want to move forward incrementally. I suspect that that is most of the developers that are active in this forum. However, there are a class of developers for which this would have been a, potentially very large, burden. This clearly is a controversial topic, and has its share of proponents and opponents.

While we were discussing this, the clock didn't stop ticking. The OpenGL API *has to* provide access to the latest graphics hardware features. OpenGL wasn't doing that anymore in a timely manner. OpenGL was behind in features. All graphics hardware vendors have been shipping hardware with many more features available than OpenGL was exposing. Yes, vendor specific extensions were and are available to fill the gap, but that is not the same as having a core API including those new features. An API that does not expose hardware capabilities is a dead API.

Thus, prioritization was needed, and we made several decisions.

1) We set a goal of exposing hardware functionality of the latest generations of hardware by this Siggraph. Hence, the OpenGL 3.0 and GLSL 1.30 API you guys all seem to love

2) We decided on a formal mechanism to remove functionality from the API. We fully realize that the existing API has been around for a long time, has cruft and is inconsistent with its treatment of objects (how many object models are in the OpenGL 3.0 spec? You count). In its shortest form, removing functionality is a two-step process. First, functionality will be marked "deprecated" in the specification. A long list of functionality is already marked deprecated in the OpenGL 3.0 spec. Second, a future revision of the core spec will actually remove the deprecated functionality. After that, the ARB has options. It can decide to do a third step, and fold some of the removed functionality into a profile. Profiles are optional to implement (more below) and its functionality might still be very important to a sub-set of the OpenGL market. Note that we also decided that new functionality does not have to, and will likely not work with, deprecated functionality. That will make the spec easier to write, read and understand, and drivers easier to implement.

3) We decided to provide a way to create a forward-compatible context. That is an OpenGL 3.0 context with all deprecated features removed. Giving you, as a developer, a preview of what a next version of OpenGL might look like. Drivers can take advantage of this, and might be able to optimize certain code paths in the forward-compatible context only. This is described in the WGL_ARB_create_context extension spec.

4) We decided to have a formal way of defining profiles. During the Longs Peak design phase, we ran into disagreement over what features to remove from the API. Longs Peak removed quite a lot of features as you might remember. Not coincidentally, most of those features are marked deprecated in OpenGL 3.0. The disagreements happened because of different market needs. For some markets a feature is essential, and removing it will cause issues, whereas for another market it is not. We discovered we couldn't do one API to serve all. A profile encapsulates functionality needed to meet the needs of a particular market. Conformant OpenGL products may implement one or more profiles. A profile is by definition a subset of the whole core specification. The core OpenGL specification will contain all functionality, including what is in a profile, in a coherently designed whole. Profiles simply enable products for certain markets to not ship functionality that is not relevant to those markets in a well defined way. Only the ARB may define profiles, individual vendors may not (this in contrast to extensions).

5) We will keep working on object model issues. Yes, this work has been put on the back burner to get OpenGL 3.0 done, but we have picked that work up again. One of the early results of this is that we will work on folding object model improvements into the core in a more incremental manner.

6) We decided to provide functionality, where possible, as extensions to OpenGL 2.1. Any OpenGL 3.0 feature that does not require OpenGL 3.0 hardware is also available in extension form to OpenGL 2.1. The idea here is that new functionality on older hardware enables software vendors to provide upgrades to their existing users.

7) We decided that OpenGL is not going to evolve into a general GPU compute API. In the last two years or so compute using a GPU and a CPU has taken off, in fact is exploding. Khronos has recognized this and is on a fast track to define and release OpenCL, the open standard for compute programming. OpenGL and OpenCL will be able to share data, like buffer objects, in an efficient manner.

There are many good ideas in Longs Peak. They are not lost. We would be stupid to ignore it. We spent almost two years on it, and a lot of good stuff was designed. There is a desire to work on object model issues in the ARB, and we recently started doing that again. Did you know that you have no guarantee that if you change properties of a texture or render buffer attached to a framebuffer object that the framebuffer object will actually notice? It has to notice it, otherwise your next rendering command will not work. Each vendor's implementation deals with this case a bit differently. If you throw in multiple contexts in the mix, this becomes an even more interesting issue. The ARB wants to do object model improvements right the first time. We can't afford to do it wrong. At the same time, the ARB will work on exposing new hardware functionality in a timely manner.
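
(As an aside on point 3 above: the mechanism being referred to is WGL_ARB_create_context. A minimal sketch of requesting a forward-compatible 3.0 context on Windows might look like the following; it assumes wglCreateContextAttribsARB has already been fetched with wglGetProcAddress, and error handling is omitted.)

    #include <windows.h>
    #include <GL/wglext.h>   /* Khronos header defining the WGL_CONTEXT_* tokens */

    /* Assumed to have been loaded earlier via wglGetProcAddress. */
    extern PFNWGLCREATECONTEXTATTRIBSARBPROC wglCreateContextAttribsARB;

    HGLRC create_forward_compatible_context(HDC hdc)
    {
        const int attribs[] = {
            WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
            WGL_CONTEXT_MINOR_VERSION_ARB, 0,
            WGL_CONTEXT_FLAGS_ARB,         WGL_CONTEXT_FORWARD_COMPATIBLE_BIT_ARB,
            0
        };

        /* Second argument is a share context; NULL means share nothing. */
        HGLRC ctx = wglCreateContextAttribsARB(hdc, NULL, attribs);
        wglMakeCurrent(hdc, ctx);
        return ctx;
    }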

 

What's clear about this is that the job of specifying Longs Peak was taking too long, design-by-committee wasn't working, and OpenGL was once again at risk of falling even further behind. So something had to be done, and what was done was a rush-release of OpenGL 3.0 with promises to fold many of the Longs Peak features/improvements into future releases, which is what eventually happened.

At the time there were other rumours, one of which was that CAD interests on the ARB killed it, because their CAD programs were using the old/crufty behaviours that Longs Peak threatened to remove and they didn't want to upgrade them. The statement quoted above that "there are a class of developers for which this would have been a, potentially very large, burden" certainly suggests CAD vendors, but IMO this rumour was never 100% credible, because CAD vendors would always have had the option of just continuing to use the older API (just as even today you can still write an OpenGL 1.1 program). A second rumour I recall is that one of the GPU vendors killed it for some unspecified reason.

On balance of probability the situation I describe above (spec taking too long and competition getting further ahead again) seems most likely to me.


_the_phantom_    11250

mhagain said:

At the time there were other rumours, one of which was that CAD interests on the ARB killed it, because their CAD programs were using the old/crufty behaviours that Longs Peak threatened to remove and they didn't want to upgrade them. The statement quoted above that "there are a class of developers for which this would have been a, potentially very large, burden" certainly suggests CAD vendors, but IMO this rumour was never 100% credible, because CAD vendors would always have had the option of just continuing to use the older API (just as even today you can still write an OpenGL 1.1 program). A second rumour I recall is that one of the GPU vendors killed it for some unspecified reason.


I can tell you for a fact that the CAD companies didn't kill it; I was having a conversation with someone on the ARB at the time who confirmed that.

My working theory is that it was Apple and possibly Blizzard who did for it; AMD and NV were very much on board, so I doubt they killed it... Intel is a maybe, but feels unlikely.

mhagain    13430

I recall hearing Blizzard mentioned as a possible villain of the piece too.

Amusingly, the one company we can be absolutely certain didn't kill it is Microsoft; they had long since ceased involvement with OpenGL by then.

TapewormTuna    253
On 4/12/2017 at 0:46 PM, Oberon_Command said:

Can anyone explain why Blizzard (or Apple, for that matter) would be the culprit, though? Why would a game company push to kill Longs Peak?

Late response, but to answer your question: Apple does not like open standards. I'm going to guess they had plans to build their own proprietary API like Microsoft did, and with the recent surge of low-level APIs they had an opportunity to do just that.

I'm not sure what Blizzard could have possibly had against it.
