TapewormTuna

OpenGL
What happened to Longs Peak?


For the uninformed: back around 2007 the Khronos Group was working on a major revision of OpenGL, codenamed "Longs Peak", but killed it off in favor of what became OpenGL 3.0. I read about it a while back and forgot about it, but I was recently reminded of it, and I'm curious why it was killed off.

There are still quite a few interesting articles and documents explaining the planned changes, such as this slide deck from GDC 2007: https://www.khronos.org/assets/uploads/developers/library/gdc_2007/OpenGL/A-peek-inside-OpenGL-Longs-Peak.pdf. What I found really interesting is that one of the stated reasons for redesigning the API was that OpenGL hadn't caught up with then-current (2007) hardware.

Isn't that why Vulkan was created? I know the two APIs are very different, but how would Longs Peak have compared to the newer APIs? It seems like they had an idea for this wonderful new API but killed it before it ever got released. Why?


Looking through that slide deck, it looks like most of those proposals have since become part of the various updates.

OpenGL 3.0 deprecated many of the features the paper said should be eliminated, like fixed-function vertex and fragment processing, client-side arrays, and unusual/ancient pixel formats. It also brought in a bunch of the features the paper recommended.

3.1 added uniform buffer objects and buffer textures, 3.2 brought in more of the shader functionality the paper talked about, and so on.

Pulling the paper's goals out as a list, it looks like all of them (or nearly all of them) are part of the standard by now. Many of the features have been added and adjusted multiple times since the paper came out (a short before/after sketch follows the list):

  • Arrays and buffers on card, not client (3.0, 3.1, 4.0, 4.3)
  • Geometry storing on card (3.2, 4.0, 4.3)
  • Eliminate fixed-function vertex processing, hardware support for everything (3.0, 3.2, 3.3, 4.3)
  • Eliminate fixed-function fragment shading, hardware support for everything (3.0, 3.1, 3.3, 4.3)
  • All buffers on the card, not client (3.0, 3.1, 4.3, 4.4)
  • Allow editing of objects on card (3.1, 3.2, 4.2, 4.4)
  • Allow instancing (3.1, 3.3, 4.0)
  • Allow reusable state objects (3.3, 4.0, 4.1, 4.5)
  • Buffers for images, sync objects, query objects, VBO/PBO objects (3.0, 3.1, 3.3, 4.0, 4.1)
  • etc.
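
To make the before/after concrete, here's a minimal sketch contrasting the deprecated client-side/fixed-function style with the core-profile path those versions established. It's illustrative only: shader compilation, error handling, and context setup are omitted, and the core path assumes a GLSL program is already bound.

```c
/* Deprecated pre-3.0 style: immediate mode, fixed-function processing,
   vertex data pushed from the client every frame. */
glBegin(GL_TRIANGLES);
glColor3f(1.0f, 0.0f, 0.0f);
glVertex3f(-0.5f, -0.5f, 0.0f);
glVertex3f( 0.5f, -0.5f, 0.0f);
glVertex3f( 0.0f,  0.5f, 0.0f);
glEnd();

/* Core-profile style: geometry lives in a buffer object on the card and a
   user-supplied GLSL program does all vertex/fragment work. */
static const GLfloat verts[] = {
    -0.5f, -0.5f, 0.0f,
     0.5f, -0.5f, 0.0f,
     0.0f,  0.5f, 0.0f,
};
GLuint vao, vbo;
glGenVertexArrays(1, &vao);      /* VAOs are mandatory in core contexts */
glBindVertexArray(vao);
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (const void *)0);
glDrawArrays(GL_TRIANGLES, 0, 3);
```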

Just to clarify one thing: buffer objects are frequently referred to as though they were a new GL 3.x feature, but in fact they're not; they date all the way back to OpenGL 1.5 and the GL_ARB_vertex_buffer_object extension, well before the Longs Peak plans. What GL 3.x did that was new was make their use mandatory in core contexts.
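
For reference, the GL 1.5-era extension looks almost identical to today's core API; only the ARB suffixes differ. A minimal sketch, assuming the extension's entry points have already been loaded:

```c
/* GL_ARB_vertex_buffer_object, circa 2003: the same model as the core
   buffer API, just with ARB-suffixed names and tokens. */
static const GLfloat verts[] = { -0.5f, -0.5f, 0.5f, -0.5f, 0.0f, 0.5f };
GLuint buf;
glGenBuffersARB(1, &buf);
glBindBufferARB(GL_ARRAY_BUFFER_ARB, buf);
glBufferDataARB(GL_ARRAY_BUFFER_ARB, sizeof(verts), verts, GL_STATIC_DRAW_ARB);
/* Core GL 1.5+ simply drops the suffixes: glGenBuffers, glBindBuffer, ... */
```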

The major missing feature from Longs Peak remains the object model, or at least part of it. Much of it is there, but GL still suffers from atrocious type safety, which a well-specified object model would have fixed.


While it's good to see that many of these features made it into OpenGL eventually, why scrap the new spec and only implement them piecemeal later?

 

The major missing feature from Longs Peak remains the object model, or at least part of it. Much of it is there, but GL still suffers from atrocious type safety, which a well-specified object model would have fixed.

The object model would have been a major change for the better. I don't understand why they thought it was a good idea to leave it out. OpenGL's current object system feels like a bunch of hacks bolted onto a 25-year-old API.
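
To put some code to the type-safety complaint, here's a sketch. The first half is real GL; the typed handles in the second half are purely hypothetical, just to suggest what a proper object model might have bought us:

```c
/* Real GL: every object is a bare GLuint, so the compiler can't tell a
   texture name from a buffer name. */
GLuint tex, buf;
glGenTextures(1, &tex);
glGenBuffers(1, &buf);
glBindBuffer(GL_ARRAY_BUFFER, tex);  /* compiles fine; fails, or silently
                                        misbehaves, only at runtime */

/* Hypothetical Longs Peak-style handles: distinct opaque types would have
   turned that mix-up into a compile-time error. */
typedef struct GLtexture_T *GLtexture;
typedef struct GLbuffer_T  *GLbuffer;
/* glBindBuffer(GL_ARRAY_BUFFER, tex) would then simply not type-check. */
```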


The biggest "feature" of Longs Peak is that it would've been a fresh API that broke backwards compatibility with existing GL code.

GL3 half-assed that by deprecating old API interfaces, but Windows drivers kept supporting them anyway (Mac actually killed them off, yay!).

And yes, Vulkan has actually achieved this goal by coming up with a new API from scratch (well, from Mantle), so it doesn't have decades of GL cruft hanging off of it.
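
For comparison, a sketch of the same ground in Vulkan: creation state is explicit, there's no bind-to-edit global state, and (on typical 64-bit builds, where handles are distinct pointer types) a VkImage can't be passed where a VkBuffer is expected. The device is assumed to have been created earlier:

```c
#include <vulkan/vulkan.h>

/* Create a vertex buffer; all state up front, nothing bound implicitly. */
VkBuffer make_vertex_buffer(VkDevice device, VkDeviceSize size)
{
    VkBufferCreateInfo info = {
        .sType       = VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO,
        .size        = size,
        .usage       = VK_BUFFER_USAGE_VERTEX_BUFFER_BIT,
        .sharingMode = VK_SHARING_MODE_EXCLUSIVE,
    };
    VkBuffer buffer = VK_NULL_HANDLE;
    if (vkCreateBuffer(device, &info, NULL, &buffer) != VK_SUCCESS)
        return VK_NULL_HANDLE;
    return buffer;  /* VkBuffer and VkImage are different types, unlike
                       GL's interchangeable GLuint names */
}
```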



...why scrap the new spec and only implement them piecemeal later?

 

There's never been an official statement about this.

If you were there at the time, the way it played out was that the ARB were making all the right noises about Longs Peak, and everybody was excited, keen and supportive. Then they announced "some unresolved issues that we want addressed before we feel comfortable releasing a specification", went into a total media blackout for some time, and eventually emerged with the OpenGL 3.0 we know today.

The closest to an answer you'll get is this post on the OpenGL forums:

What happened to Longs Peak?

In January 2008 the ARB decided to change directions. At that point it had become clear that doing Longs Peak, although a great effort, wasn't going to happen. We ran into details that we couldn't resolve cleanly in a timely manner. For example, state objects. The idea there is that all state is immutable. But when we were deciding where to put some of the sample ops state, we ran into issues. If the alpha test is immutable, is the alpha ref value also? If we do so, what does this mean to a developer? How many (100s?) of objects does a developer need to manage? Should we split sample ops state into more than one object? Those kinds of issues were taking a lot of time to decide.

Furthermore, the "opt in" method in Longs Peak to move an existing application forward has its pros and cons. The model of creating another context to write Longs Peak code in is very clean. It'll work great for anyone who doesn't have a large code base that they want to move forward incrementally. I suspect that that is most of the developers that are active in this forum. However, there are a class of developers for which this would have been a, potentially very large, burden. This clearly is a controversial topic, and has its share of proponents and opponents.

While we were discussing this, the clock didn't stop ticking. The OpenGL API *has to* provide access to the latest graphics hardware features. OpenGL wasn't doing that anymore in a timely manner. OpenGL was behind in features. All graphics hardware vendors have been shipping hardware with many more features available than OpenGL was exposing. Yes, vendor specific extensions were and are available to fill the gap, but that is not the same as having a core API including those new features. An API that does not expose hardware capabilities is a dead API.

Thus, prioritization was needed, and we made several decisions.

1) We set a goal of exposing hardware functionality of the latest generations of hardware by this Siggraph. Hence, the OpenGL 3.0 and GLSL 1.30 API you guys all seem to love

2) We decided on a formal mechanism to remove functionality from the API. We fully realize that the existing API has been around for a long time, has cruft and is inconsistent in its treatment of objects (how many object models are in the OpenGL 3.0 spec? You count). In its shortest form, removing functionality is a two-step process. First, functionality will be marked "deprecated" in the specification. A long list of functionality is already marked deprecated in the OpenGL 3.0 spec. Second, a future revision of the core spec will actually remove the deprecated functionality. After that, the ARB has options. It can decide to do a third step, and fold some of the removed functionality into a profile. Profiles are optional to implement (more below), and their functionality might still be very important to a subset of the OpenGL market. Note that we also decided that new functionality does not have to, and likely will not, work with deprecated functionality. That will make the spec easier to write, read and understand, and drivers easier to implement.

3) We decided to provide a way to create a forward-compatible context. That is an OpenGL 3.0 context with all deprecated features removed, giving you, as a developer, a preview of what a next version of OpenGL might look like. Drivers can take advantage of this, and might be able to optimize certain code paths in the forward-compatible context only. This is described in the WGL_ARB_create_context extension spec.

4) We decided to have a formal way of defining profiles. During the Longs Peak design phase, we ran into disagreement over what features to remove from the API. Longs Peak removed quite a lot of features as you might remember. Not coincidentally, most of those features are marked deprecated in OpenGL 3.0. The disagreements happened because of different market needs. For some markets a feature is essential, and removing it will cause issues, whereas for another market it is not. We discovered we couldn't do one API to serve all. A profile encapsulates functionality needed to meet the needs of a particular market. Conformant OpenGL products may implement one or more profiles. A profile is by definition a subset of the whole core specification. The core OpenGL specification will contain all functionality, including what is in a profile, in a coherently designed whole. Profiles simply enable products for certain markets to not ship functionality that is not relevant to those markets, in a well-defined way. Only the ARB may define profiles; individual vendors may not (in contrast to extensions).

5) We will keep working on object model issues. Yes, this work has been put on the back burner to get OpenGL 3.0 done, but we have picked that work up again. One of the early results of this is that we will work on folding object model improvements into the core in a more incremental manner.

6) We decided to provide functionality, where possible, as extensions to OpenGL 2.1. Any OpenGL 3.0 feature that does not require OpenGL 3.0 hardware is also available in extension form to OpenGL 2.1. The idea here is that new functionality on older hardware enables software vendors to provide upgrades to their existing users.

7) We decided that OpenGL is not going to evolve into a general GPU compute API. In the last two years or so compute using a GPU and a CPU has taken off, in fact is exploding. Khronos has recognized this and is on a fast track to define and release OpenCL, the open standard for compute programming. OpenGL and OpenCL will be able to share data, like buffer objects, in an efficient manner.

There are many good ideas in Longs Peak. They are not lost. We would be stupid to ignore it. We spent almost two years on it, and a lot of good stuff was designed. There is a desire to work on object model issues in the ARB, and we recently started doing that again. Did you know that you have no guarantee that if you change properties of a texture or render buffer attached to a framebuffer object that the framebuffer object will actually notice? It has to notice it, otherwise your next rendering command will not work. Each vendor's implementation deals with this case a bit differently. If you throw in multiple contexts in the mix, this becomes an even more interesting issue. The ARB wants to do object model improvements right the first time. We can't afford to do it wrong. At the same time, the ARB will work on exposing new hardware functionality in a timely manner.
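
To put some code to point 3: requesting a forward-compatible 3.0 context through WGL_ARB_create_context looks roughly like this. This is a sketch only; window/pixel-format setup and fetching the wglCreateContextAttribsARB pointer from a temporary legacy context are omitted, and hdc is assumed valid:

```c
/* Attribute names are from the WGL_ARB_create_context spec; the list is
   zero-terminated. */
const int attribs[] = {
    WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
    WGL_CONTEXT_MINOR_VERSION_ARB, 0,
    WGL_CONTEXT_FLAGS_ARB,         WGL_CONTEXT_FORWARD_COMPATIBLE_BIT_ARB,
    0
};
HGLRC ctx = wglCreateContextAttribsARB(hdc, NULL, attribs);
if (ctx)
    wglMakeCurrent(hdc, ctx);  /* every feature deprecated in 3.0 is gone */
```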

 

What's clear from this is that the job of specifying Longs Peak was taking too long, design-by-committee wasn't working, and OpenGL was once again at risk of being overtaken even further. So something had to be done, and what was done was a rush-release of OpenGL 3.0, with promises to fold many of the Longs Peak features and improvements into future releases, which is what eventually happened.

At the time there were other rumours, one of which was that CAD interests on the ARB killed it, because their CAD programs were using the old/crufty behaviours that Longs Peak threatened to remove, and they didn't want to upgrade them. The statement quoted above that "there are a class of developers for which this would have been a, potentially very large, burden" certainly suggests CAD vendors, but IMO this rumour was never 100% credible, because CAD vendors would always have had the option to just continue using the older API (just as even today you can still write an OpenGL 1.1 program). A second rumour I recall is that one of the GPU vendors killed it for unspecified reasons.

On balance of probability, the situation I describe above (the spec taking too long and the competition getting further ahead again) seems most likely to me.


At the time there were other rumours, one of which was that CAD interests on the ARB killed it, because their CAD programs were using the old/crufty behaviours that Longs Peak threatened to remove, and they didn't want to upgrade them. The statement quoted above that "there are a class of developers for which this would have been a, potentially very large, burden" certainly suggests CAD vendors, but IMO this rumour was never 100% credible, because CAD vendors would always have had the option to just continue using the older API (just as even today you can still write an OpenGL 1.1 program). A second rumour I recall is that one of the GPU vendors killed it for unspecified reasons.


I can tell you for a fact the CAD companies didn't kill it - I was having a conversation with someone on the ARB at the time who confirmed that.

My working theory is that it was Apple, and possibly Blizzard, who did for it; AMD and NV were very much on board, so I doubt they killed it... Intel is a maybe, but feels unlikely.

I recall hearing Blizzard mentioned as a possible villain of the piece too.

Amusingly, the one company we can be absolutely certain didn't kill it is Microsoft; they had long since ceased involvement with OpenGL by then.

Can anyone explain why Blizzard (or Apple, for that matter) would be the culprit, though? Why would a game company push to kill Longs Peak?
On 4/12/2017 at 0:46 PM, Oberon_Command said:

Can anyone explain why Blizzard (or Apple, for that matter) would be the culprit, though? Why would a game company push to kill Longs Peak?

Late response, but to answer your question: Apple does not like open standards. My guess is that they had plans to build their own proprietary API, like Microsoft did, and with the recent surge of low-level APIs they had an opportunity to do just that (which is what Metal turned out to be).

I'm not sure what Blizzard could have possibly had against it.


