Archived

This topic is now archived and is closed to further replies.

dunkel3d

OpenGL 2.0

Recommended Posts

OpenGL 2.0 will not be released in the near future. The ARB has decided that the new shading language will be added as an extension rather than as a core feature, so the next version of OpenGL will be 1.5.

For more info:

http://www.opengl.org/developers/about/arb/notes/meeting_note_2003-06-10.html


They're not going to add it as a core feature? Man, are they trying to kill the API off as far as its use in games goes? Talk about handing MS the gold platter with your ass on it. Damn, perhaps I should just go and learn DX instead...

quote:
Original post by microbe

They're not going to add it as a core feature? Man, are they trying to kill the API off as far as its use in games goes? Talk about handing MS the gold platter with your ass on it. Damn, perhaps I should just go and learn DX instead...


Extensions are a fast and efficient way to add new features. The only real revolution in recent years has been NVIDIA's shaders and per-fragment programs, and those are supported in OpenGL via extensions.
With extensions you can ask the driver directly what it can do without waiting for a new API release.
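That capability check usually starts with the string returned by glGetString(GL_EXTENSIONS). A minimal sketch (the `has_extension` helper is hypothetical, not from this thread): plain substring search is wrong because one extension name can be a prefix of another, so the match has to respect token boundaries in the space-separated list:

```c
#include <string.h>

/* Check whether `name` appears as a whole token in a space-separated
 * extension string, as returned by glGetString(GL_EXTENSIONS).
 * Plain strstr() alone would let "GL_ARB_texture" falsely match
 * "GL_ARB_texture_cube_map". */
int has_extension(const char *extensions, const char *name)
{
    size_t len = strlen(name);
    const char *p = extensions;

    while ((p = strstr(p, name)) != NULL) {
        int starts = (p == extensions) || (p[-1] == ' ');
        int ends   = (p[len] == ' ') || (p[len] == '\0');
        if (starts && ends)
            return 1;          /* found as a complete token */
        p += len;              /* partial match; keep scanning */
    }
    return 0;
}
```

In a real program you would call this with the live driver string before loading any entry points for that extension.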

Guest Anonymous Poster
You can't. You have to get the function pointers for OpenGL versions above 1.1 as if they were extensions.
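A sketch of that function-pointer dance (the `load_gl_procs` and `stub_loader` names are illustrative, not from the thread): on Windows the real loader is wglGetProcAddress, on X11 it is glXGetProcAddressARB; passing the loader in keeps the resolution loop itself platform-neutral:

```c
#include <stddef.h>
#include <string.h>

typedef void (*gl_proc)(void);
typedef gl_proc (*proc_loader)(const char *name);

/* Resolve every name in `names` into `procs`; returns how many entry
 * points came back NULL (0 means everything resolved). */
int load_gl_procs(proc_loader get_proc, const char **names,
                  gl_proc *procs, size_t count)
{
    int missing = 0;
    for (size_t i = 0; i < count; ++i) {
        procs[i] = get_proc(names[i]);
        if (procs[i] == NULL)
            ++missing;
    }
    return missing;
}

/* Stand-in loader for illustration: pretends the driver exports only
 * glActiveTextureARB. A real program would pass wglGetProcAddress (or
 * glXGetProcAddressARB) here instead. */
static gl_proc stub_loader(const char *name)
{
    return strcmp(name, "glActiveTextureARB") == 0
               ? (gl_proc)stub_loader   /* any non-NULL pointer */
               : NULL;
}
```

Checking the return count matters: a driver that advertises GL 1.4 may still hand back NULL for an entry point you expected.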

See the FAQ for libs you can use to make your life easier.

microbe:
In the Windows world, every OpenGL function since 1.1 is an extension anyway.
ATI already has a beta of the new shading language, IIRC, and I dare say NVIDIA isn't far behind, so there will be no delay in it getting into the drivers.

quote:
Original post by _the_phantom_
ATI already has a beta of the new shading language, IIRC, and I dare say NVIDIA isn't far behind, so there will be no delay in it getting into the drivers.


NO DELAY? What are you talking about? MS delayed everything, and the ARB hasn't been able to get moving for a very long time now. I had to rewrite my shader code three times due to vendor-specific render paths. I hope GLSlang will put an end to that, but talk of "no delays" seems a bit off topic here.
OpenGL needs more ARB-style extensions that work on more hardware than just ATI and NV.
At the last meeting my boss decided to drop OpenGL support and rewrite our engine for DX9. That sucks very badly, because the engine has been developed under OpenGL for 3 years now, but I can understand the decision, because OpenGL is too expensive to develop for. Rewriting every shader over and over makes things too complicated. The coders at our company have other tasks besides caring about every piece of hardware separately.
OpenGL will die if 1.5 doesn't change many things. But I see it this way: when GLSlang arrives, OpenGL will still lack vendor-independent F-buffers. And it will go on and on like this.

cu
Tom

P.S.: The only one laughing is MS, and that makes me cry.

The key part of your argument is that "MS delayed stuff", and lo and behold, MS leaves the ARB and suddenly things start happening faster.

As for F-buffers, at the moment, AFAIK, there isn't even a vendor-dependent version of them, and, again AFAIK, only one card supports them in hardware anyway.

And OpenGL won't die until it stops being used for the many commercial things it's used for, and until MS ports DX to Linux and the Mac, to name two OSes that use it as their 3D graphics API.

And as Benjamin Bunny pointed out, both the ARB fragment and vertex program extensions have been available for a while now, so it seems your boss called for a switch without just cause, IMO.
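For reference, those ARB extensions expose a vendor-neutral assembly-style path that predates GLSlang. A minimal illustrative pass-through program in the GL_ARB_vertex_program language (it would be uploaded with glProgramStringARB once that entry point is loaded; this particular listing is a sketch, not from the thread):

```c
#include <string.h>  /* only needed if you inspect the string */

/* Minimal GL_ARB_vertex_program: transform the vertex by the tracked
 * modelview-projection matrix and pass the color through unchanged. */
static const char *passthrough_vp =
    "!!ARBvp1.0\n"
    "PARAM mvp[4] = { state.matrix.mvp };\n"
    "TEMP pos;\n"
    "DP4 pos.x, mvp[0], vertex.position;\n"
    "DP4 pos.y, mvp[1], vertex.position;\n"
    "DP4 pos.z, mvp[2], vertex.position;\n"
    "DP4 pos.w, mvp[3], vertex.position;\n"
    "MOV result.position, pos;\n"
    "MOV result.color, vertex.color;\n"
    "END\n";
```

The same program works on any card whose driver exports the ARB extension, which is exactly the vendor independence the post above is arguing was already available.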

To get back to the original question...

At SIGGRAPH, members of the ARB told me that they thought 2.0 (which is really only meaningful from a marketing standpoint) could be done within 6 months, but that it'd probably wait until next summer, since they're currently on a yearly update cycle (and have been for the past 3 years). However, I'd throw those numbers into question, because part of the justification for making GLSlang an extension rather than part of the core was that they wanted vendors to get field experience with it and work out the kinks before support became required. The problem is that at the time, 1.5 was supposedly done and they were just finishing writing up the spec. It's been 3 months: no spec, and thus no field experience with GLSlang.

So I think it's safe to say the six-month estimate can be thrown out, but I suspect that at next SIGGRAPH we'll see 2.0 rather than 1.6.

quote:
Original post by Myopic Rhino
To get back to the original question...

At SIGGRAPH, members of the ARB told me that they thought that 2.0 (which is really only meaningful from a marketing standpoint) could be done within 6 months, but that it'd probably wait until next summer.....It's been 3 months, and no spec, and thus no field experience working with GLSlang.

So I think it's safe to say that the six month estimate can be thrown out, but I suspect that at next SIGGRAPH, we'll see 2.0, rather than 1.6.


If it does come out by next Siggraph, will "2.0" still be meaningful only from a marketing standpoint?

newbie
S



[edited by - dunkel3d on October 22, 2003 2:51:39 AM]

Think about it this way: as a developer, if the features are the same either way, would it make a difference to you if they called it 1.6 or 2.0? The reason people are interested in 2.0 is because 3D Labs hyped it as a huge revolution in the design of OpenGL. As it stands now, the features that are actually going to be included in OpenGL 2.0 aren't the same as those originally proposed by 3D Labs. A few things have been removed (or at least, aren't on the table for inclusion right now), and other things have been changed. Don't get me wrong, it's still going to represent a substantial change, but the whole idea of 2.0 as some world-changing, Direct3D-killing revolution is just marketing buzz.

Guest Anonymous Poster
I don't see why everyone's panties are in an uproar.
The extension mechanism seems to be working, and the version number is just that, a number. I say give them time; if they rush it out, it will end up being lower quality, like any version of DX that gets rushed out by MS and needs 500 patches to run properly.

OpenGL will never die, so you M$-employed dreamers can stop talking about that right now...

Guest Anonymous Poster
"maybe, just maybe, someone would make a new dll/libs/h files for windows "
Sure... Microsoft inviting competition? That would be the day!
