
bjmumblingmiles

OpenGL 1.5's "Shader Language"


quote:
Suppose, instead of taking months to create, the breathtaking computer-graphics-generated scenes from any of this summer’s blockbuster movies could be rendered with cinematic quality in real time. Suppose a car designer could model a car that’s indistinguishable from a photograph. Or imagine a jet fighter training simulation that could look not “just pretty good,” but be so exact that you couldn’t distinguish the simulated scenery from the real thing. Or suppose a physician could see tumors one-third the size of what could previously be identified. These things are not only possible - they have already begun. The new frontier in graphics realism has been established with developments to the OpenGL® application programming interface (API), released by SGI (NYSE: SGI) and the OpenGL Architecture Review Board (ARB). The OpenGL® 1.5 specification includes the revolutionary OpenGL® Shading Language, official ARB extensions that are expected to form the foundation of the upcoming OpenGL® 2.0 version of this cross-platform, open-standard API for advanced 3D graphics.
How is this "revolutionary OpenGL(r) Shading Language" any different from Cg or, better still, freakin' HLSL? Plus, these are ARB extensions, not additions to the core (well, not yet, but still). Is it just me, or was this press release written by either Scott McNealy or the Iraqi Minister of Information?
quote:
“OpenGL 1.5, and the OpenGL Shading Language in particular, does for the next generation of graphics what OpenGL did for the first generation in the early ’90s. It will fundamentally change the industry,” said Shawn Underwood, director of marketing, Visual Systems Group, SGI.

Brian J
DL Vacuum - A media file organizer I made | Mumbling Miles - A band I was the guitarist/vocalist for

Methinks it's just a stepping stone and something to whet appetites. It's written by a marketing director, so what can you expect?

I don't know anything about this OpenGL shading language, however. I doubt it's any different from Cg, or any other HLSL for that matter... they all let you do the same thing in the end, don't they? Maybe it's easier to set up than Cg (hopefully, even though I managed to do it while, er, intoxicated, so it's not that hard hehe)

I guess I'll have to check it out once I get out of work.

LOL! Thanks Brian, I needed that! I'm now much happier this morning.

I have not looked at it in detail yet, but I hope this is the same as what was planned for the so-called OpenGL 2.0. I'd hate to see it go two different ways.

[edited by - mauman on July 28, 2003 10:14:58 AM]

In terms of syntax, it's very similar to Cg or HLSL. One fundamental difference is that both Cg and HLSL are compiled to a lower-level assembly language before being passed to the driver. With OGLSL, the high-level code is passed directly to the driver. The advantage is that it gives IHVs more freedom to innovate under the hood, since they aren't limited to a (relatively narrow) assembly interface. It also allows for greater portability.
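
To make that concrete, here's a rough sketch of the two paths side by side. I'm assuming the ARB_vertex_program and ARB_shader_objects entry points have already been fetched (via wglGetProcAddress or similar), so treat this as illustration rather than production code:

```c
#include <string.h>
#include <GL/gl.h>
#include <GL/glext.h>

/* Old route: the driver only ever sees low-level assembly.
   Cg and HLSL compile down to something like this before the
   driver gets involved. */
void load_assembly_vp(void)
{
    const char *asmSrc =
        "!!ARBvp1.0\n"
        "MOV result.position, vertex.position;\n"
        "END\n";
    GLuint prog;
    glGenProgramsARB(1, &prog);
    glBindProgramARB(GL_VERTEX_PROGRAM_ARB, prog);
    glProgramStringARB(GL_VERTEX_PROGRAM_ARB, GL_PROGRAM_FORMAT_ASCII_ARB,
                       (GLsizei)strlen(asmSrc), asmSrc);
}

/* OGLSL route: the high-level source text goes straight to the
   driver, which compiles it for the hardware however it likes. */
void load_glsl_vs(void)
{
    const char *glslSrc = "void main() { gl_Position = ftransform(); }";
    GLhandleARB sh = glCreateShaderObjectARB(GL_VERTEX_SHADER_ARB);
    glShaderSourceARB(sh, 1, &glslSrc, NULL);
    glCompileShaderARB(sh); /* compilation happens inside the driver */
}
```

Note how in the second path the driver receives the source itself; everything past that point is up to the vendor.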

I'll know much more about it after the course on it today at SIGGRAPH.

Incidentally, NVIDIA will be adding an OGLSL profile to the Cg compiler; i.e., you'll be able to compile Cg into OGLSL. This is to take advantage of what I just mentioned.
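
Nothing official has been shown yet, so this is only my guess at what the command line will look like, going by how cgc's existing profiles work (the profile name glslv is my assumption, not announced syntax):

```
cgc -profile glslv -entry main -o shader.vert shader.cg
```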

quote:
Original post by wild_pointer
When can we expect an implementation?



The Catalyst 3.4 drivers were OpenGL 2.0 'compatible', in the sense that I did run the OpenGL 2.0 examples on my Radeon 8500 with those drivers (even if they failed to render most things).

Later ATI drivers don't support OpenGL 2.0 anymore.

-* So many things to do, so little time to spend. *-

I have the Catalyst 3.6 drivers, and they still expose the old GL2 glslang interface, and that still works. There are some differences between GL2 and the ARB version, so you cannot run the example exe from 3Dlabs, but it was pretty easy to set it up and test it. I guess a 'real' implementation should come with the next set of drivers (3.7), but maybe not exposed in the extension string, just like VBO was at the beginning.
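
If you want to see what your driver admits to, here's a quick sketch (needs a current GL context; the extension names are from the released ARB specs, and as I said, the entry points might be there even when the string doesn't advertise them):

```c
#include <stdio.h>
#include <string.h>
#include <GL/gl.h>

/* Print whether the driver advertises the ARB shading language. */
void check_glslang(void)
{
    const char *ext = (const char *)glGetString(GL_EXTENSIONS);
    printf("GL_ARB_shader_objects: %s\n",
           strstr(ext, "GL_ARB_shader_objects") ? "yes" : "no");
    printf("GL_ARB_shading_language_100: %s\n",
           strstr(ext, "GL_ARB_shading_language_100") ? "yes" : "no");
}
```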

Sounds like it's only going to create more problems, as the drivers will get incredibly complex and will stuff up even more than they do now, causing huge amounts of frustration and confusion for all of us writing accelerated apps.
yay.

And this whole point of "being able to optimize it more" sounds like a whole load of B$ to me.
If this were true, wouldn't Intel's next Pentium execute C code natively, as it would be soooo much faster...
[sorry for the deliberately inflammatory statements, but it had to be said...]

quote:
Original post by aboeing
And this whole point of "being able to optimize it more" sounds like a whole load of B$ to me.
If this were true, wouldn't Intel's next Pentium execute C code natively, as it would be soooo much faster...
[sorry for the deliberately inflammatory statements, but it had to be said...]



The C program will be compiled by the driver when loaded. The hardware will not execute C code natively; it will still run machine code/asm (but the driver can freely choose which asm/machine code to run).

A more appropriate analogy would be open-source software that you compile as part of the installation process: if you choose Intel's own compiler, you get more optimized code (although as a run-time vs. install-time comparison it's slightly flawed).

I do believe some speed improvements are to be had over Cg on non-NVIDIA cards, but that is mostly because no closely matching assembly language exists for most cards, and the Cg compiler was written with NVIDIA's cards in mind. It would be nice if vendors still exposed up-to-date assembly implementations, though.
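
To make the compiled-by-the-driver-at-load point concrete, here's a minimal sketch using the ARB_shader_objects calls (entry points assumed already loaded). The info log is where each vendor's compiler gets to report its own warnings and errors:

```c
#include <stdio.h>
#include <GL/gl.h>
#include <GL/glext.h>

/* Hand GLSL source to the driver at load time and dump the
   driver's own log if its compiler rejects it. */
GLhandleARB compile_at_load(const char *src, GLenum type)
{
    GLhandleARB sh = glCreateShaderObjectARB(type);
    glShaderSourceARB(sh, 1, &src, NULL);
    glCompileShaderARB(sh); /* the vendor's compiler runs here */

    GLint ok = 0;
    glGetObjectParameterivARB(sh, GL_OBJECT_COMPILE_STATUS_ARB, &ok);
    if (!ok) {
        char log[4096];
        glGetInfoLogARB(sh, sizeof(log), NULL, log);
        fprintf(stderr, "driver compile failed:\n%s\n", log);
    }
    return sh;
}
```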

So now every gfx card manufacturer will get to write their own compiler...
Given that I have already found a number of bugs in NVIDIA's Cg compiler, and I'm sure we have all encountered bugs with MSVC and other compilers, what are the odds of anyone producing a decent system?

Writing a good optimizing compiler is no easy task.

Why not make graphics cards just like all other processors out there? Each has its own assembly language, and you use whichever company's compiler you want to compile programs for their cards.
I don't like the idea of being stuck with one high-level language... it's like saying you can only code for the Pentium 5 in C#, and it won't compile C, Pascal, Fortran, or whatever else...

My whole point with the Pentium thing is that optimizing for one processor also optimizes for others, so from my viewpoint there isn't much need for separate assembly languages. Besides which, games are not like other products; they only stick around for a year or two (so it doesn't matter as much if a new assembly language is introduced every year). [IIRC, when Intel added full out-of-order execution support to their compiler, they also found around a 20% speed increase when the code was run on 486s.]

Also, consider the increased effort in developing drivers for new graphics cards. Even worse, any company trying to enter the industry would have to build not only a complex processor but also a top-notch compiler.

Might as well rename OpenGL to ATI_NVIDIA_GL...
(Incidentally, I wonder who proposed this extension...)

Oh, and one last thing: as vember pointed out, there's the compile-time issue. It doesn't seem so bad right now, because Cg programs tend to be only a few lines, but what about when shader programs get as complex as some of the RenderMan shaders used for movies? And optimizing compilers have a lot more thinking to do...

Do we really need this debate here as well?

GLslang is approved and will be implemented; a few threads in some forums won't change that...

If the vendors like, they can use the open-source front-end compiler that 3Dlabs gives away for free; then all the vendors share the bug fixes in the front end. The back-end compiler is still up to each vendor to implement, but that shouldn't be much harder than 'compiling' the asm language we have now, so I don't think it will introduce more bugs. And it alone won't cure the drivers either (there are plenty of bugs in the ARB_*_program extensions from both vendors).

And gfx cards do work a bit like all other processors: you can take a C program and compile it for SPARC, Motorola, and Intel, but you can't take asm and do the same thing. In that sense a C-level language is a good thing.

I don't have a strong opinion on whether it should live in the driver or not, but I do know the OpenGL API is there to let us write the same program for all graphics cards. An exposed low-level, vendor-specific asm language won't do that (a high-level ARB asm language might, but then you've got compile problems in the drivers again)...

For more heated discussion, please read the advanced OpenGL forum on www.opengl.org (you'll find a thread with ~80 messages in the top 10 right now).

True, true, we don't need a flamewar here as well, especially since we all want the same thing in the end. So we should keep it clean and intelligent.

"also, consider the increased effort factor in developing drivers for new graphics cards, and even worse, for any company trying to enter the industry, they would have to, not only build a complex processor, but also a top-notch compiler."

Is there actually anything that would prevent third-party compilers from being integrated into the drivers (if the vendor allows it, of course)? For example, one could make a glslang-to-ARB_VP/ARB_FP compiler, which would bring support to current cards; see the sketch below.
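
To illustrate what such a translator would map between, here's roughly the same fragment shader in both forms. This is a hand-written sketch, not output from any real compiler:

```glsl
// glslang version: modulate a texture by the interpolated color
uniform sampler2D tex;
void main()
{
    gl_FragColor = texture2D(tex, gl_TexCoord[0].st) * gl_Color;
}
```

```
!!ARBfp1.0
# roughly equivalent ARB_fragment_program assembly
TEMP texel;
TEX texel, fragment.texcoord[0], texture[0], 2D;
MUL result.color, texel, fragment.color;
END
```

A third party could, in principle, ship exactly this kind of translation behind the glslang entry points, given vendor cooperation.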

Sorry guys, this was just the first place I heard about it...
To me this just seems really stupid, but yeah, anyway.

And since most vendors don't have open-source drivers, I wouldn't imagine that writing your own compiler for their cards would be easy (IIRC there was a thread here recently about programming the GPU directly; I think the conclusion was: you don't have a chance).

quote:

And gfx cards do work a bit like all other processors: you can take a C program and compile it for SPARC, Motorola, and Intel, but you can't take asm and do the same thing. In that sense a C-level language is a good thing.



Anywho, all I'm saying is that it would be great if that were the case for gfx cards too: all video cards would have their own assembly languages, and you could then use some compiler to compile your Cg code to their assembly, or if someone invents PascalG you could use that, or whichever language comes along.

Anyway, I'm working myself up again, so I'll just shut up.
Sorry guys.

Edit:
The opengl.org topic that MazyNoc suggested covers all this better, so if anyone else is interested/concerned I highly suggest reading it:
http://www.opengl.org/discussion_boards/ubb/Forum3/HTML/010071.html

[edited by - aboeing on July 31, 2003 12:21:11 AM]

