OpenGL 1.5's "Shader Language"

quote: Suppose, instead of taking months to create, the breathtaking computer-graphics-generated scenes from any of this summer’s blockbuster movies could be rendered with cinematic quality in real time. Suppose a car designer could model a car that’s indistinguishable from a photograph. Or imagine a jet fighter training simulation that could look not “just pretty good,” but be so exact that you couldn’t distinguish the simulated scenery from the real thing. Or suppose a physician could see tumors one-third the size of what could previously be identified. These things are not only possible; they have already begun. The new frontier in graphics realism has been established with developments to the OpenGL® application programming interface (API), released by SGI (NYSE: SGI) and the OpenGL Architecture Review Board (ARB). The OpenGL® 1.5 specification includes the revolutionary OpenGL® Shading Language, official ARB extensions that are expected to form the foundation of the upcoming OpenGL® 2.0 version of this cross-platform, open-standard API for advanced 3D graphics.
How is this "revolutionary OpenGL(r) Shading Language" any different from Cg or, better still, freakin' HLSL? Plus, these are ARB extensions, not additions to the core (well, not yet, but still). Is it just me, or was this press release written by either Scott McNealy or the Iraqi Minister of Information?
quote: “OpenGL 1.5, and the OpenGL Shading Language in particular, does for the next generation of graphics what OpenGL did for the first generation in the early ’90s. It will fundamentally change the industry,” said Shawn Underwood, director of marketing, Visual Systems Group, SGI.
Brian J
DL Vacuum - A media file organizer I made | Mumbling Miles - A band I was the guitarist/vocalist for
Methinks it's just a stepping stone and something to whet appetites. It's written by a marketing director, so what can you expect?

I don't know anything about this OpenGL shading language, however. I doubt it's any different from Cg, or any other HLSL for that matter... they all let you do the same thing in the end, don't they? Maybe it's easier to set up than Cg (hopefully, even though I managed to do it while, er, intoxicated, so it's not that hard hehe)

I guess I'll have to check it out once I get out of work.
LOL! Thanks Brian, I needed that! I'm now much happier this morning.

I have not looked at it in detail yet but I hope this is the same as what was planned for the so-called OpenGL 2.0. I'd hate to see it go in two different ways.

[edited by - mauman on July 28, 2003 10:14:58 AM]
--- CyberbrineDreams
Suspected implementation of the Windows idle loop: void idle_loop() { *((char*)rand()) = 0; }
In terms of syntax, it's very similar to Cg or HLSL. One of the fundamental differences is that both Cg and HLSL are compiled to a lower-level assembly language before being passed to the driver. With OGLSL, the high-level code is passed directly to the driver. The advantage of this is that it gives IHVs more freedom to innovate under the hood, since they aren't limited to a (relatively narrow) assembly interface. It also allows for greater portability.
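
If anyone wants to see what "passing the high-level code directly to the driver" looks like in practice, here is a minimal sketch using the GL_ARB_shader_objects entry points. It assumes your extension loader has already resolved the function pointers, and the two shader strings are just trivial placeholders I made up, not anything from the spec:

#include <GL/gl.h>
#include <GL/glext.h>  /* assumes the ARB entry points are already resolved */

/* trivial pass-through vertex shader and solid-color fragment shader */
static const char *vsrc =
    "void main() { gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex; }";
static const char *fsrc =
    "void main() { gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0); }";

GLhandleARB buildProgram(void)
{
    /* the raw GLSL text goes straight to the driver; there is no assembly step */
    GLhandleARB vs = glCreateShaderObjectARB(GL_VERTEX_SHADER_ARB);
    glShaderSourceARB(vs, 1, &vsrc, NULL);
    glCompileShaderARB(vs);   /* the driver compiles it however it likes */

    GLhandleARB fs = glCreateShaderObjectARB(GL_FRAGMENT_SHADER_ARB);
    glShaderSourceARB(fs, 1, &fsrc, NULL);
    glCompileShaderARB(fs);

    GLhandleARB prog = glCreateProgramObjectARB();
    glAttachObjectARB(prog, vs);
    glAttachObjectARB(prog, fs);
    glLinkProgramARB(prog);
    return prog;   /* bind it later with glUseProgramObjectARB(prog) */
}

Compare that with Cg/HLSL, where you (or cgc/fxc) produce low-level assembly first and feed that to the driver instead.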

I'll know much more about it after the course on it today at SIGGRAPH.

Incidentally, NVIDIA will be adding an OGLSL profile to the Cg compiler; i.e. you'll be able to compile Cg into OGLSL. This is to take advantage of what I just mentioned.
When can we expect an implementation?


[My site|SGI STL|Bjarne FAQ|C++ FAQ Lite|MSDN|Jargon]
Ripped off from various people
quote: Original post by wild_pointer
When can we expect an implementation?


Catalyst 3.4 drivers were OpenGL 2.0 'compatible', in the sense that I did run the OpenGL 2.0 examples on my Radeon 8500 with those drivers (even if it failed to render most things).

Later ATI drivers don't support OpenGL 2.0 anymore.

-* So many things to do, so little time to spend. *-
I've seen it running on ATI hardware, so they must have a prototype driver at least. I've also seen it on 3Dlabs hardware, and they have a publicly available driver.
I have the Catalyst 3.6 drivers, and they still expose the old GL2 glslang interface, and that still works. There are some differences between GL2 and the ARB version, so you cannot run the example exe from 3Dlabs, but it was pretty easy to set it up and test it. I guess that a 'real' implementation should come with the next set of drivers (3.7), but maybe not exposed in the extension string, just like VBO was at the beginning.
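
If you want to poke at a driver like that yourself, one way (on Windows, at least) is to not trust the extension string and just ask for the function pointers directly; if they come back non-NULL, the entry points are there even if they aren't advertised. A rough sketch, assuming the ARB names rather than the old GL2 ones (the PFN... typedefs come from glext.h):

#include <windows.h>
#include <GL/gl.h>
#include <GL/glext.h>
#include <string.h>
#include <stdio.h>

static PFNGLCREATESHADEROBJECTARBPROC pglCreateShaderObjectARB;
static PFNGLSHADERSOURCEARBPROC       pglShaderSourceARB;
static PFNGLCOMPILESHADERARBPROC      pglCompileShaderARB;

/* returns 1 if the glslang entry points are actually present */
int initGlslEntryPoints(void)
{
    const char *ext = (const char *) glGetString(GL_EXTENSIONS);
    if (ext && strstr(ext, "GL_ARB_shading_language_100"))
        printf("glslang is advertised in the extension string\n");

    /* even if it isn't advertised, the pointers may still be there */
    pglCreateShaderObjectARB = (PFNGLCREATESHADEROBJECTARBPROC)
        wglGetProcAddress("glCreateShaderObjectARB");
    pglShaderSourceARB = (PFNGLSHADERSOURCEARBPROC)
        wglGetProcAddress("glShaderSourceARB");
    pglCompileShaderARB = (PFNGLCOMPILESHADERARBPROC)
        wglGetProcAddress("glCompileShaderARB");

    return pglCreateShaderObjectARB != NULL
        && pglShaderSourceARB != NULL
        && pglCompileShaderARB != NULL;
}

(You need a GL context current before calling wglGetProcAddress, obviously.)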
Sounds like it's only going to create more problems, as the drivers will get incredibly complex and will stuff up even more than they do now, causing huge amounts of frustration and confusion for all of us writing accelerated apps.
yay.

And this whole point of "being able to optimize it more" sounds like a whole load of B$ to me.
If this were true, wouldn't Intel's next Pentium execute C code natively, as it would be soooo much faster...
[sorry for the deliberately inflammatory statements, but it had to be said...]
quote: Original post by aboeing
And this whole point of "being able to optimize it more" sounds like a whole load of B$ to me.
If this were true, wouldn't Intel's next Pentium execute C code natively, as it would be soooo much faster...
[sorry for the deliberately inflammatory statements, but it had to be said...]


The C program will be compiled by the driver when it is loaded. The hardware will not execute C code natively; it will still run machine code/asm (however, it [the driver] can freely choose which asm/machine code to run).
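
You can actually watch that happen: compile at load time, then ask the driver whether it accepted the source and read back its log. A small sketch against the ARB shader-objects interface (assumes the entry points are resolved and 'shader' is a handle from glCreateShaderObjectARB):

#include <stdio.h>
#include <GL/gl.h>
#include <GL/glext.h>

/* returns 1 if the driver's compiler accepted the shader, prints its log otherwise */
int checkCompile(GLhandleARB shader)
{
    GLint ok = 0;
    glGetObjectParameterivARB(shader, GL_OBJECT_COMPILE_STATUS_ARB, &ok);
    if (!ok)
    {
        char log[4096];
        GLsizei len = 0;
        glGetInfoLogARB(shader, sizeof(log), &len, log);
        fprintf(stderr, "driver rejected the shader:\n%.*s\n", (int) len, log);
    }
    return ok;
}

How much real code generation happens here versus at link time is up to the driver, which is exactly the kind of freedom being argued about.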

A more appropriate analogy would be that of open-source software, where you compile it as part of the installation process. If you were to choose Intel's own compiler, you would get more optimized code (although, as a run-time vs. install-time comparison, it's slightly flawed).

I do believe that some speed improvements are to be had over Cg for non-NVIDIA cards, but that is more due to the fact that no closely matching assembler exists for most cards, and the Cg compiler was designed with NVIDIA's cards in mind. It would be nice if vendors still exposed up-to-date assembler implementations, though.

This topic is closed to new replies.
