Archived

This topic is now archived and is closed to further replies.

NaliXL

OpenGL 2.0 shading language compiler?

Recommended Posts

I feel a bit ashamed to ask this after living for half a year or so between OpenGL forums and code, but what is this? Okay, shading is the process of finding the right illumination for a triangle/quad/whatever, right? So the shading language is, I guess, the set of GL instructions you can use to control this shading process, right? Then what is this compiler? After all, isn't OpenGL all about libraries, with you using a compiler from any vendor? Thanks for clearing this up! PS: My apologies if my English is bad.

It's for vertex/pixel shaders. The idea is to have one language and one compiler, so we don't have to worry about using one of each from ATI and nVidia to stay compatible.

Wait, now you've got me confused! Does this mean there's going to be an OpenGL language for 2.0, and you won't be able to code OpenGL with C++?

nVidia and ATI have both released their own versions of what they call "vertex and pixel shaders." These are programs run by a special processing unit on the 'GPU' that perform specific instructions on either a per-vertex or per-pixel level. For example, one could write a vertex shader to create a transparent "shell" around a model, like the Quad Damage effect in Quake III (though Quake III does this without vertex shaders, on the CPU). One could use pixel shaders to create a quality bumpmapping effect. You can go to the vendors' websites (particularly nVidia's) to see them brag about what their vertex and pixel shaders can do.

These vertex and pixel shaders are written in an ASM-like language and uploaded to the graphics card. Although the main program can be written in C++, the vertex shader still must be sent in a low-level form.
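To make the ASM-like form concrete, here is a minimal sketch (not from the original thread) of uploading such a program, using the ARB_vertex_program extension as one example of this kind of interface; nVidia and ATI each had their own variants (NV_vertex_program, EXT_vertex_shader), so treat the exact names as illustrative assumptions.

```cpp
// Minimal sketch: uploading an ASM-style vertex program with the
// ARB_vertex_program extension. Error checking is omitted, and on
// Windows the ARB entry points must be fetched with wglGetProcAddress.
#define GL_GLEXT_PROTOTYPES
#include <GL/gl.h>
#include <GL/glext.h>
#include <cstring>

// Transforms each vertex by the modelview-projection matrix and
// passes the vertex colour through unchanged.
static const char* kVertexProgram =
    "!!ARBvp1.0\n"
    "PARAM mvp[4] = { state.matrix.mvp };\n"
    "TEMP pos;\n"
    "DP4 pos.x, mvp[0], vertex.position;\n"
    "DP4 pos.y, mvp[1], vertex.position;\n"
    "DP4 pos.z, mvp[2], vertex.position;\n"
    "DP4 pos.w, mvp[3], vertex.position;\n"
    "MOV result.position, pos;\n"
    "MOV result.color, vertex.color;\n"
    "END\n";

void loadVertexProgram()
{
    GLuint prog = 0;
    glGenProgramsARB(1, &prog);
    glBindProgramARB(GL_VERTEX_PROGRAM_ARB, prog);
    // The driver "assembles" the string and rejects it if malformed.
    glProgramStringARB(GL_VERTEX_PROGRAM_ARB, GL_PROGRAM_FORMAT_ASCII_ARB,
                       (GLsizei)std::strlen(kVertexProgram), kVertexProgram);
    glEnable(GL_VERTEX_PROGRAM_ARB);
}
```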

The authors of the "shader compiler" designed a high-level language for describing vertex and pixel shaders. Their compiler generates GPU code from shaders written in this high-level language.
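For contrast, here is a sketch of the same stage written against the high-level approach: the shader source is a C-like string, and the driver's built-in compiler turns it into GPU code at run time. The entry-point names below follow what eventually shipped as the OpenGL 2.0 API (glCreateShader, glShaderSource, and so on); at the time of this thread they were still proposals, so consider them assumptions rather than the final word.

```cpp
// Minimal sketch of the high-level path: a C-like shader compiled by
// the driver at run time. Entry-point loading and error checks omitted.
#define GL_GLEXT_PROTOTYPES
#include <GL/gl.h>
#include <GL/glext.h>

// A trivial fragment shader: modulate the interpolated vertex colour
// by a texture lookup.
static const char* kFragmentSource =
    "uniform sampler2D baseMap;\n"
    "void main()\n"
    "{\n"
    "    gl_FragColor = gl_Color * texture2D(baseMap, gl_TexCoord[0].xy);\n"
    "}\n";

GLuint buildFragmentProgram()
{
    GLuint shader = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(shader, 1, &kFragmentSource, 0);
    glCompileShader(shader);   // the "shading language compiler" runs here, inside the driver

    GLuint program = glCreateProgram();
    glAttachShader(program, shader);
    glLinkProgram(program);
    glUseProgram(program);     // replaces the fixed-function fragment stage from here on
    return program;
}
```

The same source is meant to work on any vendor's hardware, since each driver ships its own back end for the one shared language.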

You do not need vertex and/or pixel shaders in order to light your geometry using traditional vertex-lighting techniques, nor do you need them to perform texture mapping or any other standard technique. They are useful, though, and the shader compiler seems useful because it helps you author them. However, since it sounds like you're new to OpenGL, I'll give you this advice: wait a bit before you use vertex and pixel shaders. If you're not new and simply haven't been keeping up with the lingo that the marketing people have been inventing, then I apologise for misinterpreting your post.

I hope that answers your question.

You could have a look at the hardcore game programming articles on GameDev about vertex/pixel shaders...

They are for DirectX, but maybe you'll get the idea when you read them.

And with OpenGL 2.0 you can still use it with VC++. The compiler NaliXL is asking about is only for things like pixel and vertex shaders (you know, those new capabilities of the GeForce 3/4 and the new ATI cards).

As far as I know those instructions are still a little basic and are really compiled in a separate program, but you can achieve very realistic and nice effects with them.

[edit] I just missed TerranFury's post. Is it really difficult to program with PS/VS? I know what they are and have seen some pictures of what you can make with them, but how difficult is it?

[edited by - Scheermesje on May 3, 2002 4:22:16 PM]

Ah, now I think I get it. So in a while, NVidia and ATI will both release new OpenGL 2.0-compatible drivers, with a shitload of nice effects, compiled for OpenGL 2.0 with this shading language compiler? So this stuff is mainly for hardware manufacturers, isn't it?

OK, cool, just wanted to make sure I could still use C++, and on Linux.

Thanks for clearing that up!

No, this stuff is not mainly for hardware manufacturers.

The programmer who writes the application can use these instructions to access the hardware more directly. But they are in a sort of ASM and need to be compiled.

Okay, so I can create those effects myself, but I'll need to know this assembler-like language? Hmm, I think I'll have something new to study over the next few months...

Well, the intent of this is that eventually, you *won't* need to know the assembly-like language. You can write your shaders in this higher-level language and have them work on all hardware that supports it.

Oh, and it'll probably be a while before ATI and nVidia release OpenGL 2.0 compatible drivers. I'd guess that the spec is at least a year away from even being finalized. Part of the reason for 3D Labs releasing this compiler was to push things forward a bit.

>>Well, the intent of this is that eventually, you *won't* need to know the assembly-like language. You can write your shaders in this higher-level language and have them work on all hardware that supports it.<<

This is correct; that's why I feel learning the current ASM instructions is a waste of time. If you want to see a more realistic example of how OpenGL 2.0 shaders are going to look, see here:
graphics.stanford.edu/projects/shading/ (IIRC the examples even worked on my TNT!)

>>Oh, and it'll probably be a while before ATI and nVidia release OpenGL 2.0-compatible drivers. I'd guess that the spec is at least a year away from even being finalized. Part of the reason for 3D Labs releasing this compiler was to push things forward a bit.<<

from a pdf

• Jun 02 – First draft of complete spec
• Jul 02 – SIGGRAPH 2002 activities, public review draft of spec?
• Sep 02 – Final spec changes?
• Dec 02 – Ratified spec?

http://uk.geocities.com/sloppyturds/gotterdammerung.html

I hope they release the OpenGL 2.0 drivers fast.

At this point I sometimes have the feeling that DirectX is ahead of OpenGL, and I hope that OpenGL 2.0 will be better (man, my English is bad... I wanted to say something different :D).

I have seen some screenshots of things you can do with the VS/PS, and they look really cool, so maybe it's not a waste of effort to learn them now...

Does anyone know a good starting point for learning to use the VS/PS in OpenGL?

If you want to do it on nVidia hardware, go to the nVidia page; if you want to do it on ATI hardware, go to the ATI page.
They have enough info there.

What's the power of PS/VS?

You can _REPLACE_ the current vertex programs / pixel programs with your own.

That means you can transform your geometry on screen with your own code (for example non-linear projections, matrix-palette skinning, automatically generated smooth animations, etc.).
You can light your geometry on your own (meaning do your own vertex lights, like a line light or something if you need it, or set up texcoords and everything so you can then do the lighting per pixel in a pixel program: bumpmapping, per-pixel lighting, environment maps on bumpmaps, etc.).

Or you can be a freak and simply create stuff never seen before.

Play Halo and you get an early idea of what pixel shaders can do (take a look at the walls in the alien ships, for example, or at the spotlights of you and your team shining on the bumpy surfaces in the dark).
Play Wreckless to get an early idea of what pixel shaders can do (they do image post-processing to make it look like an old movie, to make everything bright glow, and much other stuff).

GPUs are becoming programmable. That means you need some compiler to generate the program.

_THIS_ is the compiler for OpenGL 2.0: a C-style compiler instead of the assemblers we have had until now.
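As a rough illustration of the per-pixel lighting mentioned above, here is what a simple diffuse bumpmapping pass might look like in such a C-style language. The syntax follows GLSL as it was later finalised, so treat the exact keywords and built-ins as assumptions relative to this thread's timeframe.

```cpp
// Sketch of per-pixel diffuse lighting in a C-style shading language,
// kept as a source string the way it would be handed to the driver.
static const char* kPerPixelLighting =
    "uniform sampler2D normalMap;   // tangent-space normals packed into [0,1]\n"
    "uniform vec3 lightDir;         // light direction, tangent space, normalised\n"
    "void main()\n"
    "{\n"
    "    vec3 n = texture2D(normalMap, gl_TexCoord[0].xy).xyz * 2.0 - 1.0;\n"
    "    float diffuse = max(dot(normalize(n), lightDir), 0.0);\n"
    "    gl_FragColor = vec4(diffuse * gl_Color.rgb, gl_Color.a);\n"
    "}\n";
```

The equivalent effect in the assembly-style interfaces takes considerably more instructions and typically has to be adapted per vendor, which is exactly the problem the high-level compiler is meant to solve.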

If you want to know what all this is capable of, watch Shrek.
And for the creative guys:
http://tyrannen.starcraft3d.net/loprecisionraytracingonatiradeon8500.jpg
This is an image from a RAYTRACER running completely on the GPU, on a Radeon 8500.
This is done with 8 bits of precision per vector component.
Next year GPUs will support floating point; then this can be done in full precision.

The future is bright, the future is programmable. (And we'll all go for raytracing: finally easy shadowing of everything, finally easy and accurate lighting schemes with correct reflections and refractions, and, with enough supersampling, all with soft shadows, soft reflections, etc. And in 10 years we'll have real-time global illumination, like our real world.)

"take a look around" - limp bizkit
www.google.com

quote:
Original post by zedzeek
from a pdf

• Jun 02 – First draft of complete spec
• Jul 02 – SIGGRAPH 2002 activities, public review draft of spec?
• Sep 02 – Final spec changes?
• Dec 02 – Ratified spec?

Yeah, I've seen that, but I don't put a whole lot of faith in it, since it's 3DLabs' plan, and there is some resistance to bringing it about that quickly, especially from NVIDIA, who want to move to 2.0 more gradually.

quote:
Yeah, I've seen that, but I don't put a whole lot of faith in it, since it's 3DLabs' plan, and there is some resistance to bringing it about that quickly, especially from NVIDIA, who want to move to 2.0 more gradually.

Now that's tactics! Slowing OpenGL 2.0 development down so much that by the time it comes out, every gamer will think OpenGL is crappy shit (DirectX will be better by then) and won't want it anymore. And NVidia will by then only have to develop DirectX drivers. Saves 'em time and money...

Guest Anonymous Poster
quote:
Original post by NaliXL
[quote]Yeah, I've seen that, but I don't put a whole lot of faith in it, since it's 3DLabs' plan, and there is some resistance to bringing it about that quickly, especially from NVIDIA, who want to move to 2.0 more gradually.

Now that's tactics! Slowing OpenGL 2.0 development down so much that by the time it comes out, every gamer will think OpenGL is crappy shit (DirectX will be better by then) and won't want it anymore. And NVidia will by then only have to develop DirectX drivers. Saves 'em time and money...


That's silly. nVidia employees are among the foremost proponents of OpenGL. They just want to make sure they get their say in the process.

quote:
Original post by NaliXL
Now that's tactics! Slowing OpenGL 2.0 development down so much that by the time it comes out, every gamer will think OpenGL is crappy shit (DirectX will be better by then) and won't want it anymore. And NVidia will by then only have to develop DirectX drivers. Saves 'em time and money...

Their motivations are quite the opposite, actually. What they're worried about is that even if everyone goes gung-ho on getting 2.0 out as soon as possible, it's still going to take a while, and that in the meantime OpenGL will continue to lose ground to Direct3D, so that when 2.0 finally arrives, it'll be too late. They're proposing to release at least one more update (1.4) before 2.0 comes out to keep things competitive.

Matrox is developing GL 2.0-supporting hardware.
ATI is developing GL 2.0-supporting hardware.
3Dlabs is developing GL 2.0-supporting hardware.
nVidia is as well.

Why does nVidia try to block it?
Because of the Xbox.
If the major revolution in 3D graphics comes (and GL 2.0 will be the beginning of it), their Xbox investment will get into trouble because of PC hardware. They have made their own enemy.

That's why the GF4 is nothing better than the GF3 (except for speed).
That's why nVidia does not want technically new hardware before 2003.

If they don't move on, they'll lose. For sure.

The ATI Radeon 10'000 is coming (the R300 chip), and ATI is already more advanced than nVidia technically (the world's best GPU is the Radeon 8500 (R200): possibly not the fastest, but the best in features).

http://tyrannen.starcraft3d.net/loprecisionraytracingonatiradeon8500.jpg

ATI can already raytrace. Every piece of GL 2.0 hardware will have the power to raytrace with floating point precision (this image uses 8-bit precision for the vector components!).

GL 2.0 is the beginning of the next evolution. Pushing rasterizers is finished; now push stream processors.
Why?
With stream processors you can:
rasterize (what we do now) very fast
raytrace (with some nice data structure) very fast
encode/decode video streams very fast
process physics routines very fast (update 10 to 100 million particles per second... or do you want 1 gig? 10 gig? 1 tera? we'll see)
process images (Photoshop in hardware, all effects in realtime)

And we can finally drop the basic unit called the triangle.
The next basic units are float and float4.
Textures are vertex arrays are screen buffers are indices are Photoshop images are MP3 files are models are heightmaps are fast-Fourier-transformed waves from Titanic or "A Perfect Storm", or are simply textures, as before.

Finally the walls are beginning to break, faster than initially thought. Great, that's how PCs kick ass.

One CPU to coordinate, one CPU to calculate (previously called the GPU; now the SPU, streaming processor unit?).

And... do you want to program all this in assembler? No. So there it is: the shading language compiler.

"take a look around" - limp bizkit
www.google.com

quote:
Original post by davepermen
And we can finally drop the basic unit called the triangle.
The next basic units are float and float4.

Now that sounds interesting! Where can I find more about these float and float4 units?

At least I'm glad to hear that OGL 2.0 is coming along quite fast, although it sounds like I've got a lot to learn...

Well, it's easy:
You can set your data free: vertex arrays can be floats (as today) or ints, shorts, chars, whatever.
Texture data can be chars like now, or shorts, or ints, or floats as well (finally! yeah!).
The vertex pipelines run in 32-bit floating point precision.
The pixel pipelines run in 32-bit floating point precision as well!

The framebuffer is 128 bits per pixel, with 4 floats for RGBA...

That will unify everything.

And the programmability makes it possible to take the screen buffer as a vertex array as input, meaning you can create a vertex buffer and render onto it (updating physics or animations completely on the GPU), etc.
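A small sketch of that "everything is a float4" idea in host code: the same block of floating-point data handed to GL once as a vertex array and once as a texture. The calls below exist in plain OpenGL; floating-point *internal* texture formats and rendering back into a vertex buffer are assumptions about GL 2.0-class hardware and are not shown.

```cpp
// Sketch: one buffer of float4 data viewed as vertices and as texels.
#include <GL/gl.h>

struct float4 { float x, y, z, w; };

void useSameDataTwoWays(const float4* data, int count)
{
    // Interpret the data as vertex positions...
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(4, GL_FLOAT, 0, data);
    glDrawArrays(GL_POINTS, 0, count);

    // ...and upload the very same bytes as a count-by-1 RGBA texture,
    // one texel per float4 (power-of-two sizes assumed on older GL).
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, count, 1, 0,
                 GL_RGBA, GL_FLOAT, data);
}
```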

We'll have some fun then.

"take a look around" - limp bizkit
www.google.com

Who has 3Dlabs hardware?

Everyone on this site talks about ATI and nVidia, but it's 3Dlabs that suggested all these new features to the ARB.

I rarely see cards based on 3Dlabs GPUs. They seem quite unpopular.

__ALex_J_: they are not unpopular. They are EXPENSIVE!

You should never let your fears become the boundaries of your dreams.

hehe... 3DLabs... unpopular... hehehe... you're funny

Strangely enough, the graphics industry does still exist beyond the borders of Consumersville. All these features coming out on nVidia and ATI graphics cards are by no means new and daring... they have simply been pushed down from the workstation level to the financial reach of the average Joe.

[edit] - I'm sure many of you have already seen this, but for those who haven't (or still wish to read in awe), this is 3DLabs' latest effort to kickstart OpenGL 2.0... I am thinking I may well hold off on buying a new vid card for a little while.
*starts scraping pennies into a jar*

[edited by - Bad Monkey on May 5, 2002 9:21:22 PM]

quote:
Original post by davepermen
vertex arrays can be floats (as today) or ints, shorts, chars, whatever. Texture data can be chars like now, or shorts, or ints, or floats as well (finally! yeah!)

Okay, I know these data types. But why would using them drop the triangle as a basic unit? And what would the triangle be replaced with, then?
