Vertex Shaders

Everywhere I look these days, whether at demos or games or whatever, people are talking about this new thing called vertex shaders. The name tells me something, but not nearly as much as its popularity does. So, the point is: what are they, and what do you use them for? Are they an extension? Thanks

They are an extension. What they do is store a list of vertices in video RAM, so you don't waste time resending them to the video card. This is usually used for static objects, because frequent changes nullify the advantages.

quote:
What they do is store a list of vertices in video RAM, so you don't waste time resending them to the video card. This is usually used for static objects, because frequent changes nullify the advantages.

You obviously don't know what you're talking about.
There are extensions that store vertices in video memory, but they're not vertex shaders.

Shaders (either vertex or fragment) allow one to reimplement part of the rendering pipeline. So you can replace the standard algorithms with your own, and they'll execute in the GPU.
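To make that concrete, here is a minimal sketch of what a vertex shader (an ARB vertex program, in OpenGL terms) looks like. This one just reproduces the standard transform and colour pass-through that the fixed pipeline would do anyway; a real program would replace these computations with its own:

```
!!ARBvp1.0
# Transform the vertex position by the tracked modelview-projection matrix
PARAM mvp[4] = { state.matrix.mvp };
TEMP pos;
DP4 pos.x, mvp[0], vertex.position;
DP4 pos.y, mvp[1], vertex.position;
DP4 pos.z, mvp[2], vertex.position;
DP4 pos.w, mvp[3], vertex.position;
MOV result.position, pos;
# Pass the primary colour through unchanged
MOV result.color, vertex.color;
END
```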

Edit:

The OpenGL extension names from ARB are ARB_vertex_program and ARB_fragment_program, if I'm not mistaken...

[edited by - HellRaider on June 4, 2003 8:02:09 PM]

HellRaider's right.
Vertex shaders (known as vertex programs in OpenGL, since the word "shading" has no real meaning for vertices) override computations in the pipeline.

Usually, computations are fixed by a certain state, like lighting, fog, automatic texture coordinate generation, and such. Vertex programming allows the user to perform his own computations instead of relying on the fixed function pipeline.

There are many, many cases where vertex programming comes in handy: volumetric fog and volumetric lights, keyframe interpolation, bump mapping, and much more. Vertex programs are often combined with fragment programs (also known as pixel shaders in DirectX).
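As an illustration of the keyframe interpolation case: a vertex program can blend two keyframe positions before transforming the result. A sketch only; binding the second keyframe to generic attribute 1 and putting the blend factor in program.env[0] are choices of this example, not something fixed by the extension:

```
!!ARBvp1.0
PARAM mvp[4] = { state.matrix.mvp };
PARAM blend  = program.env[0];   # x holds the interpolation factor in [0,1]
ATTRIB pos0  = vertex.position;
ATTRIB pos1  = vertex.attrib[1]; # second keyframe, bound by the application
TEMP p;
# p = pos0 + blend.x * (pos1 - pos0)
SUB p, pos1, pos0;
MAD p, p, blend.x, pos0;
# Transform the blended position to clip space
DP4 result.position.x, mvp[0], p;
DP4 result.position.y, mvp[1], p;
DP4 result.position.z, mvp[2], p;
DP4 result.position.w, mvp[3], p;
MOV result.color, vertex.color;
END
```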

In fact, the ARB_fragment_program extension is pretty easy to use.
First of all, you have to create a program and load it:
glGenProgramsARB(1, &program);
glBindProgramARB(GL_FRAGMENT_PROGRAM_ARB, program);
glProgramStringARB(GL_FRAGMENT_PROGRAM_ARB, GL_PROGRAM_FORMAT_ASCII_ARB, strlen(fragment_program_string), fragment_program_string);
where "fragment_program_string" is the program, represented by a string. It looks like:

!!ARBfp1.0
# Simple fragment program sample
ATTRIB texCoord = fragment.texcoord[0];
ATTRIB col = fragment.color.primary;
OUTPUT outColor = result.color;
TEMP texel2D;
# Sample the 2D texture and modulate it with the primary color
TXP texel2D, texCoord, texture, 2D;
MUL outColor, texel2D, col;
END

Then, every time you want to render primitives with that shader, enable fragment programs and bind it:
glEnable(GL_FRAGMENT_PROGRAM_ARB);
glBindProgramARB(GL_FRAGMENT_PROGRAM_ARB, program);
// render the model here.
glDisable(GL_FRAGMENT_PROGRAM_ARB);

And before your application exits, it is recommended to delete the program:
glDeleteProgramsARB(1, &program);

As you will have guessed, the hardest part is defining the program string that matches the shader of your dreams.

What cards support vertex and fragment programs? (Can a Radeon 7000?)
I wanted to comment and say that UT2K3 uses vertex and fragment programs quite extensively, although I don't think UT2K3 has any GL support.

Vertex programs are supported in hardware on the GeForce3 and up (but not the GeForce4 MX) and on the Radeon 8500 and up.
Nvidia also supports vertex programs on ALL GeForce series cards, but with a software fallback for cards like the GF2 and GF4 MX. Even in software it runs pretty well.

Fragment programs are supported in hardware on the whole GeForce FX series and on the Radeon 9500 and up (the Radeon 9000 does NOT support fragment programs).
Nvidia also provides a software fallback (known as the "NV30 emulation tool") for other GeForce cards, but it is just for testing purposes, because fragment programs are really far too slow in software.

As for UT2K3, in fact it does not use pixel and vertex shaders intensively, despite the very neat effects in the game. That allows the game to run on old cards (AFAIR, UT2K3 only needs a DirectX 6 compliant card) while still having good graphics.

UT2K3 does have GL support. A GL renderer has been written for Linux (the game was released for Windows and Linux simultaneously) and is now used for a (recent) Macintosh version of the game. Also, I've heard that there is OpenGL support for Windows users, which is recommended for systems that have problems with Direct3D.

Thanks! This post was very helpful for me too, since I was looking for a detailed description of how to implement vertex/pixel shaders in OpenGL. And it would be really great if any of you had a link to a homepage showing how to code those shader programs! Any link or book title giving a good start into the world of vertex/pixel shaders (with OpenGL) would be great.

You will hardly find a website that tells you "how to make shaders". Rather, you will find examples, sometimes with source code.

I submitted one to NeHe Productions a while ago. It used a vertex program for the cel-shading tutorial. I think you can still find it in the downloads section under the letter C at nehe.gamedev.net.

Since some of you seem to be interested in shaders, I think I'll submit another demo I've done around cel-shading, but this time with both vertex and fragment programs, and even vertex buffer objects.

[edited by - vincoof on June 24, 2003 2:43:58 PM]

Two useful things to look up would be Nvidia's Cg (highly recommended - it supports Nvidia and ATI cards with shader support), and possibly Microsoft's High Level Shading Language (though I'm pretty sure that's D3D9).

Hi vincoof. I've been messing with shaders in DX8 for about 4 months or so, on and off, and I've always written my programs like this, in the native shader language (pay no attention to the actual code, just the style):

const char Dot3VertexShader[] =
"vs.1.1 //Shader version 1.1\n"
"m4x4 r0, v0, c4\n"
"mov oPos, r0\n"

"m4x4 r4, v0, c20\n" //Translate vertex into world coordinates

//Tangents into object space

"m4x4 r3, v9, c14\n" //Translate Tangent
"dp3 r3.w, r3, r3\n"
"rsq r3.w, r3.w\n"
"mul r3, r3, r3.w\n"

"m4x4 r5, v3, c14\n" //Translate Normal

"dp3 r5.w, r5, r5\n"
"rsq r5.w, r5.w\n"
"mul r5, r5, r5.w\n" //Re-normalise it

//Make binormal
"mul r0,r3.zxyw,r5.yzxw \n"
"mad r7,r3.yzxw,r5.zxyw, -r0 \n"

"dp3 r7.w, r7, r7\n"
"rsq r7.w, r7.w\n"
"mul r7, r7, r7.w\n"

//Compute light vector L
"add r10, c12, -r4\n"

//Normalize L
"dp3 r10.w, r10, r10\n"
"rsq r10.w, r10.w\n"
"mul r10, r10, r10.w\n"

"dp3 r6.x, r3, r10 // transform light vector, \n"
"dp3 r6.y, -r7, r10 // by TBN matrix \n"
"dp3 r6.z, r5, r10 // r6 is light vector in tangent space\n"

"dp3 r6.w, r6, r6\n"
"rsq r6.w, r6.w\n"
"mul r6, r6, r6.w\n"

"mul r6.xyz, r6.xyz, c33.x\n"
"add oD0.xyz, r6.xyz, c33.x\n"

"mov oT0, v7 //Texture unit 0 \n"
"mov oT1, v7 //Texture unit 1 \n";


I would like to implement shaders in OpenGL as well; I was just wondering what language your example was in?

So, if I got that right: a vertex shader program is executed after every call to glVertex, and it modifies the vertex's properties, like color and such. But what about pixel shaders? Do they run for every pixel you render and modify the pixel's properties, or what? There can be an awful lot of pixels at high resolutions - wouldn't that be pretty slooooow? (By the way, can I change a pixel's color using a pixel shader? For example, to write a night vision mode for a game, so that my pixel shader turns everything green or something?)

And what's that thing with per-pixel lighting and per-vertex lighting? Is that done by shader programs or something completely different - some extension?

danielk: the program you wrote can be ported to an ARB program pretty easily. ARB vertex and fragment programs have a syntax somewhat similar to DirectX vertex and pixel shaders.
About the "language of my example": it is an ARB fragment program, which can be guessed from the starting line "!!ARBfp1.0".

ZMaster: a vertex program does not really modify the vertex; rather, it bypasses certain computations that would otherwise be performed by a fixed scheme (known as the fixed function pipeline). Vertex programs bypass the vertex processing part of the pipeline. This part is responsible for transforming vertex coordinates from object space to clip space, for computing lighting, for generating texture coordinates (when TEXTURE_GEN_S is enabled, for instance), for computing the fog coordinate, and some other things like that. All of those computations are done per vertex.

Then, when three vertices are defined (in the case of rendering triangles), rasterization takes place: the renderer scans the lines that fill the triangle on screen, and for every pixel to fill, fragment processing is performed. If you have defined a fragment program, that program is executed; otherwise the standard fragment processing in OpenGL is: the texturing stage, then the color sum stage, then the fog stage (each stage can be skipped if disabled, obviously).

Fragment programs can be very slow for numerous computations at high resolutions, but executed in hardware they run pretty fast. Unfortunately, when performed in software, fragment programs are really far too slow. On the other hand, vertex programs can be executed in software with very good performance. That's why Nvidia provides a software implementation of the ARB_vertex_program extension for all GeForce cards that do not support vertex programs in hardware: it's still a bit slower than it would be in hardware, but that's affordable. They did the same for ARB_fragment_program, but only left it as an option (mainly for testing purposes), because the performance hit is unbearable.

Per-pixel lighting is done with the ARB_fragment_program extension combined with the ARB_vertex_program extension. But keep in mind that generally only one light is treated at a time. That is, for multiple lights, per-pixel lighting needs multiple passes (except in very special cases, as usual).
