Vertex Shaders

You will hardly find a website that tells you "how to make shaders". Rather, you will find examples, sometimes with source code.

I submitted one to NeHe Productions a while ago: it used a vertex program for the cel-shading tutorial. I think you can still find it in the downloads section under the letter C at nehe.gamedev.net.

Since some of you seem to be interested in shaders, I think I'll submit another demo I've done around cel-shading, but this time with both vertex and fragment programs, and even vertex buffer objects.

[edited by - vincoof on June 24, 2003 2:43:58 PM]
Two useful things to look up would be NVIDIA's Cg (highly recommended - it supports NVIDIA and ATI cards with shader support), and possibly Microsoft's High Level Shading Language (though I'm pretty sure that's D3D9-only).

Hi vincoof. I've been messing with shaders in DX8 for about 4 months or so, on and off, and I've always written my programs in the native shader assembly, like this (pay no attention to the actual code, just the style):

const char Dot3VertexShader[] =
"vs.1.1 //Shader version 1.1\n"
"m4x4 r0, v0, c4\n"                 //Transform position to clip space
"mov oPos, r0\n"

"m4x4 r4, v0, c20\n"                //Transform vertex into world coordinates

//Tangents into object space

"m4x4 r3, v9, c14\n"                //Transform tangent
"dp3 r3.w, r3, r3\n"                //Re-normalise it
"rsq r3.w, r3.w\n"
"mul r3, r3, r3.w\n"

"m4x4 r5, v3, c14\n"                //Transform normal
"dp3 r5.w, r5, r5\n"
"rsq r5.w, r5.w\n"
"mul r5, r5, r5.w\n"                //Re-normalise it

//Make binormal (cross product of tangent and normal)
"mul r0, r3.zxyw, r5.yzxw\n"
"mad r7, r3.yzxw, r5.zxyw, -r0\n"

"dp3 r7.w, r7, r7\n"                //Normalise it
"rsq r7.w, r7.w\n"
"mul r7, r7, r7.w\n"

//Compute light vector L
"add r10, c12, -r4\n"

//Normalise L
"dp3 r10.w, r10, r10\n"
"rsq r10.w, r10.w\n"
"mul r10, r10, r10.w\n"

"dp3 r6.x, r3, r10 // transform light vector\n"
"dp3 r6.y, -r7, r10 // by the TBN matrix\n"
"dp3 r6.z, r5, r10 // r6 is the light vector in tangent space\n"

"dp3 r6.w, r6, r6\n"                //Re-normalise it
"rsq r6.w, r6.w\n"
"mul r6, r6, r6.w\n"

//Scale and bias L from [-1,1] into [0,1] (assuming c33.x holds 0.5)
"mul r6.xyz, r6.xyz, c33.x\n"
"add oD0.xyz, r6.xyz, c33.x\n"

"mov oT0, v7 //Texture unit 0\n"
"mov oT1, v7 //Texture unit 1\n";


I would like to implement shaders in OpenGL as well; I was just wondering what language your example was in?
Transformers Rulez!!
So, if I got that right: a vertex shader program is executed after every call to glVertex, and it modifies the vertex's properties, like color and such. But what about pixel shaders? Do they run for every pixel you render and modify the pixel's properties, or what? There can be quite a lot of pixels at high resolutions - wouldn't that be pretty slooooow? (By the way, can I change a pixel's color by using a pixel shader? For example, to write a night vision mode for a game, so that my pixel shader turns everything green or something?)

And what's that thing with per-pixel lighting and per-vertex lighting? Is that done by shader programs or by something completely different - some extension?
danielk: the program you wrote can be ported to an ARB program pretty easily. ARB vertex and fragment programs have a syntax somewhat similar to DirectX vertex and pixel shaders.
About the "language" of my example: it is an ARB fragment program, which can be recognized by its first line, "!!ARBfp1.0".

ZMaster: a vertex program does not really modify the vertex; rather, it replaces certain computations that would otherwise be performed by a static scheme, known as the fixed function pipeline. Vertex programs bypass the vertex processing part of the pipeline, which is responsible for transforming vertex coordinates from object space to clip space, computing lighting, generating texture coordinates (when TEXTURE_GEN_S is enabled, for instance), computing the fog coordinate, and other things like that. All of those computations are done per vertex.
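
To make that concrete, the smallest useful ARB vertex program is one that only redoes the transform the fixed function pipeline would otherwise perform, and passes the color and one set of texture coordinates through (an illustrative sketch):

static const char PassThroughVp[] =
"!!ARBvp1.0\n"
"PARAM mvp[4] = { state.matrix.mvp };\n"         //tracked modelview-projection matrix
"TEMP pos;\n"
"DP4 pos.x, mvp[0], vertex.position;\n"          //object space -> clip space
"DP4 pos.y, mvp[1], vertex.position;\n"
"DP4 pos.z, mvp[2], vertex.position;\n"
"DP4 pos.w, mvp[3], vertex.position;\n"
"MOV result.position, pos;\n"
"MOV result.color, vertex.color;\n"              //pass the primary color through
"MOV result.texcoord[0], vertex.texcoord[0];\n"  //and one set of texcoords
"END\n";

Everything else the fixed function pipeline normally computes (lighting, texgen, fog coordinate) is simply skipped unless the program computes it itself.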
Then, when three vertices are defined (in the case of rendering triangles), rasterization takes place: the renderer scans the lines that fill the triangle on screen, and for every pixel to fill, fragment processing is performed. If you have defined a fragment program, that program is executed; otherwise the standard fragment processing in OpenGL is the texturing stage, then the color sum stage, then the fog stage (each stage can be skipped if disabled, obviously).

Fragment programs can be very slow for numerous computations at high resolutions, but executed in hardware they run pretty fast. Unfortunately, when performed in software, fragment programs really are too slow. On the other hand, vertex programs can be executed in software with very good performance. That's why nVidia provides a software implementation of the ARB_vertex_program extension for all GeForces that do not support vertex programs in hardware: it's still a bit slower than it would be in hardware, but that's affordable. They did the same for ARB_fragment_program, but left it only as an option (mainly for testing purposes) because the performance hit is unbearable.
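
As for the night vision idea: yes, a fragment program can freely recolor every pixel it processes. A sketch (illustrative; the weights are the usual luminance constants) could convert each texel to a grey level and tint it green:

static const char NightVisionFp[] =
"!!ARBfp1.0\n"
"TEMP texel, luma;\n"
"TEX texel, fragment.texcoord[0], texture[0], 2D;\n"
"DP3 luma, texel, { 0.299, 0.587, 0.114, 0.0 };\n"   //grey level of the texel
"MUL result.color, luma, { 0.0, 1.0, 0.0, 1.0 };\n"  //keep only the green channel
"END\n";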
Per-pixel lighting is done with ARB_fragment_program used together with the ARB_vertex_program extension. But keep in mind that generally only one light is treated at a time; that is, for multiple lights, per-pixel lighting needs multiple passes (except in very special cases, as usual).
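
The multi-pass part typically looks like this (a sketch; setLightConstants() and drawScene() are hypothetical placeholders for your own code):

int i;
for (i = 0; i < numLights; ++i)
{
    if (i == 0)
    {
        glDisable(GL_BLEND);         //first light writes the base image
        glDepthFunc(GL_LESS);
    }
    else
    {
        glEnable(GL_BLEND);          //later lights add their contribution
        glBlendFunc(GL_ONE, GL_ONE);
        glDepthFunc(GL_EQUAL);       //only touch the visible surface
    }
    setLightConstants(i);            //hypothetical: upload light i's position/color
    drawScene();                     //hypothetical: draw the geometry
}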
