What is PS/VS? And what can it do for my renderer?

Started by DarkSlayer; 8 comments, last by DarkSlayer 19 years, 9 months ago
I am sitting and designing my "perfect game engine". Well, not so perfect maybe. I am reading a lot about engines and graphics, and most of it is easy to grasp. I understand the math and a lot of the graphics stuff. But what really are vertex shading and pixel shading? I understand they are small programs running on the GPU, and can do stuff... but from there I have a problem really understanding how they are used.

My idea is to create a renderer that can be used for any 3D game without too much configuration or scripting. But the more I read about VS/PS, the more I get the feeling this is becoming impossible. I was thinking of having a renderer whose only duty was to render the world: you set the camera in the world, set how things should be shaded and lit, and so on. But I fear that VS/PS will make the renderer too specific. One way of making the renderer more general is to use heavy scripting, so that VS/PS can be created or heavily modified.

BUT... I really have no clue about VS/PS. I have googled, been at NeHe, read the shader articles I found here at GameDev... and yes, they are technical and fine... but I still don't know what VS/PS are, and most importantly, how to create a general renderer that can be used for FPS, RPG, and RTS games alike.
VS and PS won't make the renderer more specific. In fact, used correctly they should make it less so, as you won't have to hard-code shaders into the engine; just allow third parties to make them and provide a basic set to work with.

It's best to treat them as just another effect which can be applied to geometry, much like textures.

As for what they 'are', you answered that yourself: they are programs which run on the GPU. No more, no less.
Vertex shaders are used to perform 'per vertex' operations and pixel shaders are used to perform 'per pixel' operations. As to what those operations do, well, that's up to you; they can be anything from just transforming the vertex in a VS to outputting the color red in a PS.

How they are used is slightly implementation specific, but generally speaking you tell the API that you want to use a particular shader, pass it any variables which need passing, and then draw like normal, with your shaders doing the work as required.
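To make those two extremes concrete, here is a minimal sketch in C++ of what they would do. Real shaders are written in a shader language (HLSL, GLSL, or Cg), so the types and the Mul() helper here are invented for illustration, not any real API.

// A minimal sketch of the smallest useful "shaders", in plain C++.
struct Float4   { float x, y, z, w; };
struct Float4x4 { float m[4][4]; };

// Matrix * vector helper (column-vector convention assumed).
static Float4 Mul(const Float4x4& m, const Float4& v)
{
    return Float4{
        m.m[0][0]*v.x + m.m[0][1]*v.y + m.m[0][2]*v.z + m.m[0][3]*v.w,
        m.m[1][0]*v.x + m.m[1][1]*v.y + m.m[1][2]*v.z + m.m[1][3]*v.w,
        m.m[2][0]*v.x + m.m[2][1]*v.y + m.m[2][2]*v.z + m.m[2][3]*v.w,
        m.m[3][0]*v.x + m.m[3][1]*v.y + m.m[3][2]*v.z + m.m[3][3]*v.w };
}

// "Vertex shader": runs once per vertex, just transforms it to clip space.
Float4 MinimalVertexShader(const Float4& position, const Float4x4& worldViewProj)
{
    return Mul(worldViewProj, position);
}

// "Pixel shader": runs once per rasterized pixel, outputs a constant red.
Float4 MinimalPixelShader()
{
    return Float4{1.0f, 0.0f, 0.0f, 1.0f};  // RGBA
}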
Well, I understand pixel shaders, but not vertex shaders. I'm not able to visualize the concept. I mean, if I'm not mistaken, a vertex is where the edges (or simply lines) meet, so a vertex is just a point. So does it manipulate the edges in a group of vertices [such as vertex A, vertex B, and vertex C (a triangle)] and make it bigger, smaller, curvy, flat? And then you use the pixel shader to manipulate the actual image enclosed by those vertices?

Am I right, wrong... completely off the mark?

Clarity would be nice, thank you [grin]


A face, or polygon, in 3D is typically a triangle. A triangle is created by 3 points, or vertices. A vertex shader manipulates vertex data and state information to output values for the pixel shader to use.

Typical inputs to a vertex shader are (a rough data-layout sketch follows the two lists below):

Vertex Inputs
1. Position
2. Normal
3. Color
4. Texture Coordinates
5. Bone IDs and weights

Constant Inputs
1. World, View, Projection matrices
2. Light information
3. Texture transformations
4. Bone matrices
5. Fog information
6. Material properties (Diffuse, Specular, Emissive, Ambient)
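To give a feel for those two lists, here is a rough C++-side sketch of how that data might be laid out. The field names, the 4-bones-per-vertex limit, and the palette of 32 bone matrices are assumptions for illustration, not requirements of any API.

#include <cstdint>

struct Vec2 { float x, y; };
struct Vec3 { float x, y, z; };
struct Vec4 { float x, y, z, w; };
struct Mat4 { float m[16]; };

// Per-vertex inputs: stored once in the vertex buffer, read by the vertex shader.
struct VertexInput {
    Vec3     position;
    Vec3     normal;
    Vec4     color;
    Vec2     texCoord;
    uint8_t  boneIds[4];     // which bones influence this vertex
    float    boneWeights[4]; // how strongly each bone influences it
};

// Constant inputs: set once per draw call, shared by every vertex in the mesh.
struct ShaderConstants {
    Mat4 world, view, projection;
    Vec3 lightDirection;
    Vec4 lightColor;
    Mat4 textureTransform;
    Mat4 boneMatrices[32];   // skinning palette (size is arbitrary here)
    Vec4 fogParams;          // e.g. start, end, density, unused
    Vec4 materialDiffuse, materialSpecular, materialEmissive, materialAmbient;
};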

The vertex shader then performs all this math on the GPU (a simplified sketch follows the list):
1. Bone transformations
2. World, View, Projection
3. Directional, Point, and Spot lights
4. Texture coord generation (e.g. D3DTSS_TCI_CAMERASPACENORMAL)
5. Texture transform
6. Fog calculation
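Here is a simplified C++ sketch of part of that per-vertex work (one directional light and linear fog; skinning, texture generation, and the projection step are omitted). The helper types and the exact fog formula are assumptions, not taken from any specific API.

#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

static float Dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

struct VertexOut {
    Vec3  viewPosition;  // position after world*view (projection step omitted here)
    float diffuse;       // per-vertex light intensity
    float fog;           // 0 = fully fogged, 1 = no fog
};

VertexOut ShadeVertex(const Vec3& worldNormal, const Vec3& viewPosition,
                      const Vec3& lightDirection,   // unit vector pointing toward the light
                      float fogStart, float fogEnd)
{
    VertexOut out;
    out.viewPosition = viewPosition;

    // Directional diffuse: N dot L, clamped so back-facing vertices get no light.
    out.diffuse = std::max(0.0f, Dot(worldNormal, lightDirection));

    // Linear fog based on distance from the camera.
    float dist = std::sqrt(Dot(viewPosition, viewPosition));
    out.fog = std::min(1.0f, std::max(0.0f, (fogEnd - dist) / (fogEnd - fogStart)));
    return out;
}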

Ok, but why use a shader? The fixed pipeline does this for me.

1. Many fixed pipe implementations don't do bones.
2. You can do custom lighting models.
3. You can do per pixel lighting.
4. You can do setup for normal mapping.
5. You can customize your fog.
6. You can make your mesh "ripple".
7. You can output light as a texture coordinate (Toon shading)
8. You can extrude shadow volumes
9. You can make things sway in the wind.
10. Much, much, more.

You can do all this anyway by transforming everything in software. Shaders are a standard language that allows you to write code that runs on the GPU, freeing your CPU from performing these tasks AND saving your AGP bus from transferring new geometry all the time. You can upload a model with bone information once; when you want to render it in a new pose, you just set up a few matrices in constant registers and render.

Let's say you have trees that sway in the wind. Instead of transforming and uploading new tree geometry all the time, you have one copy pre-uploaded to the GPU, you just specify two constants for wind direction and wind intensity, and you have the shader figure out the rest. Also, unless your geometry is highly tessellated, the vertices for your next polygon are probably transformed before the card has finished filling the previous one, so all this work comes pretty much for free on the GPU too.
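A tiny sketch of the swaying-trees idea, in C++ standing in for shader code: the mesh stays on the GPU, and only the wind constants change per frame. The bend formula itself is an assumption; real vegetation shaders vary.

#include <cmath>

struct Vec3 { float x, y, z; };

// windDirection is a unit vector, windIntensity a small scalar; time drives the sway.
Vec3 SwayVertex(const Vec3& position, float heightAboveGround,
                const Vec3& windDirection, float windIntensity, float time)
{
    // Bend more the higher up the trunk we are, and oscillate over time.
    float bend = windIntensity * heightAboveGround * std::sin(time + position.x * 0.1f);
    return Vec3{ position.x + windDirection.x * bend,
                 position.y + windDirection.y * bend,
                 position.z + windDirection.z * bend };
}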
This actually makes sense...

Namethatnobodyelsetook (your nick stolen?? hehe): when you talk about bones, you mean bones as in skeletal animation of the model, right? That's what I think... OK.

_the_phantom_: You talk about having others make shaders, and then a system to put those shaders into the engine. How? And how do you control it so the right shader is used on specific parts of the world (models and particles)?

One question: "per pixel" and/or pixel shading. Is that stuff you apply to change the pixels in the texture on the poly? Or is it stuff you apply to the "pixels" on screen after you have rendered the scene (after rasterization)?

And just to make sure I understand:
A vertex (a point of the triangle) has a normal? So a triangle has 3 vertex normals plus the normal of the triangle itself? And vertex normals are used to calculate how much light the poly gets, for example?


By the way... thanks for the fast answer. I didn't expect one that fast and that good.
'Per pixel' operations occur during rasterization. In fact, 'per pixel' is a really bad name for it, because at the time the pixel shader runs they are still technically fragments, but the name seems to have stuck.

So, for example, you could make a pixel shader to add two textures together and output that as the color for that fragment.
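A per-fragment sketch of that example in C++ (standing in for pixel shader code): texelA and texelB are the colors already sampled from the two textures at this fragment, and the clamp to 1.0 is an assumption.

#include <algorithm>

struct Color { float r, g, b, a; };

Color AddTexturesPixelShader(const Color& texelA, const Color& texelB)
{
    return Color{ std::min(texelA.r + texelB.r, 1.0f),
                  std::min(texelA.g + texelB.g, 1.0f),
                  std::min(texelA.b + texelB.b, 1.0f),
                  1.0f };
}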

I'll let someone else handle the normals question; while I know how they work, I can't really explain it [oh]
Quote: Original post by DarkSlayer
_the_phantom_: You talk about having others make shaders, and then a system to put those shaders into the engine. How? And how do you control it so the right shader is used on specific parts of the world (models and particles)?


Shaders are just text, so the text can be loaded in and compiled (however it's done per API) to produce your shader.

The control is just like any other property applied to the object, such as a texture. When you go to draw that model, you enable the correct shader, much like you bind a texture, and then draw the model as normal.
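A hypothetical engine-side sketch of that pattern in C++. The Renderable and Renderer names and methods are invented for illustration, not a real API; the point is only that the shader is bound per object, exactly like a texture, before the draw call.

struct ShaderHandle  { int id; };
struct TextureHandle { int id; };

struct Renderable {
    TextureHandle texture;
    ShaderHandle  shader;   // assigned when the object/material is loaded
    int           meshId;
};

struct Renderer {
    void BindTexture(TextureHandle t) { /* API-specific */ }
    void BindShader(ShaderHandle s)   { /* API-specific: e.g. set the VS/PS pair */ }
    void DrawMesh(int meshId)         { /* API-specific draw call */ }
};

void DrawObject(Renderer& renderer, const Renderable& object)
{
    renderer.BindTexture(object.texture); // same pattern for textures...
    renderer.BindShader(object.shader);   // ...and for shaders
    renderer.DrawMesh(object.meshId);
}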

As for how, well, that's quite a complex subject in some regards. If you are going for a truly flexible engine it's not going to be easy; there is a thread on these boards about a material/shader implementation, a good 5 pages long, which talks about it (along with a few follow-up threads after it finished). Here is where you can find the thread; to find the others you'll have to search.
Bones means skeletal animation, or skinning, yes.

When I said per pixel lighting, I should have said "setup for per-pixel lighting".

In 3D your vertices usually have a normal which is the average of all the face normals around the vertex, with the exception of vertices at sharp edges. For example, on a cube each corner would have 3 vertices with independent normals. Face normals typically aren't used, or even stored, after the vertex normals are calculated.
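A C++ sketch of that averaging, assuming an indexed triangle list (three indices per triangle). This variant weights each face by its area because the cross product is left unnormalized; a plain average works too.

#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 Cross(const Vec3& a, const Vec3& b) {
    return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
}
static Vec3 Sub(const Vec3& a, const Vec3& b) { return { a.x-b.x, a.y-b.y, a.z-b.z }; }

std::vector<Vec3> ComputeVertexNormals(const std::vector<Vec3>& positions,
                                       const std::vector<unsigned>& indices)
{
    std::vector<Vec3> normals(positions.size(), Vec3{0, 0, 0});
    for (std::size_t i = 0; i + 2 < indices.size(); i += 3) {
        const Vec3& a = positions[indices[i]];
        const Vec3& b = positions[indices[i + 1]];
        const Vec3& c = positions[indices[i + 2]];
        Vec3 faceNormal = Cross(Sub(b, a), Sub(c, a)); // unnormalized: bigger faces count more
        for (int k = 0; k < 3; ++k) {
            Vec3& n = normals[indices[i + k]];
            n.x += faceNormal.x; n.y += faceNormal.y; n.z += faceNormal.z;
        }
    }
    // Normalize the accumulated sums to get unit vertex normals.
    for (Vec3& n : normals) {
        float len = std::sqrt(n.x*n.x + n.y*n.y + n.z*n.z);
        if (len > 0.0f) { n.x /= len; n.y /= len; n.z /= len; }
    }
    return normals;
}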

For standard lighting, you calculate light at each vertex and output it as color. Per pixel (on screen), the color is an interpolation of the 3 vertex colors. This interpolation makes the lighting change smoothly over the face, instead of abruptly at the edge of the polygon. It also means that lighting between vertices is just a linear change, with no way to display light changes smaller than the distance between the vertices... such as specular highlights.

Bear with the example, even if you don't understand the math...

A directional light is usually computed as normal dot lightdirection for diffuse light PLUS (normal dot halfvector) to the power of n for specular highlights.

Now, if we calculate this per vertex we get slightly wrong diffuse light, and very choppy specular light. If instead, we just output the normal and halfvector to the pixel shader, and have it do the dot products per pixel, we get accurate per pixel lighting.
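A C++ sketch of that formula (a Blinn-style diffuse-plus-specular term); the shininess exponent n is arbitrary here. Call it once per vertex and interpolate the result, and you get the choppy version; interpolate the normal and half vector instead and call it per pixel, and you get the smooth version.

#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

static float Dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// normal, lightDir, and halfVector are assumed to be unit vectors.
float EvaluateLight(const Vec3& normal, const Vec3& lightDir,
                    const Vec3& halfVector, float n /* e.g. 32 */)
{
    float diffuse  = std::max(0.0f, Dot(normal, lightDir));                 // N dot L
    float specular = std::pow(std::max(0.0f, Dot(normal, halfVector)), n);  // (N dot H)^n
    return diffuse + specular;
}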

Vertex shaders and pixel shaders often work together like this.
Linked examples: vertex specular vs. per-pixel specular.
OK, I understand better now.

I'll have to sit down and think through how to implement the shader stuff in the renderer pipeline, but I guess I won't get a full grasp of it until I'm coding the stuff.

Thanks guys, very nice of you to help.

