Vertex shader questions!

Let me preface by saying that I've only been working in DirectX for about two weeks now, give or take, so if I ask silly questions just slap me around a little and I'll remember next time. I'm not new to programming in general, but some of this stuff is still kind of bizarre to me.

Okay, I'm currently working on a renderer for my engine. I had it working with FVF declarations a few days ago, but have since totally revamped its entire structure to add state tracking, resource management, generic engine types (instead of binding the engine to D3D), and a plethora of other nice things so it can hold its own when it gets out into the "real world." So far that's gone well. As near as I can tell everything is working fine, but I can't really test it, since nothing actually draws anything anymore; I removed my render-the-n00b-cube functionality for the new engine-controlled design.

So today I decided to adopt a vertex shader pipeline, which is working perfectly except for a few small problems. First let me explain that I am using HLSL and am compiling the shader at runtime from the file. I can change this later; porting the thing to asm or whatever is best shouldn't be too bad once I get it working (and figure out what it's doing). I haven't set up anything to manage loading/storing textures just yet, so for now we're going to hope that rendering with a null texture actually makes something show up (it worked fine for the FVF version). That also reminds me to make sure I'm not rendering to a null render target...

Anywho, a few things about how vertex shaders actually work are still confusing me. With the FVF, you had to use SetTransform to set your matrices before you render, which was working great. From what I understand, however, vertex shaders do not rely on this method for their transformations, so you have to do all of that in the shader, which is really why I need the shader to begin with. All I'm trying to do is get the vertex shader to render objects based on the matrices that already exist, so what exactly do I need to do to them to make the vertex shader happy? Currently, I'm sending a matrix to the shader via SetVertexShaderConstantF, and as far as I can tell that is working properly, though I'm unsure what matrix I should be sending. I've read something about concatenating the three usual matrices (world, view, and projection) and then transposing the result before sending it to the shader, but I'm not sure that's right (I've tried it, but it hasn't worked).

My next problem is what to do with that matrix once it's there. I don't really understand the math; I know the basics of vectors and matrices, I just don't completely follow. Eventually I will be working with normals and texture coordinates as well, but for now I just want the geometry to go where it's supposed to. I am already sending my full vertices, with normals and a set of texture coordinates, to the shader, but it appears only the positions are making it, and they all seem to be wrong (for example, a lot of them are raised to a very high or very low (negative) power, which can't be right). As far as I know the vertices are being copied correctly to the vertex buffer (I'm using the same method as I did with the FVF), so I'm not sure what is happening there, heh. So far I have yet to see anything render except my clear color, which is blue.
Rendering with a null texture is just normal, valid rendering. It will work with a shader as long as the shader does not require a texture, and that is easy to set up.

"With the FVF, you had to use SetTransform to set your matrices before you render, which was working great."

You have to distinguish between the fixed-function pipeline and the programmable pipeline. The fixed-function pipeline works with FVFs but does not let you program anything, just set basic render states. The programmable pipeline obviously does. Introducing the programmable pipeline meant FVFs were replaced by vertex declarations, so you can think of FVFs as obsolete.
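For example, a vertex declaration matching your position/normal/texcoord vertices might look like the sketch below. The offsets assume tightly packed float3/float3/float2 data; adjust them to your own vertex struct.

D3DVERTEXELEMENT9 elements[] =
{
    // stream, offset, type, method, usage, usage index
    { 0,  0, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_POSITION, 0 },
    { 0, 12, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_NORMAL,   0 },
    { 0, 24, D3DDECLTYPE_FLOAT2, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TEXCOORD, 0 },
    D3DDECL_END()
};

IDirect3DVertexDeclaration9* pDecl = NULL;
pDevice->CreateVertexDeclaration( elements, &pDecl );  // pDevice is your IDirect3DDevice9
pDevice->SetVertexDeclaration( pDecl );                // takes the place of SetFVF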

Because each shader holds a full set of device settings, anything set directly with, for example, SetTransform() will not make it to the shader; the shader has its own context. Because the pipeline is programmable, you have to specify what it should do, and that includes specifying how vertices are transformed. The most common vertex transform is, as you said, multiplying by the world, view, and projection matrices in turn.

This can be done in two ways. You can multiply the matrices together in your program and send the result as a whole down to the shader; this is efficient, because you do the computation once and reuse it for all vertices. Or you can send the three individual matrices to the shader and multiply them there; this is more flexible, because it lets you do more interesting things with the individual transforms.
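Here is a minimal sketch of the first approach, assuming a device pointer pDevice and matrices you already have lying around. HLSL packs matrix constants column-major by default, so when you upload raw registers with SetVertexShaderConstantF you transpose first; this is exactly the concatenate-then-transpose step you read about.

D3DXMATRIX matWVP = matWorld * matView * matProj;   // concatenate once per object

D3DXMATRIX matWVPTransposed;
D3DXMatrixTranspose( &matWVPTransposed, &matWVP );  // row-major -> column-major

// Registers c0..c3 here must match the constant layout your shader expects.
pDevice->SetVertexShaderConstantF( 0, (const float*)&matWVPTransposed, 4 );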

Sending things to a shader can also be done in multiple ways. One way is setting values directly, with SetMatrix() and the like. This is inflexible, because the program calling those functions needs to know the specifics of the shader it is used with. A better way is to use HLSL semantics, which decouple the two.
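With the D3DX effect framework, for instance, the application can look a parameter up by its semantic instead of its name. A rough sketch, assuming pEffect is an ID3DXEffect compiled from the file below:

// Find whatever parameter the shader author tagged WORLDVIEWPROJECTION.
D3DXHANDLE hWVP = pEffect->GetParameterBySemantic( NULL, "WORLDVIEWPROJECTION" );
if ( hWVP )
{
    D3DXMATRIX matWVP = matWorld * matView * matProj;
    pEffect->SetMatrix( hWVP, &matWVP );  // the effect framework handles any transpose
}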

So now, for the simplest approach we choose to:
- send the world-view-projection matrix as one matrix
- send it by semantic
- simply transform and project a vertex

float4x4 matWorldViewProjection : WORLDVIEWPROJECTION;

// Minimal input/output structures for this example
struct VS_INPUT
{
    float3 Position : POSITION;
};

struct VS_OUTPUT
{
    float4 Position : POSITION;
};

VS_OUTPUT VSTransform( VS_INPUT vInput )
{
    VS_OUTPUT vOut = (VS_OUTPUT)0;
    vOut.Position = mul( float4( vInput.Position, 1 ), matWorldViewProjection );
    return vOut;
}

technique TSimple
{
    pass p0
    {
        // Insert other fixed-function pipeline settings here
        // ...
        VertexShader = compile vs_1_1 VSTransform();
        PixelShader  = NULL;
    }
}
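Since you are already compiling from a file at runtime, here is a rough sketch of loading the effect above and drawing with it. The file name "simple.fx" and the draw calls are placeholders for your own.

ID3DXEffect* pEffect = NULL;
if ( SUCCEEDED( D3DXCreateEffectFromFile( pDevice, "simple.fx", NULL, NULL,
                                          0, NULL, &pEffect, NULL ) ) )
{
    pEffect->SetTechnique( "TSimple" );

    UINT numPasses = 0;
    pEffect->Begin( &numPasses, 0 );
    for ( UINT i = 0; i < numPasses; ++i )
    {
        pEffect->BeginPass( i );
        // ... SetStreamSource / SetVertexDeclaration / DrawIndexedPrimitive ...
        pEffect->EndPass();
    }
    pEffect->End();
}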


Good luck! Greetz,

Illco
Thanks, that did help a lot. Once I knew I was doing the shader correctly, I knew where else to look.

My problem was a pointer problem. I had the idea to have one copy of the vertices managed by the engine, with everything that uses them holding a pointer to that pointer (since they're stored in an array). Somewhere along the line I decided it would be "better" to have the vertex stream (that's what I call it) copy the vertices to the vertex buffer pointer itself, instead of calling memcpy in the renderer (which is what I was doing with the FVF). Maybe I'll change that later, who knows.
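For reference, the lock-and-memcpy pattern I was using with the FVF version looks roughly like this (Vertex, vertices, vertexCount, and pVB are my own placeholder names):

void* pData = NULL;
if ( SUCCEEDED( pVB->Lock( 0, vertexCount * sizeof(Vertex), &pData, 0 ) ) )
{
    memcpy( pData, vertices, vertexCount * sizeof(Vertex) );  // engine copy -> vertex buffer
    pVB->Unlock();
}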

Anyway, thanks for the info. Now that I actually see something, I have a better idea of what is happening.
