
Rendering with a null texture is just normal, valid rendering. It will work with a shader as long as the shader does not sample a texture, and that is easy to set up.

With the FVF, you had to use SetTransform to set your matrices before rendering, and that worked fine. But you have to distinguish between the fixed-function pipeline and the programmable pipeline. The fixed-function pipeline works with FVFs but does not let you program anything; it offers only basic render settings. The programmable pipeline obviously does. Introducing the programmable pipeline meant FVFs were replaced by vertex declarations, so you can think of FVFs as obsolete.

Because each shader holds a full set of device settings, anything set directly with, for example, SetTransform() will not make it to the shader. The shader has its own context. Because the pipeline is programmable, you have to specify what it should do, and this includes specifying how vertices are to be transformed. The most common vertex transform is, as you said, multiplying by the world, view and projection matrices in turn.

This can be done in two ways: you could multiply these matrices together in your program and send the result as a whole down to the shader. This is efficient, because you do the computation once and use it for all vertices. You could also send the three individual matrices to the shader and multiply them there. This is more flexible, because it lets you do more interesting things per vertex.

Sending things to a shader can also be done in multiple ways. One way is setting values directly with SetMatrix() and the like. This is inflexible, because the program calling those functions needs specific knowledge of the shader it is used with. A better way is to use HLSL semantics, which decouple the two.

So now, for the simplest approach, we choose to:
- send the world-view-projection matrix as one matrix
- send it by semantic
- simply transform and project a vertex

```hlsl
float4x4 matWorldViewProjection : WORLDVIEWPROJECTION;

// Define VS_INPUT and VS_OUTPUT here
// ...

VS_OUTPUT VSTransform( VS_INPUT vInput )
{
    VS_OUTPUT vOut = (VS_OUTPUT)0;
    vOut.Position = mul( float4( vInput.Position, 1 ), matWorldViewProjection );
    return vOut;
}

technique TSimple
{
    pass p0
    {
        // Insert other fixed-function pipeline settings here
        // ...
        VertexShader = compile vs_1_1 VSTransform();
        PixelShader  = NULL;
    }
}
```

Good luck! Greetz,

Illco

##### Share on other sites
Thanks, that did help a lot. Once I knew I was doing the shader correctly, I knew where else to look.

My problem was a pointer problem. I had the idea of keeping one copy of the vertices managed by the engine, with everything that uses them holding a pointer to that pointer (since it's stored in an array). Somewhere along the line I decided it would be "better" to have the vertex stream (that's what I call it) copy the vertices into the vertex buffer pointer itself, instead of calling memcpy in the renderer (which is what I was doing with the FVF). Maybe I'll change that later, who knows.

Anyway, thanks for the info. Now that I actually see something, I have a better idea of what is happening.
