# [d3d9/10] Moving away from Effects

## Recommended Posts

16bit_port    180
After doing some research, it appears that effects are too slow and should be avoided.

With that said, I haven't found any tutorials on how to use shaders without effects, except for one. Any links to tutorials that use the shader pipeline and do not use effects (preferably D3D10) are greatly appreciated.

Anyway, I have quite a few questions...

When rendering with D3D9:
1) I don't need any technique code like

```hlsl
technique ShaderModel2_Technique
{
    pass P0
    {
        vertexShader = compile ...
        pixelShader  = compile ...
    }
}
```

since I'll tell it how to compile the shaders with D3DXCompileShaderFromFile, right?

2) How do I do multiple passes with the shaders?
What I mean is, with effects you would do something like this:

```cpp
for( unsigned m = 0; m < NumPasses; m++ )
{
    Effect->BeginPass( m );
    ...
}
```

how do I do it without effects?

When rendering with D3D10:
Do I need to have technique code inside my shader? Because when rendering with effects, I had to do something like this when creating the input layout:

```cpp
D3D10_PASS_DESC PassDesc;
Tech->GetPassByIndex( 0 )->GetDesc( &PassDesc );
m_pd3dDevice->CreateInputLayout( pLayout, dwNumInputElements, PassDesc.pIAInputSignature, ... );
```

Compiling and creating the shaders seems simple enough; I just need to call "D3DX10CompileFromFile" and "ID3D10Device::CreateVertexShader"/"ID3D10Device::CreatePixelShader", but how do I do everything else, like:

1) interfacing with (pointing to) the variables inside a shader? Is there an equivalent to ID3DXConstantTable?

2) setting the variables inside the shader

and most importantly

##### Share on other sites
Erik Rufelt    5901
Quote:
Original post by 16bit_port
After doing some research, it appears that effects are too slow and should be avoided.

They're not slow at all, and should not be avoided unless you have a good reason.

Quote:
2) How do I do multiple passes with the shaders?
What I mean is, with effects you would do something like this:
*** Source Snippet Removed ***
how do I do it without effects?

Render();
RenderAgain();

Quote:
When rendering with D3D10:
Do I need to have technique code inside my shader?

Techniques are effects, without effects you don't have techniques. You need to pass the byte-code you get from D3DCompile for the vertex-shader when creating the input layout.
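A minimal sketch of that flow in D3D10 (the file name, entry point, and the pDevice / layoutDesc / numElements variables are placeholders; error handling omitted):

```cpp
// Compile the vertex shader directly, no effect or technique involved.
ID3D10Blob* pVSBlob = NULL;
ID3D10Blob* pErrors = NULL;
D3DX10CompileFromFile( L"shader.hlsl", NULL, NULL, "VSMain", "vs_4_0",
                       0, 0, NULL, &pVSBlob, &pErrors, NULL );

ID3D10VertexShader* pVS = NULL;
pDevice->CreateVertexShader( pVSBlob->GetBufferPointer(),
                             pVSBlob->GetBufferSize(), &pVS );

// The same bytecode carries the input signature, replacing PassDesc.pIAInputSignature:
ID3D10InputLayout* pLayout = NULL;
pDevice->CreateInputLayout( layoutDesc, numElements,
                            pVSBlob->GetBufferPointer(),
                            pVSBlob->GetBufferSize(), &pLayout );
```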

Quote:
1) interfacing with (pointing to) the variables inside a shader? Is there an equivalent to ID3DXConstantTable?

Create a constant buffer and Map it to fill it with your data.
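For example, a rough D3D10 sketch (CBPerObject, cbData, and pDevice are placeholder names; error checks omitted):

```cpp
// Create a dynamic constant buffer sized to match the cbuffer in the shader.
D3D10_BUFFER_DESC desc;
desc.ByteWidth      = sizeof( CBPerObject );   // must be a multiple of 16 bytes
desc.Usage          = D3D10_USAGE_DYNAMIC;
desc.BindFlags      = D3D10_BIND_CONSTANT_BUFFER;
desc.CPUAccessFlags = D3D10_CPU_ACCESS_WRITE;
desc.MiscFlags      = 0;

ID3D10Buffer* pCB = NULL;
pDevice->CreateBuffer( &desc, NULL, &pCB );

// Fill it each time the data changes...
void* pData = NULL;
pCB->Map( D3D10_MAP_WRITE_DISCARD, 0, &pData );
memcpy( pData, &cbData, sizeof( CBPerObject ) );
pCB->Unmap();

// ...and bind it to the slot the shader expects.
pDevice->VSSetConstantBuffers( 0, 1, &pCB );   // slot 0 == register(b0)
```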

##### Share on other sites
16bit_port    180
Quote:
Original post by Erik Rufelt
They're not slow at all, and should not be avoided unless you have a good reason.

A couple of people said that effects were slow,
and
I remember reading that effects were not initially supported in D3D11 but were later added back for educational purposes or something like that.

[Edited by - 16bit_port on October 14, 2010 4:48:16 PM]

##### Share on other sites
MJP    19786
1. With both D3D9 and D3D10 you no longer have techniques or passes without effects. Instead you individually compile the shaders that would be part of each pass, by passing the shader profile and the entry point function to the compiler (which is essentially what you declare in your pass when using an effect).
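With D3D9, that per-shader compilation might look roughly like this (file name, entry point, and profile are placeholders; error handling omitted):

```cpp
// Compile one shader of a pass directly, no technique block needed.
LPD3DXBUFFER pCode = NULL;
LPD3DXCONSTANTTABLE pConstants = NULL;
D3DXCompileShaderFromFile( "shader.fx", NULL, NULL,
                           "VSMain",   // entry point, as you'd declare in the pass
                           "vs_2_0",   // shader profile
                           0, &pCode, NULL, &pConstants );

IDirect3DVertexShader9* pVS = NULL;
pDevice->CreateVertexShader( (DWORD*)pCode->GetBufferPointer(), &pVS );
```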

2. If you want some sort of pass functionality, you do it yourself. You must compile the shader or shaders required, and then for each pass set up the shader + constants and then draw your geometry. All BeginPass does is set the required shaders onto the device, and set all constants/resources that need to be bound for the shaders.
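A hand-rolled pass loop could be sketched like this in D3D9 (passVS, passPS, and the draw arguments are placeholders):

```cpp
// Each iteration does what BeginPass did: bind shaders, set state, draw.
for( unsigned m = 0; m < NumPasses; m++ )
{
    pDevice->SetVertexShader( passVS[m] );
    pDevice->SetPixelShader( passPS[m] );
    // ...set the constants/textures this pass needs...
    pDevice->DrawIndexedPrimitive( D3DPT_TRIANGLELIST, 0, 0,
                                   numVerts, 0, numTris );
}
```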

3. When creating an input layout, you just need a pointer to the shader bytecode. You'll get the bytecode back from the D3DCompile/D3DX10CompileFromFile functions.

4. D3D10 and D3D11 have an extensive reflection API, which can be used to gather constant data as well as all sorts of data about your shaders. Check out D3DX10ReflectShader and the ID3D10ShaderReflection interface. Of course with D3D10/D3D11 things are more complicated, because you have the concept of constant buffers rather than just individual constants. For best performance you really want to deal with constants in terms of their entire constant buffer, rather than always setting one constant at a time. But if you still want to go with the latter approach for simplicity, it's possible to get all of the information you need using reflection. If you want, you can look at any of my samples on my blog for a very simple approach to dealing with constant buffers, or you can look at Hieroglyph3 for a more complex approach that still lets you deal with individual constants/parameters.
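A rough sketch of walking a constant buffer's layout with the reflection API (pBlob is assumed to hold compiled bytecode; error handling omitted):

```cpp
// Reflect the compiled shader and enumerate the first constant buffer.
ID3D10ShaderReflection* pReflect = NULL;
D3DX10ReflectShader( pBlob->GetBufferPointer(), pBlob->GetBufferSize(), &pReflect );

ID3D10ShaderReflectionConstantBuffer* pCB = pReflect->GetConstantBufferByIndex( 0 );
D3D10_SHADER_BUFFER_DESC cbDesc;
pCB->GetDesc( &cbDesc );              // cbDesc.Size, cbDesc.Variables

for( UINT i = 0; i < cbDesc.Variables; i++ )
{
    ID3D10ShaderReflectionVariable* pVar = pCB->GetVariableByIndex( i );
    D3D10_SHADER_VARIABLE_DESC vDesc;
    pVar->GetDesc( &vDesc );          // vDesc.Name, vDesc.StartOffset, vDesc.Size
}
```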

5. With D3D9 you set constants using SetVertexShaderConstantF/SetPixelShaderConstantF, which sets a value onto one or more constant registers. You set textures using SetTexture, and sampler states using SetSamplerState.

With D3D10 and D3D11 you have to create the necessary constant buffer, Map it to set data into it, and then bind it to a slot for the shader stage that needs it (for instance you call PSSetConstantBuffers to bind constant buffers to the pixel shader stage). Textures are bound using *SetShaderResources, and sampler states are set using *SetSamplers.

6. To render with a shader you simply bind the shader for the appropriate shader stage, set the constants and resources for each stage, and then draw.
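Putting point 6 together as a D3D10 sketch (every object here is assumed to have been created earlier):

```cpp
// Bind everything a draw needs, stage by stage, then draw.
pDevice->IASetInputLayout( pLayout );
pDevice->IASetPrimitiveTopology( D3D10_PRIMITIVE_TOPOLOGY_TRIANGLELIST );
pDevice->VSSetShader( pVS );
pDevice->PSSetShader( pPS );
pDevice->VSSetConstantBuffers( 0, 1, &pCB );
pDevice->PSSetShaderResources( 0, 1, &pSRV );
pDevice->PSSetSamplers( 0, 1, &pSampler );
pDevice->DrawIndexed( indexCount, 0, 0 );
```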

##### Share on other sites
16bit_port    180
I'm looking at the declaration of IDirect3DDevice9::SetVertexShaderConstantF and I'm not entirely sure what I should be doing with the register parameter.

##### Share on other sites
MJP    19786
In D3D9 all constants are mapped to a constant register. They can be mapped explicitly in your shader code (by using the register binding syntax), or automatically by the compiler. If you have a constant table available you can use it to query the constant register for any particular constant.

Each constant register is a float4, and you have to set all 4 floats of a register at once. So if you have a constant that's only 1 float, you must set all 4 floats of that register (although it doesn't matter what you set the other components to, since they won't be used). Some constants can take up multiple registers, for instance a 4x4 matrix will occupy 4 registers. SetVertexShaderConstantF lets you set multiple registers at once, so you can set a single matrix in one call.
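As a sketch (register indices and variable names are made up; in practice you'd query the indices from the constant table or bind them explicitly):

```cpp
// A single float still occupies a full float4 register, so pad it out.
float time[4] = { elapsed, 0.0f, 0.0f, 0.0f };
pDevice->SetVertexShaderConstantF( 0, time, 1 );          // 1 register: c0

// A 4x4 matrix spans 4 consecutive registers, set in one call.
D3DXMATRIX wvp = world * view * proj;
pDevice->SetVertexShaderConstantF( 1, (float*)&wvp, 4 );  // 4 registers: c1..c4
```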

##### Share on other sites
16bit_port    180
Why explicitly map a constant to a register? Why not let the compiler do it for you?

Also, if I let the compiler do it for me, what do I specify in that argument then?

Quote:
 Some constants can take up multiple registers, for instance a 4x4 matrix will occupy 4 registers.

Any reason why anyone would do that instead of using IDirect3DDevice9::SetTexture?

##### Share on other sites
MJP    19786
Quote:
 Original post by 16bit_portWhy explicitly map a constant to a register? Why not let the compiler do it for you?

So that you can know the register for a constant without having to reflect it, and/or so you can make sure the same constant gets mapped to the same register across multiple shaders so that you don't have to set it multiple times. So for instance if you have a view matrix that never changes during a frame, you could set it once at the beginning of the frame and that's it.
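For example (names and register indices invented for illustration):

```hlsl
// Explicit bindings keep these constants at the same registers in every
// shader that declares them.
float4x4 ViewMatrix : register( c0 );  // occupies c0..c3
float4   LightDir   : register( c4 );
```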

Quote:
 Original post by 16bit_portAlso, if I let the compiler do it for me, what do I specify in that argument then?

Like I said, you can use the constant table to retrieve the register index.
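A sketch of that lookup (the constant name is a placeholder; pConstants is the ID3DXConstantTable returned by D3DXCompileShaderFromFile):

```cpp
// Ask the constant table where the compiler put a given constant.
D3DXHANDLE h = pConstants->GetConstantByName( NULL, "g_WorldViewProj" );
D3DXCONSTANT_DESC desc;
UINT count = 1;
pConstants->GetConstantDesc( h, &desc, &count );
// desc.RegisterIndex is what you pass as the StartRegister argument
```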

Quote:
 Original post by 16bit_portAny reason why anyone would do that instead of using IDirect3DDevice9::SetTexture?

Huh? What does setting a matrix as a constant have to do with setting a texture?

##### Share on other sites
16bit_port    180
Quote:
 Huh? What does setting a matrix as a constant have to do with setting a texture?

Whoops! Copied the wrong text. I meant IDirect3DDevice9::SetTransform - but then again I just remembered that that's for the fixed pipeline. I was quickly browsing through the device interfaces and hastily assumed that it was for the shaders without really looking at it. So... never mind about that.

##### Share on other sites
MJP    19786
Quote:
Original post by 16bit_port
Quote:
 Huh? What does setting a matrix as a constant have to do with setting a texture?

Whoops! Copied the wrong text. I meant IDirect3DDevice9::SetTransform - but then again I just remembered that that's for the fixed pipeline. I was quickly browsing through the device interfaces and hastily assumed that it was for the shaders without really looking at it. So... never mind about that.

Oh right, that makes a lot more sense. :P

And yeah it's for fixed-function processing only. For shaders you have to set world/view/projection transforms as shader constants.

##### Share on other sites
16bit_port    180
So I've looked into shader reflection and it seems to me that it serves 3 purposes (and please correct me if I'm wrong) :

1) used to grab data from the shaders (possibly for debugging purposes)
2) used to validate whether 2 or more shaders are compatible with each other. So in the case of a vertex shader and a pixel shader, it can be used to see if the VS' output struct matches the PS' input struct?

3) used to dynamically create constant buffers in the renderer? With shader reflection, you can see what constant buffers and their variables are in that particular shader (as well as texture buffers). But even though you know the variable's type, name, etc., how exactly do you use that data to dynamically create a constant buffer?

My other questions are :
It seems like shader reflection is completely optional and isn't required to create a constant buffer?

When defining a "hard-coded" constant buffer, do the variables in the application

```cpp
//.cpp
struct CB_VS_PER_OBJECT
{
    D3DXMATRIX m_WorldViewProj;
    D3DXMATRIX m_World;
};
```

have to exactly match the constant buffer in the shader (order and type)?

```hlsl
//.fx
cbuffer cbPerObject : register( b0 )
{
    matrix g_mWorldViewProjection : packoffset( c0 );
    matrix g_mWorld               : packoffset( c4 );
};
```

##### Share on other sites
MJP    19786
Quote:
Original post by 16bit_port
So I've looked into shader reflection and it seems to me that it serves 3 purposes (and please correct me if I'm wrong):
1) used to grab data from the shaders (possibly for debugging purposes)
2) used to validate whether 2 or more shaders are compatible with each other. So in the case of a vertex shader and a pixel shader, it can be used to see if the VS' output struct matches the PS' input struct?

Yup you can do both of those things. You can also look at the input signature of a vertex shader, and determine whether a vertex buffer contains the necessary elements. So for instance if a normal mapping shader requires a tangent frame, you can use reflection to determine that the shader requires tangents and binormals.

Quote:
Original post by 16bit_port
and lastly I'm not too sure about this one:
3) used to dynamically create constant buffers in the renderer? With shader reflection, you can see what constant buffers and their variables are in that particular shader (as well as texture buffers). But even though you know the variable's type, name, etc., how exactly do you use that data to dynamically create a constant buffer?
My other questions are:
It seems like shader reflection is completely optional and isn't required to create a constant buffer?
When defining a "hard-coded" constant buffer, do the variables in the application
*** Source Snippet Removed ***
have to exactly match the constant buffer in the shader (order and type)?
*** Source Snippet Removed ***

Ultimately the data that you copy into the mapped ID3D10Buffer has to exactly match the data layout specified in your shader code. So if your shader specifies a buffer with a float4x4 and a float3, then you need to copy 19 floats into that constant buffer (and your constant buffer has to be at least the size of 19 floats, rounded up to a multiple of 16 bytes).

So if you use reflection, you can figure out the size and offset (from the start of the constant buffer) of each variable. So sticking with our simple case of a float4x4 and a float3...if you reflected that data, you would know that you would need a constant buffer that's (16 * 4) + (3 * 4) bytes in size. You would also know the name of the constants...so for instance if you had some code that took the string name and a value, you could say "okay so the 'Color' parameter is 12 bytes in size, and is located 64 bytes into the constant buffer". Then you could use that offset and size to memcpy the data into the constant buffer.

If you want to hard-code the constant buffer, then the easiest way to do so is to create a struct whose memory layout exactly matches the layout in your shader. That way you can just use sizeof() to determine the size for your ID3D10Buffer, and also you can just memcpy in the whole struct when mapping the buffer (or you can just cast the pointer you get from Map to your struct type). You have to be careful though, because unless you specify otherwise in your shader code your constants will be aligned to float4 boundaries. So for instance if you had a few float2's in your constant buffer, you would have to align the corresponding D3DXVECTOR2's in your struct to 16 bytes. You can use __declspec(align(16)) with VC++ to do this.

EDIT: look for the page titled "Packing Rules for Constant Variables" in your SDK docs for the rules on how constants are aligned.

[Edited by - MJP on October 15, 2010 7:49:42 PM]