Decibit

D3D 10 Input Variable Size


Hi! I'm looking for a way to get the size of a HLSL shader input variable size. The functionality seems to be missing in DX10 though it is quite easy to do for a shader constant (using the interface ID3D10ShaderReflectionVariable). Does anybody know how to do that for the input variables? Any help would be much appreciated.
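
For reference, here is roughly how I do it for a constant (a minimal sketch; pBlob is assumed to be the compiled shader bytecode):

ID3D10ShaderReflection* pReflection = NULL;
D3D10ReflectShader( pBlob->GetBufferPointer(), pBlob->GetBufferSize(), &pReflection );

// Walk to the variable and read its size from the description.
ID3D10ShaderReflectionConstantBuffer* pCB = pReflection->GetConstantBufferByIndex( 0 );
ID3D10ShaderReflectionVariable* pVar = pCB->GetVariableByIndex( 0 );

D3D10_SHADER_VARIABLE_DESC var_desc;
pVar->GetDesc( &var_desc );
UINT size = var_desc.Size; // size of the constant in bytes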

ID3D10EffectTechnique* gp_Technique;

//
// LOAD THE TECHNIQUE
//

// The pass description reports the size in bytes of the pass's
// input signature blob.
D3D10_PASS_DESC pass_desc;
gp_Technique->GetPassByIndex( 0 )->GetDesc( &pass_desc );
SIZE_T x = pass_desc.IAInputSignatureSize;

Thanks for the reply!
Unfortunately the proposed method is not what I'm looking for. It gets the size of the entire input signature, but I need to query the sizes of the individual variables.
struct app2vs
{
    float3 v : Vertex;
    float2 t : TexCoord;
    float3 n : Normal;
};

Is it possible to query the sizes of Vertex, TexCoord and Normal and get accordingly 12, 8 and again 12 bytes?

I do not believe that you can query separate variables in the manner you are trying to. The input signature on the CPU side has no concept of the shader function's parameters beyond what you supply when setting up the input layout objects, which is why you need to set them up in the first place.

It should be very easy for you to determine the size of the variable, though: floats are 4 bytes, a float3 is 12 bytes, and so on. The HLSL documentation also describes the sizes of the types.
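
For example (a sketch; this assumes the usual 32-bit HLSL scalar types):

// float -> 4, float2 -> 8, float3 -> 12, float4 -> 16, etc.
UINT VectorSizeInBytes( UINT componentCount )
{
    return componentCount * 4; // 4 bytes per 32-bit component
}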

You should be able to get this per pass, per shader stage.



ID3DX11Effect::GetTechniqueByName();
ID3DX11EffectTechnique::GetPassByIndex();
ID3DX11EffectPass::Get<ShaderStage>Desc();
ID3DX11EffectShaderVariable::GetInputSignatureElementDesc();

Will that work? You can iterate over those and figure out the sizes and placement. Should work for the vertex shader just fine.
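
Roughly like this (a sketch; error checking is omitted, and the effect pointer and technique name are placeholders):

// Drill down from the effect to the pass's vertex shader.
ID3DX11EffectTechnique* pTech = pEffect->GetTechniqueByName( "Render" );
ID3DX11EffectPass* pPass = pTech->GetPassByIndex( 0 );

D3DX11_PASS_SHADER_DESC vs_desc;
pPass->GetVertexShaderDesc( &vs_desc );

// Walk the input signature elements until the call fails.
D3D11_SIGNATURE_PARAMETER_DESC param_desc;
for( UINT element = 0;
     SUCCEEDED( vs_desc.pShaderVariable->GetInputSignatureElementDesc(
         vs_desc.ShaderIndex, element, &param_desc ) );
     ++element )
{
    // param_desc.SemanticName, .Mask, .Register etc. describe this element.
}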

Quote:
Original post by DieterVW
You should be able to get this per pass, per shader stage.

ID3DX11Effect::GetTechniqueByName();
ID3DX11EffectTechnique::GetPassByIndex();
ID3DX11EffectPass::Get<ShaderStage>Desc();
ID3DX11EffectShaderVariable::GetInputSignatureElementDesc();

Will that work? You can iterate over those and figure out the sizes and placement. Should work for the vertex shader just fine.

Unfortunately the data structure returned by GetInputSignatureElementDesc contains no field that describes the size.

Quote:
Original post by mososky
It should be very easy for you to determine the size of the variable though, floats are 4 bytes, a float3 is 12 bytes, etc. There is also HLSL documentation which describes the size of the types.

I don't know the variable types beforehand. The DX SDK lets you determine the basic component type (int, float, etc.) but not how many of those components there are.

Actually, the input signature does contain everything necessary to figure out what is being streamed in.

Just to help with the reference, here is the D3D11_SIGNATURE_PARAMETER_DESC definition.


typedef struct D3D11_SIGNATURE_PARAMETER_DESC {
    LPCSTR SemanticName;
    UINT SemanticIndex;
    UINT Register;
    D3D10_NAME SystemValueType;
    D3D10_REGISTER_COMPONENT_TYPE ComponentType;
    BYTE Mask;
    BYTE ReadWriteMask;
    UINT Stream;
} D3D11_SIGNATURE_PARAMETER_DESC;



The D3D11_SIGNATURE_PARAMETER_DESC tells you many things. One understandable difficulty in using this is that the parameters will show up across many registers due to the way streamed data is packed. The packing rules are probably not well known but are important to making streamed data work.

First, to determine the data type you can look at the .ComponentType member of the desc object. This will tell you whether the data is an int, a uint, or a float.

Next, streamed input registers come in sets of four 32-bit components. When looking at the desc object you can analyze the .Mask member to determine which of the xyzw components are being used by the current parameter. Combining this with the previous type info, you can start to determine whether you have a float3, an int2, etc.

Now things get a little more complicated depending on what it is that you want to know. If all you are wondering is how big the total input stream is per shader invocation, scan all of the desc objects for the largest .Register number, then look at the mask bits used in that register (OR the masks together if several parameters share it). The total size in bytes is 4 * (4 * .Register + numbitssetin(.Mask)), since every register holds four 4-byte components. The stream will have used that much space to transfer in the data.
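
In code, something like this (a sketch; descs is assumed to be the array of signature element descs gathered as above):

// Count how many xyzw components a register mask uses.
UINT NumBitsSet( BYTE mask )
{
    UINT count = 0;
    for( ; mask != 0; mask >>= 1 )
        count += mask & 1;
    return count;
}

// Total stream size per invocation: all registers before the last
// one count in full, plus the components used in the last register.
UINT TotalInputSize( const D3D11_SIGNATURE_PARAMETER_DESC* descs, UINT count )
{
    UINT lastReg = 0;
    for( UINT i = 0; i < count; ++i )
        if( descs[i].Register > lastReg )
            lastReg = descs[i].Register;

    BYTE lastMask = 0; // OR together the masks of everything in that register
    for( UINT i = 0; i < count; ++i )
        if( descs[i].Register == lastReg )
            lastMask |= descs[i].Mask;

    return 4 * ( 4 * lastReg + NumBitsSet( lastMask ) ); // 4 bytes per component
}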

To figure out a specific parameter's size you have to do a bit more work. A parameter such as a float4x4 is too big to fit in one set of stream registers, so it is spread across several. In this case it is spread across 4 registers and requires 4 D3D11_SIGNATURE_PARAMETER_DESC objects to describe it. The data will appear in a contiguous set of registers to simplify things a bit. Each of these desc objects will have the same .SemanticName and an incremented .SemanticIndex; in this case the indexes would be 0-3. Reconstructing the float4x4 input parameter requires combining all of these desc objects.
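
A sketch of that reconstruction (a hypothetical helper, reusing NumBitsSet from above; it sums the used components across every desc that shares the parameter's semantic name):

// Size in bytes of one named parameter. A float4x4 contributes four
// descs (SemanticIndex 0-3); their used components are summed.
// strcmp comes from <string.h>.
UINT ParameterSize( const D3D11_SIGNATURE_PARAMETER_DESC* descs, UINT count,
                    const char* semanticName )
{
    UINT components = 0;
    for( UINT i = 0; i < count; ++i )
        if( strcmp( descs[i].SemanticName, semanticName ) == 0 )
            components += NumBitsSet( descs[i].Mask );
    return components * 4; // every used component is 32 bits
}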

Since there are packing rules things can get a little trickier. For instance, if we start with something like the following for our input signature:


float4 main( float p0 : Param, int2x2 p1 : FunVar, float p2 : Another ) : SV_Position
{
    float4x4 ret = (float4x4)0;
    ret._11 = p1._11;
    ret._12 = p1._12;
    ret._21 = p1._21;
    ret._22 = p1._22;
    ret._13 = p0;
    ret._44 = p2;
    return mul( ret, float4( p1._11, p1._12, p1._21, p1._22 ) );
}





We will receive 4 D3D11_SIGNATURE_PARAMETER_DESC objects to analyze. They will be something like:


{ "Param", 0, 0, D3D10_NAME_UNDEFINED, D3D10_REGISTER_COMPONENT_FLOAT32, 0x1, 0, 0 },
{ "FunVar", 0, 0, D3D10_NAME_UNDEFINED, D3D10_REGISTER_COMPONENT_SINT32, 0x6, 0, 0 },
{ "Another", 0, 0, D3D10_NAME_UNDEFINED, D3D10_REGISTER_COMPONENT_FLOAT32, 0x8, 0, 0 },
{ "FunVar", 1, 1, D3D10_NAME_UNDEFINED, D3D10_REGISTER_COMPONENT_SINT32, 0x6, 0, 0 }


Now that may not be exactly what the compiler does, but it's something the compiler can do when packing these variables. To figure out that FunVar was an int2x2 you would have to put all of this together. Notice how FunVar is packed into the yz components of two different registers.

If you use fxc.exe to compile your shader on the command line, you will see a printout of the input and output signatures, which shows exactly how the packing turned out for your shader. That should be enough to figure out the size information you need from the signatures.
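
For example, something like this (the target profile and entry point are whatever your shader actually uses):

fxc /T vs_4_0 /E main MyShader.hlsl

The disassembly listing that fxc prints starts with an "// Input signature:" table showing each element's name, index, mask, register, and format; add /Fc out.asm if you want the listing written to a file instead.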


[Edited by - DieterVW on March 10, 2010 12:47:50 PM]

Quote:
Original post by DieterVW
Next, streamed input registers come in sets of four 32-bit components. When looking at the desc object you can analyze the .Mask member to determine which of the xyzw components are being used by the current parameter. Combining this with the previous type info, you can start to determine whether you have a float3, an int2, etc.

Excellent! That's exactly what I was looking for! Thanks for the wonderful explanation. I've just checked that the method works with DX10 as well.

