
Managing input layouts


Laval B    12387

Hello everyone

 

When loading a vertex shader, I'm using the reflection API to build the corresponding input layout. To avoid creating unnecessary layouts, I want to cache them in an associative container, which in turn requires a key. What would be a good way to build a key that matches a layout, using only the data used to create the layout?
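Roughly, the cache I have in mind looks like this (a minimal sketch; the 64-bit LayoutKey alias is just a placeholder, since building a proper key is exactly what I'm asking about):

#include <cstdint>
#include <unordered_map>
#include <d3d11.h>
#include <wrl/client.h> // Microsoft::WRL::ComPtr

// Placeholder key type -- how to build it from the layout data is the question.
using LayoutKey = std::uint64_t;

// One input layout per distinct key; looked up whenever a vertex shader is loaded.
class InputLayoutCache
{
public:
    ID3D11InputLayout* Find(LayoutKey key) const
    {
        auto it = m_layouts.find(key);
        return it != m_layouts.end() ? it->second.Get() : nullptr;
    }

    void Insert(LayoutKey key, Microsoft::WRL::ComPtr<ID3D11InputLayout> layout)
    {
        m_layouts.emplace(key, std::move(layout));
    }

private:
    std::unordered_map<LayoutKey, Microsoft::WRL::ComPtr<ID3D11InputLayout>> m_layouts;
};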

Juliean    7077

You could literally just build a key from the layout data by packing the attribute declarations bitwise into an integer, like so:

unsigned long long getAttributeKey(const VertexAttribute& attrib) // get a key for a single input attribute
{
    // Each property gets its own bit range: semantic at bit 0, slot at bit 1, type at bit 5,
    // numElements at bit 7, buffer at bit 10, instanceDivisor at bit 11.
    return attrib.semantic + (attrib.slot << 1) + (attrib.type << 5) + (attrib.numElements << 7) + (attrib.buffer << 10) + (attrib.instanceDivisor << 11);
}

sys::Int128 getKey(PrimitiveType type, const IGeometry::AttributeVector& vAttributes) // build combined key for primitive type & attributes
{
    sys::Int128 key;
    key.Add(0, 7, (long long)type); // primitive type occupies the first 7 bits

    unsigned int numAttribute = 0;
    for(auto& attribute : vAttributes)
    {
        const unsigned long long attributeKey = getAttributeKey(attribute);

        key.Add(7 + numAttribute * 12, 12, attributeKey); // each attribute gets its own 12-bit slice
        numAttribute++;
    }

    return key;
}

Don't get weirded out by the syntax; I'm using a custom 128-bit integer class (key.Add(0, 7, value) just writes the 7 bits of value at bit position 0). Depending on which attribute properties you want to handle (instancing isn't something you can reflect anyway) and how many attributes your vertex shaders can have, you can either use a built-in 64-bit integer type or you'll need a larger data structure.
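For example, with a built-in 64-bit type the same idea would look roughly like this (a sketch that reuses getAttributeKey and VertexAttribute from above and assumes each attribute key fits in its 12-bit slice):

#include <cstdint>
#include <vector>

std::uint64_t getKey64(unsigned int primitiveType, const std::vector<VertexAttribute>& vAttributes)
{
    std::uint64_t key = primitiveType;              // bits 0-6: primitive type
    std::uint64_t shift = 7;
    for(const VertexAttribute& attribute : vAttributes)
    {
        key |= getAttributeKey(attribute) << shift; // one 12-bit slice per attribute
        shift += 12;                                // only valid while shift + 12 <= 64
    }
    return key;
}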

 

EDIT: I forgot to mention this explicitly, so just so it's clear... you first need to bring the reflected values into a usable integer range for this to work (I've spent too much time with my abstraction layers, so I didn't think about it at first). What that means is that you map the reflected values to a custom enum. Take the semantics of the attribute (SV_POSITION, TEXCOORD, etc.): you map the strings to a custom enum value, and depending on how many different semantics you want to support, you can see how many bits you'll need in your key.

Edited by Juliean

Laval B    12387

unsigned long long getAttributeKey(const VertexAttribute& attrib) // get a key for a single input attribute
{
    return attrib.semantic + (attrib.slot << 1) + (attrib.type << 5) + (attrib.numElements << 7) + (attrib.buffer << 10) + (attrib.instanceDivisor << 11);
}

 

In the D3D11_INPUT_ELEMENT_DESC structure, "SemanticName" is a pointer (LPCSTR), which would make the result non-deterministic. The idea of an integer key is appealing, though.

 

Sorry, I didn't see your edit...

 

Would you mind explaining what Int128::Add does? I'm not sure I understand.

Edited by Laval B

Juliean    7077

In the D3D11_INPUT_ELEMENT_DESC structure, "SemanticName" is a pointer (LPCSTR), which would make the result non-deterministic. The idea of an integer key is appealing, though.

 

Sorry, I just figured I should mention that and edited it in. See the explanation above; you would do something like this:

enum class AttributeSemantics
{
    POSITION = 0, TEXCOORD, UNKNOWN
};

AttributeSemantics convertSemantic(LPCSTR semantic)
{
    if(!strcmp(semantic, "SV_POSITION"))
        return AttributeSemantics::POSITION;
    else if(!strcmp(semantic, "TEXCOORD"))
        return AttributeSemantics::TEXCOORD;

    // and so on...
    return AttributeSemantics::UNKNOWN; // unhandled semantic
}

This gives you the value of the bit section for "semantic"; now do the same for all the other attribute properties that matter for the key, put them together, and you are done.
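Concretely, pulling those values out of the reflection data could look something like this (a rough sketch; it reuses convertSemantic() from above, and the exact bit layout is just an example):

#include <cstdint>
#include <d3d11shader.h>

std::uint64_t buildLayoutKey(ID3D11ShaderReflection* reflection)
{
    D3D11_SHADER_DESC shaderDesc = {};
    reflection->GetDesc(&shaderDesc);

    std::uint64_t key = 0;
    for(UINT i = 0; i < shaderDesc.InputParameters; ++i)
    {
        D3D11_SIGNATURE_PARAMETER_DESC param = {};
        reflection->GetInputParameterDesc(i, &param);

        const std::uint64_t semantic = (std::uint64_t)convertSemantic(param.SemanticName);
        const std::uint64_t index    = param.SemanticIndex;

        // 12-bit slice per input parameter, as in getKey() above; fold in whatever other
        // properties matter to you (component type, mask, ...) the same way.
        key |= (semantic | (index << 4)) << (i * 12);
    }
    return key;
}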

 


Would you mind explaining what Int128::Add does? I'm not sure I understand.

 

All it pretty much does is a bit shift, like in the first function. The only reason I need it is, as I said, that I need more than 64 bits, for which I have a custom class. Int128::Add is equivalent to:

unsigned long long key = 0;
key |= attributeKey << (7 + numAttribute * 12); // write the 12-bit slice at its offset

for a 64-bit integer variable. Whether this is enough depends on your needs; with 7 bits for the primitive type and 12 bits per attribute, you can fit 4 input attributes into such a 64-bit key. If you need more, you need a larger integer class (or, if you have to/want to support an arbitrary number of attributes, a variable-sized integer class; or somebody else has an even better idea for this case).

 

EDIT: Alternatively, in the case of an arbitrary number of input attributes, you can always just store a custom struct with a vector of attribute keys for each layout, with a custom compare operator, if you know what I mean.

#include <vector>

struct LayoutKey
{
    std::vector<unsigned int> vAttributesKeys; // 32 bits is more than enough for each attribute

    bool operator<(const LayoutKey& key) const; // so the map can actually sort those
};
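The compare operator can simply defer to std::vector, which already compares element by element; for example (the cached value type is just a placeholder):

#include <d3d11.h>
#include <map>
#include <wrl/client.h>

bool LayoutKey::operator<(const LayoutKey& key) const
{
    // std::vector provides lexicographic, element-by-element comparison.
    return vAttributesKeys < key.vAttributesKeys;
}

// One cached input layout per distinct key.
using LayoutCache = std::map<LayoutKey, Microsoft::WRL::ComPtr<ID3D11InputLayout>>;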
Edited by Juliean

SeanMiddleditch    17565
InputLayout is easily the worst part of D3D11.

A typical approach is to hard-code the vertex formats of your app to a small selection. This maps well to D3D9, D3D11, D3D12, OpenGL, Vulkan, etc.

With this approach in D3D11 you can make dummy shaders, compile them, copy their bytecode into a header or the like, and then use the appropriate selection from those hard-coded input layouts when compiling a material's shaders.
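For illustration, the runtime side of that ends up being something like this (a sketch; the element list is just one example format, and g_DummyPosNormUV_VS stands in for the bytecode array copied into the generated header):

#include <d3d11.h>

// Example hard-coded vertex format: position, normal, texcoord.
static const D3D11_INPUT_ELEMENT_DESC kPosNormUV[] =
{
    { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0,                            D3D11_INPUT_PER_VERTEX_DATA, 0 },
    { "NORMAL",   0, DXGI_FORMAT_R32G32B32_FLOAT, 0, D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_VERTEX_DATA, 0 },
    { "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT,    0, D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_VERTEX_DATA, 0 },
};

// These would come from the generated header holding the compiled dummy shader's bytecode.
extern const unsigned char g_DummyPosNormUV_VS[];
extern const size_t        g_DummyPosNormUV_VS_Size;

HRESULT CreatePosNormUVLayout(ID3D11Device* device, ID3D11InputLayout** outLayout)
{
    return device->CreateInputLayout(kPosNormUV, ARRAYSIZE(kPosNormUV),
                                     g_DummyPosNormUV_VS, g_DummyPosNormUV_VS_Size,
                                     outLayout);
}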

Hodgman    51324

I have a config file where I describe vertex structures (the inputs to vertex shaders) and stream formats (the format in which vertex attributes are stored in memory).

In the same config file, I then have a list of which vertex structures will be used with which stream formats.

I can then use that config file to build D3D11 input layouts, D3D9 vertex declarations, GL VAO configs, etc...

 

Instead of using reflection on my vertex shaders, I just force them to declare which vertex-format from the config file they're using. I actually use a tool to convert the config file into a HLSL header file that contains these vertex structures.

 

When importing a model, I can see which shaders it's using, which vertex formats those shaders use, and then I can generate a list of possible stream-formats that will be compatible with those shaders. The model importer can then pick the most suitable stream-format from that list, convert the Collada/etc data to that stream format, and then record the names of the appropriate InputLayouts to use at runtime.

Laval B    12387

 

 

InputLayout is easily the worst part of D3D11.

Yes, I agree.

A typical approach is to hard-code the vertex formats ...

That's what I was doing before, but a more data-driven solution is needed.

 

 

I have a config file where I describe vertex structures (the inputs to vertex shaders) and stream formats ...

 

Yes, that's a very good and simple idea. That way it stays flexible and data-driven; there's no need to modify and recompile the engine every time a vertex format is added or modified.

 
 
Thank you all for the answers and the great ideas.

Laval B    12387

I have a config file where I describe vertex structures (the inputs to vertex shaders) and stream formats ...

 

Just out of curiosity, do you keep pre-compiled dummy shaders, generated in advance and used only to create the input layouts, or do you just compose a dummy vertex shader and compile it with D3DCompile when you create the layout?

Edited by Laval B

Hodgman    51324
Yeah, at the same time that I generate that header file, I also create an HLSL file containing a dummy vertex shader function for each type of vertex input structure. I then compile all of these and package them up into a binary file, along with all the D3D11_INPUT_ELEMENT_DESC structures, for the game to use at runtime.

I was a bit worried about the optimizer, so the dummy code casts every input attribute to a float4, adds them all together, and returns the sum as SV_POSITION.
I obviously never use these vertex shaders, except as the validation argument that's required when making an input-layout...

IMHO, that's the only terrible part of D3D input layouts. By doing this, I'm telling D3D "trust me, I'll be careful to only use this with matching vertex shaders", which is 100% allowed... So, I should be able to make that same promise by passing a null pointer for the VS validation argument :(

Laval B    12387

IMHO, that's the only terrible part of D3D input layouts. By doing this, I'm telling D3D "trust me, I'll be careful to only use this with matching vertex shaders", which is 100% allowed... So, I should be able to make that same promise by passing a null pointer for the VS validation argument :(

 

Yes, I agree; this is the kind of validation that the new generation of graphics APIs seems to be doing away with. Other than that, D3D11 is indeed a very neat API.

 

For my part, I generate a dummy VS from the description structures when I create the layout. The VS looks like this:

struct VS_INPUT
{
	float3 a0 : POSITION;
	float2 a1 : TEXCOORDS0;
	float3 a2 : NORMAL;
};

struct VS_OUTPUT
{
	float4 position : SV_Position;
};

VS_OUTPUT main(in VS_INPUT input)
{
	VS_OUTPUT output;
	output.position.xyzw = 0.0f;
	return output;
}
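The generation itself is roughly this (a sketch; HlslTypeFromFormat is a hypothetical helper mapping DXGI formats to HLSL types, and error handling is omitted):

#include <d3d11.h>
#include <d3dcompiler.h>
#include <string>
#include <vector>

// Hypothetical helper: map a DXGI format to the HLSL type used in the dummy VS.
const char* HlslTypeFromFormat(DXGI_FORMAT format);

// Build the dummy VS source from the element descriptions and compile it, so its
// bytecode can be passed to CreateInputLayout for validation.
HRESULT CompileDummyVS(const std::vector<D3D11_INPUT_ELEMENT_DESC>& elements, ID3DBlob** outBlob)
{
    std::string src = "struct VS_INPUT\n{\n";
    for(size_t i = 0; i < elements.size(); ++i)
    {
        src += "\t";
        src += HlslTypeFromFormat(elements[i].Format);
        src += " a" + std::to_string(i) + " : ";
        src += elements[i].SemanticName + std::to_string(elements[i].SemanticIndex);
        src += ";\n";
    }
    src += "};\n"
           "float4 main(in VS_INPUT input) : SV_Position\n"
           "{\n"
           "\treturn float4(0.0f, 0.0f, 0.0f, 0.0f);\n"
           "}\n";

    return D3DCompile(src.c_str(), src.size(), nullptr, nullptr, nullptr,
                      "main", "vs_5_0", 0, 0, outBlob, nullptr);
}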

It seems to be working, but do you think the optimizer could break it?

Edited by Laval B

KaiserJohan    2317

Sorry to hijack the thread, but does binding several input slots while only using some of them in the shader incur any performance penalty?

 

For example: bind Position/Normal/Texcoord vertex slots, but the shader only uses Position/Normal?

Hodgman    51324
On old HW, the GPU might possibly waste a few cycles per vertex fetching those unused attributes (or alternatively, will waste a bunch of CPU cycles per draw-call comparing the IA/VS config and disabling the unused attribs).

On new HW, IA doesn't exist, so the IA config is compiled into VS code and glued onto the front of your vertex shader. Changing input layouts should be treated the same as a shader change on the GPU side (potential pipeline flush if done too frequently).
Whether unused attribs cause harm depends on the driver -- either it can compile each "IA shader" once, which will cause all attribs to always be fetched... OR, it can recompile each "IA shader"+VS pair, resulting in unused attribs being optimized out.

I'm not sure what modern PC drivers do... FWIW, in my current-gen console ports, I use the former option (compile each IA once), and assume the graphics programmer will use the best IA config for their VS (no unused attribs).
