Managing input layouts

11 comments, last by Hodgman 8 years, 6 months ago

Hello everyone

When loading a vertex shader, I'm using the reflection API to build the corresponding input layout. In order not to create unnecessary layouts, I want to cache them in an associative container, which in turn requires a key. What would be a good way to build a key that matches a layout, using only the data used to create the layout?

We think in generalities, but we live in details.
- Alfred North Whitehead

You could literally just build a key from the layout data by packing the attribute declarations bitwise into an integer, like so:


unsigned long long getAttributeKey(const VertexAttribute& attrib) // get a key for a single input attribute
{
    return attrib.semantic + (attrib.slot << 1) + (attrib.type << 5) + (attrib.numElements << 7) + (attrib.buffer << 10) + (attrib.instanceDivisor << 11);
}

sys::Int128 getKey(PrimitiveType type, const IGeometry::AttributeVector& vAttributes) // build combined key for primitive type & attributes
{
    sys::Int128 key;
    key.Add(0, 7, (long long)type);

    unsigned int numAttribute = 0;
    for(auto& attribute : vAttributes)
    {
        const unsigned long long attributeKey = getAttributeKey(attribute);

        key.Add(7 + numAttribute * 12, 12, attributeKey);
        numAttribute++;
    }

    return key;
}

Don't get weirded out by the syntax; I'm using a custom 128-bit integer class (key.Add(0, 7, value) just writes 7 bits of value at bit position 0). Depending on which attribute properties you want to handle (instancing isn't something you can reflect anyway) and how many attributes your vertex shaders can have, you can either just use a built-in 64-bit integer type, or you will need a larger data structure.
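
For reference, a minimal sketch of what such a wide-key helper could look like (purely an illustration of the idea, not the actual class used above; all names here are made up):

#include <cstdint>

// Minimal 128-bit key: two 64-bit halves plus a helper that writes the low
// 'numBits' bits of 'value' starting at bit 'position'.
struct Key128
{
    std::uint64_t lo = 0;
    std::uint64_t hi = 0;

    void Add(unsigned position, unsigned numBits, std::uint64_t value)
    {
        const std::uint64_t mask = (numBits >= 64) ? ~0ull : ((1ull << numBits) - 1);
        value &= mask; // clamp to the requested width

        if(position < 64)
        {
            lo |= value << position;
            if(position > 0 && position + numBits > 64) // the value straddles the 64-bit boundary
                hi |= value >> (64 - position);
        }
        else
        {
            hi |= value << (position - 64);
        }
    }

    bool operator<(const Key128& other) const // so it can be used as a map key
    {
        return hi != other.hi ? hi < other.hi : lo < other.lo;
    }
};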

EDIT: Forgot to mention this explicitly, so just so it's clear... you first need to bring the reflected values into a usable integer range for this to work (I've spent too much time with my abstraction layers, so I didn't think of it at first). What that means is that you map the reflected values to a custom enum. Take the semantics of the attribute (SV_POSITION, TEXCOORD, etc.): you map the strings to a custom enum value, and depending on how many different semantics you want to support, you can see how many bits you'll need in your key.

unsigned long long getAttributeKey(const VertexAttribute& attrib) // get a key for a single input attribute
{
    return attrib.semantic + (attrib.slot << 1) + (attrib.type << 5) + (attrib.numElements << 7) + (attrib.buffer << 10) + (attrib.instanceDivisor << 11);
}

In the D3D11_INPUT_ELEMENT_DESC structure, "SemanticName" is a pointer (LPCSTR), which would make the result non-deterministic. The idea of an integer key is appealing though.
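
An alternative, if you'd rather not enumerate every semantic up front, is to hash the characters of the semantic name rather than the pointer value, so the result stays deterministic. A minimal sketch, using FNV-1a purely as an example hash:

#include <cstdint>

// Hash the *contents* of the semantic name (FNV-1a), not the LPCSTR pointer,
// so equal semantic strings always contribute the same bits to the key.
std::uint32_t hashSemanticName(const char* name)
{
    std::uint32_t h = 2166136261u;
    for(; *name; ++name)
    {
        h ^= static_cast<unsigned char>(*name);
        h *= 16777619u;
    }
    return h;
}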

Sorry, I didn't see your edit ...

Would you mind explaining what Int128::Add does? I'm not sure I understand.

We think in generalities, but we live in details.
- Alfred North Whitehead

In the D3D11_INPUT_ELEMENT_DESC structure, "SemanticName" is a pointer (LPCSTR), which would make the result non-deterministic. The idea of an integer key is appealing though.

Sorry, I just figured I should mention that and edited it in. See the explanation above; you would do something like this:


#include <cstring> // strcmp

enum class AttributeSemantics
{
    POSITION = 0, TEXCOORD
};

AttributeSemantics convertSemantic(LPCSTR semantic)
{
    if(!strcmp(semantic, "SV_POSITION"))
        return AttributeSemantics::POSITION;
    else if(!strcmp(semantic, "TEXCOORD"))
        return AttributeSemantics::TEXCOORD;

    // and so on; return a default for unknown semantics so every path returns a value
    return AttributeSemantics::POSITION;
}

This gives you the value of the bit section for "semantic"; now you do the same for all the other attribute properties that matter for the key, put them together, and you are done.
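
As a rough illustration of where those values would come from when reflecting (the VertexAttribute fields, the handling of Mask, and the ignored SemanticIndex are all simplifying assumptions here):

#include <d3d11shader.h>

enum class AttributeSemantics { POSITION = 0, TEXCOORD /* ... */ };
AttributeSemantics convertSemantic(LPCSTR semantic); // the string-to-enum mapping above

struct VertexAttribute
{
    unsigned semantic;        // custom enum value
    unsigned slot;            // input register / slot
    unsigned type;            // component type (float, int, ...)
    unsigned numElements;     // 1..4 components
    unsigned buffer;          // which vertex buffer the data is read from
    unsigned instanceDivisor; // 0 for per-vertex data
};

VertexAttribute reflectAttribute(const D3D11_SIGNATURE_PARAMETER_DESC& desc)
{
    VertexAttribute attrib = {};
    attrib.semantic = static_cast<unsigned>(convertSemantic(desc.SemanticName));
    attrib.slot     = desc.Register;
    attrib.type     = desc.ComponentType; // D3D_REGISTER_COMPONENT_TYPE

    // Mask is a bitfield of used components (x=1, y=2, z=4, w=8); count the set bits.
    unsigned mask = desc.Mask, count = 0;
    while(mask) { count += mask & 1u; mask >>= 1; }
    attrib.numElements = count;

    // Buffer assignment and instancing can't be reflected from the shader;
    // they depend on how you feed the data, so default them here.
    attrib.buffer = 0;
    attrib.instanceDivisor = 0;
    return attrib;
}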


Would you mind explaining what Int128::Add does? I'm not sure I understand.

All it does is pretty much a bit shift, like in the first function. The only reason I need it is, as I said, that I need more than 64 bits, for which I have a custom class. Int128::Add is equivalent to:


unsigned long long key = 0;
key += attributeKey << (7 + numAttribute * 12);

for a 64-bit integer variable. Whether this is enough depends on your needs: with 7 bits for the primitive type and 12 bits per attribute, a 64-bit key holds at most four input attributes. If you need more, you need a larger integer class (or, if you have to or want to support an arbitrary number of attributes, you might need a variable-sized integer class; or somebody else may have an even better idea for this case).

EDIT: Alternatively, in the case of an arbitrary number of input attributes, you can always just store a custom struct with a vector of attribute keys for each layout, with a custom compare operator, if you know what I mean.


struct LayoutKey
{
    std::vector<unsigned int> vAttributesKeys; // 32 bits is more than enough for each attribute

    bool operator<(const LayoutKey& key) const // so the map can actually sort those
    {
        return vAttributesKeys < key.vAttributesKeys; // lexicographic comparison
    }
};
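
For what it's worth, a rough usage sketch (the surrounding names are made up): the struct above can then key an associative cache of created layouts.

#include <d3d11.h>
#include <map>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

// Cache of created layouts, keyed by the LayoutKey struct defined above.
std::map<LayoutKey, ComPtr<ID3D11InputLayout>> g_layoutCache;

ComPtr<ID3D11InputLayout> getOrCreateLayout(const LayoutKey& key /*, data needed for creation */)
{
    auto it = g_layoutCache.find(key);
    if(it != g_layoutCache.end())
        return it->second; // an identical layout was already built

    ComPtr<ID3D11InputLayout> layout;
    // ... device->CreateInputLayout(...) using the same data the key was built from ...
    g_layoutCache.emplace(key, layout);
    return layout;
}
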
InputLayout is easily the worst part of D3D11.

A typical approach is to hard-code the vertex formats of your app to a small selection. This maps well to D3D9, D3D11, D3D12, OpenGL, Vulkan, etc.

With this approach in D3D11 you can make dummy shaders, compile them, copy their bytecode into a header or the like, and then use the appropriate selection from those hard-coded input layouts when compiling a material's shaders.
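
For illustration, such a hard-coded selection might look roughly like this (the specific formats and names are just assumptions for the sketch, not a recommendation):

#include <d3d11.h>

// The whole app agrees on a small, fixed set of vertex formats...
enum class VertexFormat { PositionUV, PositionNormalUV, Count };

// ...each with a hard-coded element description. At startup, every table is
// paired with its pre-compiled dummy shader bytecode and turned into an
// ID3D11InputLayout; materials then simply pick one by VertexFormat.
static const D3D11_INPUT_ELEMENT_DESC kPositionUV[] =
{
    { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0,                            D3D11_INPUT_PER_VERTEX_DATA, 0 },
    { "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT,    0, D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_VERTEX_DATA, 0 },
};

static const D3D11_INPUT_ELEMENT_DESC kPositionNormalUV[] =
{
    { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0,                            D3D11_INPUT_PER_VERTEX_DATA, 0 },
    { "NORMAL",   0, DXGI_FORMAT_R32G32B32_FLOAT, 0, D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_VERTEX_DATA, 0 },
    { "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT,    0, D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_VERTEX_DATA, 0 },
};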

Sean Middleditch – Game Systems Engineer – Join my team!

I have a config file where I describe vertex structures (the inputs to vertex shaders) and stream formats (the format in which vertex attributes are stored in memory).

In the same config file, I then have a list of which vertex structures will be used with which stream formats.

I can then use that config file to build D3D11 input layouts, D3D9 vertex declarations, GL VAO configs, etc...

Instead of using reflection on my vertex shaders, I just force them to declare which vertex-format from the config file they're using. I actually use a tool to convert the config file into a HLSL header file that contains these vertex structures.

When importing a model, I can see which shaders it's using, which vertex formats those shaders use, and then I can generate a list of possible stream-formats that will be compatible with those shaders. The model importer can then pick the most suitable stream-format from that list, convert the Collada/etc data to that stream format, and then record the names of the appropriate InputLayouts to use at runtime.
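
Not Hodgman's actual system, of course, but to sketch the general shape: per-attribute records loaded from such a config file can be translated into the API-specific structures at startup, e.g. for D3D11:

#include <d3d11.h>
#include <string>
#include <vector>

// One record per attribute, as it might be parsed from the config file.
struct AttributeRecord
{
    std::string semantic;      // "POSITION", "TEXCOORD", ...
    unsigned    semanticIndex; // e.g. TEXCOORD0 vs TEXCOORD1
    DXGI_FORMAT format;        // storage format in the stream
    unsigned    stream;        // which vertex-buffer slot the data lives in
};

// Translate config records into D3D11 element descriptions.
// Note: SemanticName points into the record strings, so 'attributes' must
// stay alive while the returned descriptions are in use.
std::vector<D3D11_INPUT_ELEMENT_DESC> toD3D11(const std::vector<AttributeRecord>& attributes)
{
    std::vector<D3D11_INPUT_ELEMENT_DESC> descs;
    descs.reserve(attributes.size());
    for(const AttributeRecord& a : attributes)
    {
        D3D11_INPUT_ELEMENT_DESC d = {};
        d.SemanticName         = a.semantic.c_str();
        d.SemanticIndex        = a.semanticIndex;
        d.Format               = a.format;
        d.InputSlot            = a.stream;
        d.AlignedByteOffset    = D3D11_APPEND_ALIGNED_ELEMENT;
        d.InputSlotClass       = D3D11_INPUT_PER_VERTEX_DATA;
        d.InstanceDataStepRate = 0;
        descs.push_back(d);
    }
    return descs;
}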

InputLayout is easily the worst part of D3D11.

Yes, I agree.

A typical approach is to hard-code the vertex formats ...

That's what I was doing before, but a more data-driven solution is needed.

I have a config file where I describe vertex structures (the inputs to vertex shaders) and stream formats (the format in which vertex attributes are stored in memory).

In the same config file, I then have a list of which vertex structures will be used with which stream formats.

I can then use that config file to build D3D11 input layouts, D3D9 vertex declarations, GL VAO configs, etc...

Instead of using reflection on my vertex shaders, I just force them to declare which vertex-format from the config file they're using. I actually use a tool to convert the config file into a HLSL header file that contains these vertex structures.

When importing a model, I can see which shaders it's using, which vertex formats those shaders use, and then I can generate a list of possible stream-formats that will be compatible with those shaders. The model importer can then pick the most suitable stream-format from that list, convert the Collada/etc data to that stream format, and then record the names of the appropriate InputLayouts to use at runtime.

Yes, that's a very good and simple idea. That way it will remain flexible and data-driven; no need to modify and recompile the engine every time a vertex format is added or modified.

Thank you all for the answers and the great ideas.
We think in generalities, but we live in details.
- Alfred North Whitehead

I have a config file where I describe vertex structures (the inputs to vertex shaders) and stream formats (the format in which vertex attributes are stored in memory).

In the same config file, I then have a list of which vertex structures will be used with which stream formats.

I can then use that config file to build D3D11 input layouts, D3D9 vertex declarations, GL VAO configs, etc...

Just out of curiosity, do you have pre-generated, pre-compiled shaders used only to create the input layouts, or do you just compose a dummy vertex shader and compile it with D3DCompile when you create the layout?

We think in generalities, but we live in details.
- Alfred North Whitehead
Yeah, at the same time that I generate that header file, I also create an HLSL file containing a dummy vertex shader function for each type of vertex input structure. I then compile all of these and package them up into a binary file, along with all the D3D11_INPUT_ELEMENT_DESC structures, for the game to use at runtime.

I was a bit worried about the optimizer, so the dummy code casts every input attribute to a float4, adds them all together, and returns the sum as SV_POSITION.
I obviously never use these vertex shaders, except as the validation argument that's required when making an input-layout...
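
In code, the dummy blob only ever feeds that validation argument, roughly like this (a minimal sketch; names are illustrative):

#include <d3d11.h>

// Create an input layout; the pre-compiled dummy vertex shader blob is used
// purely to satisfy CreateInputLayout's signature-validation argument.
HRESULT createLayoutWithDummyVS(ID3D11Device* device,
                                const D3D11_INPUT_ELEMENT_DESC* elements, UINT numElements,
                                ID3DBlob* dummyVSBytecode,
                                ID3D11InputLayout** outLayout)
{
    return device->CreateInputLayout(elements, numElements,
                                     dummyVSBytecode->GetBufferPointer(),
                                     dummyVSBytecode->GetBufferSize(),
                                     outLayout);
}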

IMHO, that's the only terrible part of D3D input layouts. By doing this, I'm telling D3D "trust me, I'll be careful to only use this with matching vertex shaders", which is 100% allowed... So, I should be able to make that same promise by passing a null pointer for the VS validation argument :(

IMHO, that's the only terrible part of D3D input layouts. By doing this, I'm telling D3D "trust me, I'll be careful to only use this with matching vertex shaders", which is 100% allowed... So, I should be able to make that same promise by passing a null pointer for the VS validation argument :(

Yes, I agree; this is the kind of validation that the new generation of graphics APIs seems to have done away with. Other than that, D3D11 is indeed a very neat API.

For my part, I generate a dummy VS from the description structures when I create the layout. The VS would look like this:


struct VS_INPUT
{
	float3 a0 : POSITION;
	float2 a1 : TEXCOORDS0;
	float3 a2 : NORMAL;
};

struct VS_OUTPUT
{
	float4 position : SV_Position;
};

VS_OUTPUT main(in VS_INPUT input)
{
	VS_OUTPUT output;
	output.position.xyzw = 0.0f;
	return output;
}
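
Creating the layout from that generated source would then look roughly like this (a sketch; error handling is omitted, and D3DCompile needs d3dcompiler.lib):

#include <d3d11.h>
#include <d3dcompiler.h>
#include <string>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

// Compile the generated dummy VS and use its bytecode only for layout validation.
HRESULT createLayoutFromGeneratedVS(ID3D11Device* device,
                                    const std::string& dummySource,
                                    const D3D11_INPUT_ELEMENT_DESC* elements, UINT numElements,
                                    ID3D11InputLayout** outLayout)
{
    ComPtr<ID3DBlob> bytecode, errors;
    HRESULT hr = D3DCompile(dummySource.c_str(), dummySource.size(),
                            nullptr, nullptr, nullptr,
                            "main", "vs_5_0", 0, 0, &bytecode, &errors);
    if(FAILED(hr))
        return hr;

    return device->CreateInputLayout(elements, numElements,
                                     bytecode->GetBufferPointer(), bytecode->GetBufferSize(),
                                     outLayout);
}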

It seems to be working, but do you think the optimizer could break it?

We think in generalities, but we live in details.
- Alfred North Whitehead

This topic is closed to new replies.
