How to cleanly create input layouts


Currently I've got a scheme where I load shaders on another thread and marshal them back to the main thread as an ID3D11[InsertTypeHere]Shader. This presents a problem with vertex shaders: I no longer have the bytecode blobs at that point, but I may still need them to create input layouts when I create the vertices they're going to operate on. This seems like an architectural mess, because shaders are independent of geometry and I want to be able to load my shaders as such. I've come up with a bunch of ideas for how to approach this, but none of them seem great:

1. Keep the blobs around forever so that I can pull them up when I want to couple the geometry to the shaders later. This seems stupid because all I really want is the signature.

2. Create "dummy" interface shaders that I know about at compile-time and can reliably generate input layouts with, then just bucket my real shaders with the dummies so that I know what input layouts to use. I think this would work, and requires me to store far fewer blobs, but it does seem pretty silly.

3. Bind shaders tightly to geometry in data and don't load shaders independently at all, but have the loader recursively figure all that out on loading geometry and generate the appropriate layouts all at the same time. I'd rather not do this because of the loss of flexibility.

4. Just use a couple standardized vertex input formats for all of my shaders and don't worry about figuring things out dynamically. I will probably have this anyway, but relying on this knowledge seems more like a hack than anything else.

5. Use shader reflection. I know this is a thing and I know that you can get the signature, but I don't know if the signature format is compatible with CreateInputLayout() and I don't know the performance implications of this.

Am I just trying to build this all wrong and should just give up on trying to load shaders independently of vertices? Is there an obvious solution staring me in the face that I've somehow missed?


You want a joke? The input layout does not exist; it is a lie :) This used to be a fixed-function pipeline operation, but nowadays shaders just read memory.

There is the D3D11 way: the driver's compiled shader jumps to a small fetch shader to resolve the binding (which spares us incessant shader recompilation for every input-layout flavor).

There is the D3D12 way: the input layout is part of the pipeline state object, so compilation can pre-resolve everything; it's a small perf improvement over the D3D11 fetch shaders.
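For concreteness, a sketch of the D3D12 side, showing only the layout-related PSO fields (everything else, including the bytecode you pass in, is elided or assumed):

	// Sketch: in D3D12 the input layout is one field of the pipeline state
	// object, so the driver can bake vertex fetching into the compiled PSO.
	#include <d3d12.h>

	void DescribePSO(D3D12_GRAPHICS_PIPELINE_STATE_DESC& psoDesc,
	                 const void* vsBytecode, SIZE_T vsSize,
	                 const void* psBytecode, SIZE_T psSize)
	{
		static const D3D12_INPUT_ELEMENT_DESC elems[] =
		{
			{ "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0,
			  D3D12_INPUT_CLASSIFICATION_PER_VERTEX_DATA, 0 },
		};
		psoDesc.InputLayout = { elems, sizeof(elems) / sizeof(elems[0]) };
		psoDesc.VS = { vsBytecode, vsSize };
		psoDesc.PS = { psBytecode, psSize };
		// ...root signature, rasterizer/blend/depth state and RT formats follow,
		// then ID3D12Device::CreateGraphicsPipelineState compiles the whole thing.
	}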

It means you should not consider shaders as separate entities but as a whole, even in D3D11: group them together with the input layout, and if possible with all the various states too (depth/blend/etc.).

As for how you remember or determine what layout you need for a specific group of shaders, you are free to choose: hardcoding, reflection, your own system, ...

I do #2. For each VS input structure, I generate a dummy VS which reads all the attributes and sums them together (to thwart the optimizer from culling any inputs).
Then for each model-data storage structure, I iterate over the compatible VS input structures and pre-create all my input layouts.
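For illustration, a generated dummy VS for a pos/normal/uv input structure might look something like this (a sketch only; the struct and semantic names are just an example):

	// Hypothetical generated dummy VS for a pos/normal/uv input structure.
	// Every attribute is read and summed so the compiler can't strip any
	// input from the signature; only the input signature matters here.
	const char* kDummyVS = R"(
	struct VSInputPosNormalUV
	{
		float3 pos    : POSITION;
		float3 normal : NORMAL;
		float2 uv     : TEXCOORD0;
	};

	float4 main(VSInputPosNormalUV v) : SV_POSITION
	{
		return float4(v.pos + v.normal + float3(v.uv, 0.0), 1.0);
	}
	)";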

When preparing a draw item, I ask the VS for its input structure and the model for its storage structure, then fetch the correct IL.

Yeah, it feels a bit dirty, but it's straightforward if you've already got some kind of custom shader-compiler toolchain in your engine. You should ideally automate as much as possible, such as detecting whether a VS uses a particular VS input structure, etc...
In my engine, I declare VS input structures and model-storage structures via config files (in Lua format), which are then used to generate shader code (HLSL structs for VS's to use as input), to inform the geometry importer how to store models within vertex buffers, and to inform the engine which IL's are required.

And yeah, this is just a silly D3D11 validation hoop to jump through. It's because they didn't create a way for us to specify vertex shader attributes except through HLSL->Bytecode.

You want a joke? The input layout does not exist; it is a lie. This used to be a fixed-function pipeline operation, but nowadays shaders just read memory.

That depends on the hardware. It's true for GCN, but I'm not so sure about the entire ecosystem.

#2 is silly but works fine; you can even do it the other way round and create the dummy VS from the input element desc array.

I do something similar to #3 - VS and input layout are tightly coupled. In theory it seems less flexible; in practice it's not actually a problem for me at all.

#5 seems attractive, but the one thing reflection won't give you is the per-vertex vs. per-instance classification and the instance data step rate. You'd need a way of specifying those manually if you use instancing, so I can't really recommend #5 in any kind of good faith.
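For what it's worth, the reflection approach looks something like this (a minimal sketch: it assumes all inputs are 32-bit floats in a single vertex buffer slot, and, per the caveat above, everything is hardcoded as per-vertex because reflection simply can't tell you otherwise):

	#include <d3d11.h>
	#include <d3dcompiler.h>
	#include <vector>

	HRESULT CreateLayoutFromBytecode(ID3D11Device* device,
	                                 const void* bytecode, SIZE_T bytecodeSize,
	                                 ID3D11InputLayout** outLayout)
	{
		ID3D11ShaderReflection* reflector = nullptr;
		HRESULT hr = D3DReflect(bytecode, bytecodeSize, IID_ID3D11ShaderReflection,
		                        reinterpret_cast<void**>(&reflector));
		if(FAILED(hr)) return hr;

		D3D11_SHADER_DESC shaderDesc;
		reflector->GetDesc(&shaderDesc);

		std::vector<D3D11_INPUT_ELEMENT_DESC> elems;
		for(UINT i = 0; i < shaderDesc.InputParameters; ++i)
		{
			D3D11_SIGNATURE_PARAMETER_DESC param;
			reflector->GetInputParameterDesc(i, &param);

			// System-value inputs (SV_VertexID etc.) are not fed by the IA stage.
			if(param.SystemValueType != D3D_NAME_UNDEFINED) continue;

			// Count components in the write mask; float32 data is assumed here.
			// Real code would also inspect param.ComponentType for int/uint inputs.
			UINT components = 0;
			for(BYTE mask = param.Mask; mask; mask >>= 1) components += mask & 1;
			static const DXGI_FORMAT floatFormats[4] = {
				DXGI_FORMAT_R32_FLOAT, DXGI_FORMAT_R32G32_FLOAT,
				DXGI_FORMAT_R32G32B32_FLOAT, DXGI_FORMAT_R32G32B32A32_FLOAT };

			elems.push_back({ param.SemanticName, param.SemanticIndex,
			                  floatFormats[components - 1], 0,
			                  D3D11_APPEND_ALIGNED_ELEMENT,
			                  D3D11_INPUT_PER_VERTEX_DATA, 0 });
		}

		hr = device->CreateInputLayout(elems.data(), (UINT)elems.size(),
		                               bytecode, bytecodeSize, outLayout);
		reflector->Release();
		return hr;
	}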

Direct3D has need of instancing, but we do not. We have plenty of glVertexAttrib calls.

I allow a shader to be supplied at IA creation time. If one is not supplied, I generate a shader matching the IA and submit it. Option 2, in other words. Here's my simple hackjob of a generator:


if(!shader.IsValid())
{
	// We will now attempt to fabricate a matching shader for the vertex signature.
	// This is kind of awful, but oh well.
	std::string elementsStr;
	for(int i = 0; i < elementCount; ++i)
	{
		const VertexElement& ve = elements[i];
		if(ve.Type == DataFormat::Float32)
			elementsStr += "float";
		else if(ve.Type == DataFormat::R32_Int)
			elementsStr += "int";

		elementsStr += std::to_string(ve.Size);  // e.g. "float" + "3" -> "float3"
		elementsStr += " element";
		elementsStr += std::to_string(i);        // e.g. "element0"
		elementsStr += " : ";
		elementsStr += SemanticNameForIndex(ve.Index);
		elementsStr += ";\n";
	}

	const char* shaderTemplate =
		"struct SimpleVertex\n"
		"{\n"
		"%s"
		"};\n"
		"\n"
		"float4 Vertex(SimpleVertex vi) : SV_POSITION\n"
		"{\n"
		"	return float4(0.0, 0.0, 0.0, 0.0);\n"
		"}\n"
		"\n"
		"float4 Pixel() : SV_TARGET\n"
		"{\n"
		"	return float4(1.0, 1.0, 1.0, 1.0);\n"
		"}\n";

	char shaderBuf[1024];
	snprintf(shaderBuf, sizeof(shaderBuf), shaderTemplate, elementsStr.c_str());
	// ...shaderBuf is then compiled (e.g. via D3DCompile) and the resulting
	// bytecode is used to create the input layout...
}

I allow a shader to be supplied at IA creation time. If one is not supplied, I generate a shader matching the IA and submit it. Option 2, in other words. Here's my simple hackjob of a generator:

Heh, I like the "this is kind of awful but oh well" comment. I feel like I end up writing something like that every time I write hacky string building code.

Though it might be stupid, it seems like this is a common solution to the problem, so I'll probably end up doing something similar. Thanks, everyone.

In the general case, mesh<->IL, mesh<->VS, and VS<->IL are all many:many relationships.

Depending on your needs, you can simplify IL management a lot.
I'd ask things like:
* Will you ever render the same mesh with two different VS's that read different attributes? e.g. a forward-shading VS reads pos/normal/uv, while a Z-only VS reads just pos.
-- A 1:1 mesh<->IL design will require you to duplicate your mesh (forward-shading mesh, Z-only mesh). I've seen this done :)
-- A 1:1 VS<->IL design requires that your game's vertex format is predictable across all meshes (e.g. don't let some meshes store pos,normal,uv while other meshes store pos,uv,normal).

* Will you ever use two different model storage formats with a single VS?
-- A 1:1 VS<->IL design requires that you duplicate these particular VS's so you can treat them as different objects that each have a single IL.

If the answer to both questions is no, then you can simply use 1:1 relationships.
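In the general many:many case, one (hypothetical) way to organize it is to key your pre-created ILs by the (VS input structure, storage structure) pair; the ID types here are just placeholders for however you identify those structures:

	// Hypothetical lookup for the many:many case: input layouts are
	// pre-created and keyed by (VS input structure, storage structure).
	#include <d3d11.h>
	#include <map>
	#include <utility>

	using LayoutKey = std::pair<int, int>; // (vsInputStructID, storageStructID)
	std::map<LayoutKey, ID3D11InputLayout*> g_inputLayouts;

	ID3D11InputLayout* FindInputLayout(int vsInputStructID, int storageStructID)
	{
		auto it = g_inputLayouts.find({ vsInputStructID, storageStructID });
		return it != g_inputLayouts.end() ? it->second : nullptr;
	}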

Another option is to simply not use the IA stage at all and implement vertex fetching with your own shader code. This is totally feasible on D3D11 and has zero perf penalty on a lot of GPUs (on some you might actually see a perf gain). It means binding your vertex buffers as SRVs, just as you do for textures.
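A minimal sketch of that approach, with assumed names throughout (the Vertex layout, register choice, etc. are illustrative): the VS indexes a structured buffer with SV_VertexID, so no input layout is needed and binding a null IL is fine.

	const char* kManualFetchVS = R"(
	struct Vertex { float3 pos; float2 uv; };
	StructuredBuffer<Vertex> g_vertices : register(t0);

	float4 main(uint vertexID : SV_VertexID) : SV_POSITION
	{
		Vertex v = g_vertices[vertexID];
		return float4(v.pos, 1.0);
	}
	)";

	// C++ side: the "vertex buffer" becomes a structured buffer bound as an SRV.
	#include <d3d11.h>

	struct Vertex { float pos[3]; float uv[2]; };

	HRESULT CreateVertexSRVBuffer(ID3D11Device* device, const Vertex* verts,
	                              UINT vertexCount, ID3D11Buffer** outBuf,
	                              ID3D11ShaderResourceView** outSRV)
	{
		D3D11_BUFFER_DESC bd = {};
		bd.ByteWidth           = vertexCount * (UINT)sizeof(Vertex);
		bd.Usage               = D3D11_USAGE_IMMUTABLE;
		bd.BindFlags           = D3D11_BIND_SHADER_RESOURCE;
		bd.MiscFlags           = D3D11_RESOURCE_MISC_BUFFER_STRUCTURED;
		bd.StructureByteStride = sizeof(Vertex);

		D3D11_SUBRESOURCE_DATA init = { verts, 0, 0 };
		HRESULT hr = device->CreateBuffer(&bd, &init, outBuf);
		if(FAILED(hr)) return hr;

		// A null desc gives a view of the whole structured buffer.
		return device->CreateShaderResourceView(*outBuf, nullptr, outSRV);
	}
	// At draw time: VSSetShaderResources(0, 1, &srv), IASetInputLayout(nullptr),
	// then Draw(vertexCount, 0).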

