[D3D12] Handle different vertex input layouts


Hi everybody!
It's common, for example, for meshes to have vertex colors or not, or to have different UV sets.
When creating the graphics pipeline you need to set the input layout, and the vertex shader has to match it.
What is the best way to handle all the different input layouts?
Thanks!


Every vertex buffer has its own vertex layout, so specifying different ones shouldn't be more complicated than specifying identical ones. What do you find difficult to manage?
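For reference, a minimal sketch of what that looks like in D3D12 (assuming the rest of the pipeline description is already filled in): the input layout is just an array of D3D12_INPUT_ELEMENT_DESC assigned to the PSO description, so supporting another vertex format mostly means filling in another array.

```cpp
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Build a pipeline state for a given vertex layout. 'desc' is assumed to already
// contain the root signature, shaders and render states; only the input layout
// changes between vertex formats.
ComPtr<ID3D12PipelineState> CreatePipelineForLayout(
    ID3D12Device* device,
    D3D12_GRAPHICS_PIPELINE_STATE_DESC desc,          // taken by value so we can patch it
    const D3D12_INPUT_ELEMENT_DESC* elements,
    UINT elementCount)
{
    desc.InputLayout = { elements, elementCount };
    ComPtr<ID3D12PipelineState> pso;
    device->CreateGraphicsPipelineState(&desc, IID_PPV_ARGS(&pso));   // error handling omitted
    return pso;
}
```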

Omae Wa Mou Shindeiru

Let's say you have a material shader whose vertex shader uses: pos, normal, uv.
But you also want to use this material shader with buffers that carry vertex colors: pos, normal, uv, color.
Since the vertex shader is defined without color, the input layout is defined without color, and the graphics pipeline uses that input layout, the setup isn't generic.
The question is whether it's possible to structure things differently so that material shaders are generic over any input layout.
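For concreteness, the two formats from this example written as D3D12 input layouts (a sketch, assuming interleaved data with the color appended last): the second buffer needs an extra COLOR element and a larger stride, and the PSO for the color-less shader was built against the first layout only.

```cpp
#include <d3d12.h>

// pos, normal, uv  -> 32-byte stride
static const D3D12_INPUT_ELEMENT_DESC layoutPosNormalUv[] = {
    { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0,  0, D3D12_INPUT_CLASSIFICATION_PER_VERTEX_DATA, 0 },
    { "NORMAL",   0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 12, D3D12_INPUT_CLASSIFICATION_PER_VERTEX_DATA, 0 },
    { "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT,    0, 24, D3D12_INPUT_CLASSIFICATION_PER_VERTEX_DATA, 0 },
};

// pos, normal, uv, color  -> 36-byte stride
static const D3D12_INPUT_ELEMENT_DESC layoutPosNormalUvColor[] = {
    { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0,  0, D3D12_INPUT_CLASSIFICATION_PER_VERTEX_DATA, 0 },
    { "NORMAL",   0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 12, D3D12_INPUT_CLASSIFICATION_PER_VERTEX_DATA, 0 },
    { "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT,    0, 24, D3D12_INPUT_CLASSIFICATION_PER_VERTEX_DATA, 0 },
    { "COLOR",    0, DXGI_FORMAT_R8G8B8A8_UNORM,  0, 32, D3D12_INPUT_CLASSIFICATION_PER_VERTEX_DATA, 0 },
};
```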

Alundra said:
The question is whether it's possible to structure things differently so that material shaders are generic over any input layout.

Sure. I can quickly describe how I handle this scenario; maybe it can give you some ideas:

First, I have a declaration of all the attributes that a certain type of object can support. I call it a “geometry declaration”. You specify everything that can be available - position, color, normal, tangent, binormal, uv1, uv2, you name it.
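A minimal sketch of how such a declaration could look (the bitmask representation here is just one possible choice, assumed for illustration):

```cpp
#include <cstdint>

// Every attribute a renderable type can potentially carry.
enum GeometryAttribute : uint32_t
{
    GEO_POSITION = 1u << 0,
    GEO_NORMAL   = 1u << 1,
    GEO_TANGENT  = 1u << 2,
    GEO_BINORMAL = 1u << 3,
    GEO_COLOR    = 1u << 4,
    GEO_UV0      = 1u << 5,
    GEO_UV1      = 1u << 6,
};

struct GeometryDeclaration
{
    uint32_t supportedAttributes; // everything meshes of this type *can* provide
};

// Example: a "mesh" declaration that supports all of the above.
constexpr GeometryDeclaration meshDeclaration{
    GEO_POSITION | GEO_NORMAL | GEO_TANGENT | GEO_BINORMAL | GEO_COLOR | GEO_UV0 | GEO_UV1 };
```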

The shader then needs to link against such a “geometry declaration”; in my case this is generalized further via a so-called “profile”. A profile describes the type of renderable a shader is used for - you could have “sprite”, “postprocess”, “mesh”, … Since I primarily have a visual shader graph, this is similar to what Unreal allows you to set. So for your example, we'd say that our shader uses the “mesh” profile (with the attributes mentioned before).
The shader will then request attributes based on what it does. If you have a normal map, it will request normals, binormals and tangents. If you have no normal map but still do lighting, it will only request normals. Unlit? No normals needed. This can also differ per pass: a shadow/z-pre-pass will only request the position.
Then, when we have an actual mesh with one of these shaders, we “compile” a Renderable. Upon compilation, we look at all the available attributes and all the requested ones, and create a matching input layout. That means certain attributes in the buffers can simply be ignored; or the data can be split into different buffers (for example, position data separated from everything else, to make rendering the aforementioned position-only passes faster, as those then only need to physically access the position data).
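A rough sketch of that compilation step (my own, assuming interleaved attributes in a single buffer and the bitmask declaration from the sketch above): intersect the attributes the mesh provides with the ones the shader requests, and keep the full stride so that ignored attributes are simply skipped.

```cpp
#include <cstdint>
#include <vector>
#include <d3d12.h>

// Per-attribute metadata: declaration bit, HLSL semantic, format and byte size.
struct AttributeInfo { uint32_t bit; const char* semantic; UINT index; DXGI_FORMAT format; UINT size; };

// The order here defines the order attributes are stored in the vertex buffer.
static const AttributeInfo kAttributes[] = {
    { 1u << 0, "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 12 },
    { 1u << 1, "NORMAL",   0, DXGI_FORMAT_R32G32B32_FLOAT, 12 },
    { 1u << 4, "COLOR",    0, DXGI_FORMAT_R8G8B8A8_UNORM,   4 },
    { 1u << 5, "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT,     8 },
};

// Build the input elements for (available & requested). 'stride' receives the full
// vertex size, so the vertex buffer view still steps over ignored attributes.
std::vector<D3D12_INPUT_ELEMENT_DESC> BuildInputLayout(uint32_t available, uint32_t requested, UINT& stride)
{
    std::vector<D3D12_INPUT_ELEMENT_DESC> elements;
    UINT offset = 0;
    for (const AttributeInfo& a : kAttributes)
    {
        if (!(available & a.bit))
            continue;                       // the mesh does not store this attribute
        if (requested & a.bit)              // the shader actually reads it
        {
            elements.push_back({ a.semantic, a.index, a.format, 0, offset,
                                 D3D12_INPUT_CLASSIFICATION_PER_VERTEX_DATA, 0 });
        }
        offset += a.size;                   // stored but unread data is just skipped
    }
    stride = offset;
    return elements;
}
```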

I'm sure there are many different ways, but the general idea should be widely applicable. My own system works well for the visual shader graph, where some programmable evaluation has to happen anyway. If you use plain text-based shaders, you can make the matching happen based on either explicit flags or by looking at the shader's input signature. My own system is so far only implemented for DX11 and GL4, but the principle should work in DX12 as well (since, as far as I understand, the process of compiling a sort of “renderable” is mandatory there anyway).
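If you go the input-signature route, a sketch of how D3D12 shader reflection can enumerate the semantics the vertex shader actually consumes (assuming the compiled bytecode is at hand):

```cpp
#include <d3d12shader.h>
#include <d3dcompiler.h>
#include <wrl/client.h>
#include <string>
#include <vector>
#pragma comment(lib, "d3dcompiler.lib")
using Microsoft::WRL::ComPtr;

// Returns the semantic names (e.g. "POSITION", "TEXCOORD") read by the vertex shader.
std::vector<std::string> QueryInputSemantics(const void* bytecode, size_t size)
{
    std::vector<std::string> semantics;
    ComPtr<ID3D12ShaderReflection> reflection;
    if (FAILED(D3DReflect(bytecode, size, IID_PPV_ARGS(&reflection))))
        return semantics;

    D3D12_SHADER_DESC shaderDesc = {};
    reflection->GetDesc(&shaderDesc);
    for (UINT i = 0; i < shaderDesc.InputParameters; ++i)
    {
        D3D12_SIGNATURE_PARAMETER_DESC param = {};
        reflection->GetInputParameterDesc(i, &param);
        if (param.SystemValueType == D3D_NAME_UNDEFINED)   // skip SV_* system values
            semantics.push_back(param.SemanticName);
    }
    return semantics;
}
```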

