
Input assembly vs buffers.


Hi,

 

I am wondering about something: is there a performance advantage to using the input assembler versus just fetching from buffers (position, texcoords, etc.) in the vertex shader using the vertexId/instanceId?

 

I get that the index buffer is required to use the post-transform cache, but the input assembler seems like a legacy feature supported because most engines already use it.

 

Cheers,

 

Shountz


There was a discussion on GameDev not too long ago about this very thing; I'm not sure where it went. I think it was Hodgman who had a pretty detailed post about different hardware. The gist of it is that it only matters on some hardware: if I remember correctly, on AMD/NVidia it (for the most part) doesn't matter, but on Intel and some mobile chips using the IA had a definite advantage.



seems like a legacy feature


It's still sort of useful even if the perf is the same. You can use the same vertex shader on multiple input layouts as long as the semantics match. If IA didn't exist, you'd need to have your own process of creating some kind of compiled preamble to your vertex shader to decode vertex data.

E.g. the same vertex shader can be used on 32 bit float positions or 16 bit float positions in an input layout, but you'd need two different shaders (or some flexibility) if you didn't use an input layout. 


It's still sort of useful even if the perf is the same. You can use the same vertex shader on multiple input layouts as long as the semantics match. If IA didn't exist, you'd need to have your own process of creating some kind of compiled preamble to your vertex shader to decode vertex data.


E.g. the same vertex shader can be used on 32 bit float positions or 16 bit float positions in an input layout, but you'd need two different shaders (or some flexibility) if you didn't use an input layout. 

 

I don't think that works in D3D11. When you create an input layout you have to give it a shader blob, and if the input signature in your shader is not exactly the same as the example shader's, it will not work correctly.


 

It's still sort of useful even if the perf is the same. You can use the same vertex shader on multiple input layouts as long as the semantics match. If IA didn't exist, you'd need to have your own process of creating some kind of compiled preamble to your vertex shader to decode vertex data.


E.g. the same vertex shader can be used on 32 bit float positions or 16 bit float positions in an input layout, but you'd need two different shaders (or some flexibility) if you didn't use an input layout. 

 

I don't think that works in D3D11. When you create an input layout you have to give it a shader blob, and if the input in your shader is not exactly the same as the example shader it will not work correctly.

 

You technically need to do that in DX12 too. My point is that the shader blob is actually the same every time you create a new input layout bound to the same vertex shader. If your vertex shader uses SV_VertexID to fetch things from buffers, the shader needs to know which buffer is bound to which slot, what the stride of the data is, the format of the data, etc. All of this information is in the input layout, which likely creates a small program that decodes the vertex data accordingly.

 

You could create a system where, say, position data is always bound to slot 0 and is always 3-component floats, and then your vertex shaders would work with every mesh you use because the engine enforces some sort of consistency.

Edited by Dingleberry


But it has to know the format and stride of the data even with an input layout, so the buffer slot is pretty much the only difference?


I was responding to 


seems like a legacy feature

 

noting that input layouts still have uses aside from performance, which is likely negligible on desktop GPUs. The input layout glues together a vertex shader and vertex data, allowing them to be decoupled. If you tightly couple them, you don't need an input layout.


It's still sort of useful even if the perf is the same. You can use the same vertex shader on multiple input layouts as long as the semantics match. If IA didn't exist, you'd need to have your own process of creating some kind of compiled preamble to your vertex shader to decode vertex data.

E.g. the same vertex shader can be used on 32 bit float positions or 16 bit float positions in an input layout, but you'd need two different shaders (or some flexibility) if you didn't use an input layout.

 
I don't think that works in D3D11. When you create an input layout you have to give it a shader blob, and if the input in your shader is not exactly the same as the example shader it will not work correctly.

Sure it does. The IA Layout maps from the data structure in your buffers to the input structure of your vertex shader. You can have a lot of different IA Layout objects that each map a different buffer structure to the same vertex structure, and then they're all usable with any vertex shader that matches that structure.
