
DX11 Shader reflection cbuffer register?


Hello,

 

after finally pushing myself to get started with DX11 (switching over from DX9 isn't as hard as I thought), I've come across the shader reflection interface. It has already taken a lot of tedious work off my hands (figuring out/hardcoding the input layout and the size of cbuffers), but there is one feature I'm not sure whether it is missing or I've just been overlooking: is it possible to figure out the b-register of a cbuffer? I'm using this code to get the cbuffer description:

void Effect::CreateCBufferDummies(ID3D11ShaderReflection& reflection)
{
	// Get shader info
	D3D11_SHADER_DESC shaderDesc;
	reflection.GetDesc( &shaderDesc );

	for(UINT i = 0; i < shaderDesc.ConstantBuffers; i++)
	{
		// Note: i is the reflection order of the cbuffer, not its b-register
		ID3D11ShaderReflectionConstantBuffer* pConstantReflection = reflection.GetConstantBufferByIndex(i);

		D3D11_SHADER_BUFFER_DESC desc;
		pConstantReflection->GetDesc(&desc);

		m_cBuffer[i] = new d3d::ConstantBuffer(*m_pDevice, desc.Size);
	}
}

However, D3D11_SHADER_BUFFER_DESC doesn't seem to have a register member or anything similar, which means that with this code in my shader

cbuffer instance : register(b1)
{
	matrix cWorld;
}

cbuffer stage : register(b4)
{
	matrix cViewProj;
}

those cbuffers would be written to the first two "slots" in my effect class. What I intend, though, is to have a fixed set of possible cbuffers (effect, material, instance, pass, stage), each always at the same position, determined by its register. Is there any such possibility with shader reflection?



I don't recall it off the top of my head, but I wrote up a short article on reflection a while back (2009!): http://members.gamedev.net/JasonZ/Heiroglyph/D3D11ShaderReflection.pdf

 

You can also check out the Hieroglyph 3 source code, as I check the register value in some way in my reflection code - all of my shader resource information is loaded at startup, so the register info has to be there somewhere. Start out with the RendererDX11::LoadShader method and work backward from there to see how the cbuffers are indexed.


You need to use GetResourceBindingDesc (or GetResourceBindingDescByName) to get the information about how the cbuffer is bound to the shader. That gives you a D3D11_SHADER_INPUT_BIND_DESC, and that contains a BindPoint, which is the buffer register number.
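
For illustration, here is a minimal sketch of that lookup built on the loop from the first post (the d3d::ConstantBuffer/m_cBuffer names are taken from that post, not from the API; error handling omitted):

	// For each reflected cbuffer, query its binding by name to find the b-register
	for (UINT i = 0; i < shaderDesc.ConstantBuffers; i++)
	{
		ID3D11ShaderReflectionConstantBuffer* pCBuffer = reflection.GetConstantBufferByIndex(i);

		D3D11_SHADER_BUFFER_DESC bufferDesc;
		pCBuffer->GetDesc(&bufferDesc);

		// Query the binding by the cbuffer's name to get its register
		D3D11_SHADER_INPUT_BIND_DESC bindDesc;
		reflection.GetResourceBindingDescByName(bufferDesc.Name, &bindDesc);

		// bindDesc.BindPoint is the register number, e.g. 4 for register(b4)
		m_cBuffer[bindDesc.BindPoint] = new d3d::ConstantBuffer(*m_pDevice, bufferDesc.Size);
	}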


I don't recall it off the top of my head, but I wrote up a short article on reflection a while back (2009!): http://members.gamedev.net/JasonZ/Heiroglyph/D3D11ShaderReflection.pdf

 

Thanks, your article was one of the first - and unfortunately one of the few - I've read on this topic. The other one luckily tackled just what I needed most (automatic input layouts), but there seem to be very few resources apart from that...

 

 

 


You need to use GetResourceBindingDesc (or GetResourceBindingDescByName) to get the information about how the cbuffer is bound to the shader. That gives you a D3D11_SHADER_INPUT_BIND_DESC, and that contains a BindPoint, which is the buffer register number.

 

Thanks, that's it! I was already starting to dig into the Hieroglyph 3 source code, but a direct solution is always faster.

 

While we are at it - as I mentioned, I automated the input layout generation: any chance to mark/read out the input slot and/or instancing data in the shader? Say I were to split my meshes into different buffers for position, normal, and texcoords - can I somehow read that out from the shader reflection too, or do I have to come up with a custom naming convention for this myself?

void Effect::CreateInputLayout(ID3D11ShaderReflection& reflection, ID3DBlob& shaderBlob)
{
	// Get shader info
	D3D11_SHADER_DESC shaderDesc;
	reflection.GetDesc( &shaderDesc );

	// Read input layout description from shader info
	UINT byteOffset = 0;
	std::vector<D3D11_INPUT_ELEMENT_DESC> inputLayoutDesc;
	for ( UINT i = 0; i < shaderDesc.InputParameters; i++ )
	{
		D3D11_SIGNATURE_PARAMETER_DESC paramDesc;
		reflection.GetInputParameterDesc(i, &paramDesc );

		// Fill out the input element desc
		D3D11_INPUT_ELEMENT_DESC elementDesc;
		elementDesc.SemanticName = paramDesc.SemanticName;
		elementDesc.SemanticIndex = paramDesc.SemanticIndex;
		elementDesc.InputSlot = 0; //???
		elementDesc.AlignedByteOffset = byteOffset;
		elementDesc.InputSlotClass = D3D11_INPUT_PER_VERTEX_DATA; //???
		elementDesc.InstanceDataStepRate = 0; //???

		// Determine the DXGI format from the component mask (x, xy, xyz or xyzw)
		if ( paramDesc.Mask == 1 )
		{
			if ( paramDesc.ComponentType == D3D_REGISTER_COMPONENT_UINT32 ) elementDesc.Format = DXGI_FORMAT_R32_UINT;
			else if ( paramDesc.ComponentType == D3D_REGISTER_COMPONENT_SINT32 ) elementDesc.Format = DXGI_FORMAT_R32_SINT;
			else if ( paramDesc.ComponentType == D3D_REGISTER_COMPONENT_FLOAT32 ) elementDesc.Format = DXGI_FORMAT_R32_FLOAT;
			byteOffset += 4;
		}
		else if ( paramDesc.Mask <= 3 )
		{
			if ( paramDesc.ComponentType == D3D_REGISTER_COMPONENT_UINT32 ) elementDesc.Format = DXGI_FORMAT_R32G32_UINT;
			else if ( paramDesc.ComponentType == D3D_REGISTER_COMPONENT_SINT32 ) elementDesc.Format = DXGI_FORMAT_R32G32_SINT;
			else if ( paramDesc.ComponentType == D3D_REGISTER_COMPONENT_FLOAT32 ) elementDesc.Format = DXGI_FORMAT_R32G32_FLOAT;
			byteOffset += 8;
		}
		else if ( paramDesc.Mask <= 7 )
		{
			if ( paramDesc.ComponentType == D3D_REGISTER_COMPONENT_UINT32 ) elementDesc.Format = DXGI_FORMAT_R32G32B32_UINT;
			else if ( paramDesc.ComponentType == D3D_REGISTER_COMPONENT_SINT32 ) elementDesc.Format = DXGI_FORMAT_R32G32B32_SINT;
			else if ( paramDesc.ComponentType == D3D_REGISTER_COMPONENT_FLOAT32 ) elementDesc.Format = DXGI_FORMAT_R32G32B32_FLOAT;
			byteOffset += 12;
		}
		else if ( paramDesc.Mask <= 15 )
		{
			if ( paramDesc.ComponentType == D3D_REGISTER_COMPONENT_UINT32 ) elementDesc.Format = DXGI_FORMAT_R32G32B32A32_UINT;
			else if ( paramDesc.ComponentType == D3D_REGISTER_COMPONENT_SINT32 ) elementDesc.Format = DXGI_FORMAT_R32G32B32A32_SINT;
			else if ( paramDesc.ComponentType == D3D_REGISTER_COMPONENT_FLOAT32 ) elementDesc.Format = DXGI_FORMAT_R32G32B32A32_FLOAT;
			byteOffset += 16;
		}

		// Save the element desc
		inputLayoutDesc.push_back(elementDesc);
	}

	// Try to create the input layout
	m_pLayout = new d3d::InputLayout(*m_pDevice, &inputLayoutDesc[0], inputLayoutDesc.size(), shaderBlob);
}

I'll have another look at Hieroglyph 3 when I'm home, but if someone has another answer in the meantime, I'd be glad too.




While we are at it - as I mentioned, I automated the input layout generation: any chance to mark/read out the input slot and/or instancing data in the shader? Say I were to split my meshes into different buffers for position, normal, and texcoords - can I somehow read that out from the shader reflection too, or do I have to come up with a custom naming convention for this myself?
Unfortunately, no. Since the input assembler puts together all of the vertices before they reach the vertex shader, there isn't any info available there. You can check the vertex elements and look for some of the system values, but those are really only hints, since they don't tell you which other elements are part of instance data or which buffer they came from. Basically, your engine just has to ensure that the input assembler state you bind (i.e. vertex buffers, index buffer, input layout, and topology), when put together with a particular draw call, assembles the exact vertex layout your shader is looking for.
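
For example, a check like this (a sketch reusing paramDesc from the CreateInputLayout loop above) filters those system values out, since the input assembler generates them itself rather than reading them from a buffer:

	// Skip system-value inputs - the IA generates these, so they
	// must not appear in the input layout
	if (paramDesc.SystemValueType == D3D_NAME_VERTEX_ID ||
	    paramDesc.SystemValueType == D3D_NAME_INSTANCE_ID)
	{
		continue;
	}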



Unfortunately, no. Since the input assembler puts together all of the vertices before they reach the vertex shader, there isn't any info available there. You can check the vertex elements and look for some of the system values, but those are really only hints, since they don't tell you which other elements are part of instance data or which buffer they came from. Basically, your engine just has to ensure that the input assembler state you bind (i.e. vertex buffers, index buffer, input layout, and topology), when put together with a particular draw call, assembles the exact vertex layout your shader is looking for.

 

What's the point of having to have a shader blob for creating an input layout then? That's one of the things that kind of annoys me about DX11, so I'm glad I found that code to at least take some of the work off me. If everything "magically" has to match up anyway, why make the shader blob mandatory for creating the input layout? And what's the normal procedure for this - do I really need to have some sort of reference shader for every type of primitive (terrain, mesh, ...) I create?


Actually, I think it makes sense the way it is designed. If you think of the vertex shader blob as defining the input signature for the pipeline, and the input layout as defining the output layout of the input assembler, then requiring the blob to create the input layout lets you validate that they match. You can choose whatever (valid) signature for your vertex shader you want, and then use that shader with any input layout combination that ends up producing that signature.

 

So in effect, you can have a single vertex buffer, or a vertex buffer plus an instance buffer, and both can work with the same vertex shader - you just need to create a separate input layout object for each configuration (which you would have to do anyway for the input layout to work). So it seems perfectly logical to me - do you see it differently?
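
To make that concrete, here is a small sketch of two layouts validated against the same blob - the first interleaves everything in slot 0, the second streams the texcoords from a second buffer (pDevice, pBlob and the element formats are assumptions for the example, not code from this thread):

	// Layout A: position + texcoord interleaved in a single buffer (slot 0)
	const D3D11_INPUT_ELEMENT_DESC layoutA[] =
	{
		{ "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0,  D3D11_INPUT_PER_VERTEX_DATA, 0 },
		{ "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT,    0, 12, D3D11_INPUT_PER_VERTEX_DATA, 0 },
	};

	// Layout B: same signature, but texcoords come from a second buffer (slot 1)
	const D3D11_INPUT_ELEMENT_DESC layoutB[] =
	{
		{ "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0, D3D11_INPUT_PER_VERTEX_DATA, 0 },
		{ "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT,    1, 0, D3D11_INPUT_PER_VERTEX_DATA, 0 },
	};

	// Both succeed against the same blob, because both produce the
	// POSITION/TEXCOORD signature the vertex shader expects
	ID3D11InputLayout* pLayoutA = nullptr;
	ID3D11InputLayout* pLayoutB = nullptr;
	pDevice->CreateInputLayout(layoutA, 2, pBlob->GetBufferPointer(), pBlob->GetBufferSize(), &pLayoutA);
	pDevice->CreateInputLayout(layoutB, 2, pBlob->GetBufferPointer(), pBlob->GetBufferSize(), &pLayoutB);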

 

As far as making this work in an engine: in Hieroglyph 3 I just keep a copy of the blob around with the vertex shader class. Then, when I go to bind the input assembler state, my geometry class keeps a map of vertex shaders (as the key) to input layouts (as the value). If the input layout is already created, I just use it; otherwise, I generate one and store it for next time. It seems like a PITA at first, but once you get a wrapper around the functionality it isn't much of a concern anymore.

 

One hint from my experiences with that system - you have to be careful when using multithreaded rendering (i.e. deferred contexts on multiple threads) in such a system, since you can end up with multiple threads writing to the map at the same time.  Either mutex them, or create the input layouts serially rather than in parallel!
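
A minimal sketch of that cache, with the mutex from the hint above (the class and member names are hypothetical - this is the idea, not the actual Hieroglyph 3 code):

	#include <d3d11.h>
	#include <map>
	#include <mutex>

	class Geometry
	{
	public:
		ID3D11InputLayout* GetOrCreateLayout(ID3D11Device& device,
		                                     ID3D11VertexShader* pShader,
		                                     ID3DBlob& blob, // kept with the shader class
		                                     const D3D11_INPUT_ELEMENT_DESC* pElements,
		                                     UINT numElements)
		{
			// Guard the map - deferred contexts on multiple threads
			// may look up and insert concurrently
			std::lock_guard<std::mutex> lock(m_mutex);

			auto it = m_layouts.find(pShader);
			if (it != m_layouts.end())
				return it->second;

			// Not cached yet: create it against the shader's blob and store it
			ID3D11InputLayout* pLayout = nullptr;
			device.CreateInputLayout(pElements, numElements,
			                         blob.GetBufferPointer(), blob.GetBufferSize(),
			                         &pLayout);

			m_layouts[pShader] = pLayout;
			return pLayout;
		}

	private:
		std::mutex m_mutex;
		std::map<ID3D11VertexShader*, ID3D11InputLayout*> m_layouts;
	};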



So in effect, you can have a single vertex buffer, or a vertex buffer plus an instance buffer, and both can work with the same vertex shader - you just need to create a separate input layout object for each configuration (which you would have to do anyway for the input layout to work). So it seems perfectly logical to me - do you see it differently?

 

Aside from the fact that I can't really see how one shader would work for both normal and instanced meshes (I suppose that was just an example - or is there really a reasonable type of shader you can use both instanced and non-instanced? I don't see much use in storing the WVP matrix in the vertex buffer for non-instanced meshes, for example...), it does make sense that way. What makes less sense to me is having to validate against the vertex shader you are targeting. You set up all the variables yourself anyway, so being forced to validate seems a bit tedious. Maybe I don't need validation; maybe I trust myself enough to choose the right combination. It's surely nice to have the option, but why enforce it? Unless there is a serious gain in either performance or usability from checking whether the shaders fit at creation time, I unfortunately fail to see the point. (And what happens if I bind the wrong layout to the wrong shader anyway - a crash, an error, or just undefined behaviour?)

 


As far as making this work in an engine: in Hieroglyph 3 I just keep a copy of the blob around with the vertex shader class. Then, when I go to bind the input assembler state, my geometry class keeps a map of vertex shaders (as the key) to input layouts (as the value). If the input layout is already created, I just use it; otherwise, I generate one and store it for next time. It seems like a PITA at first, but once you get a wrapper around the functionality it isn't much of a concern anymore.

 

So that effectively means you have to bind the shader before the input layout, and keep the currently active vertex shader around somewhere? Hm, I guess that could work; I just have to ensure that the BindShader command always comes before the input layout command in my render queue... But for now, to get things running again (my primary goal is to plug the new render and gfx API into my latest game without much hassle), I think I'll just use a small naming convention in the shader for which buffer an input value should come from ("_v1" for vertex stream 1, for example), with some manual parsing at effect-generation time using the shader reflection, as sketched below. Once I've got things running again, I'll come back and optimize based on what you suggested - thanks for that!
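
A rough, untested sketch of that convention - note that HLSL splits trailing digits off a semantic into the SemanticIndex, so a semantic like "POSITION_V1" reflects as name "POSITION_V" with index 1, which the check below relies on:

	// If the reflected semantic name ends in "_V", interpret the semantic
	// index as the vertex stream the element should come from.
	// Purely a sketch of the planned convention, nothing final.
	UINT inputSlot = 0;
	const size_t len = strlen(paramDesc.SemanticName);
	if (len >= 2 && _stricmp(paramDesc.SemanticName + len - 2, "_V") == 0)
		inputSlot = paramDesc.SemanticIndex;

	elementDesc.InputSlot = inputSlot;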

 


One hint from my experiences with that system - you have to be careful when using multithreaded rendering (i.e. deferred contexts on multiple threads) in such a system, since you can end up with multiple threads writing to the map at the same time. Either mutex them, or create the input layouts serially rather than in parallel!

 

Thanks, I'll keep that in mind too. Multithreaded rendering is one of the things I'm keen on trying out once I've got things running; I think a system of multithreaded instance assignment to the render queues, combined with a deferred context for actually rendering the instances in the queue, could gain quite a bit of performance.



Aside from the fact that I can't really see how one shader would work for both normal and instanced meshes (I suppose that was just an example - or is there really a reasonable type of shader you can use both instanced and non-instanced? I don't see much use in storing the WVP matrix in the vertex buffer for non-instanced meshes, for example...), it does make sense that way. What makes less sense to me is having to validate against the vertex shader you are targeting. You set up all the variables yourself anyway, so being forced to validate seems a bit tedious. Maybe I don't need validation; maybe I trust myself enough to choose the right combination. It's surely nice to have the option, but why enforce it? Unless there is a serious gain in either performance or usability from checking whether the shaders fit at creation time, I unfortunately fail to see the point. (And what happens if I bind the wrong layout to the wrong shader anyway - a crash, an error, or just undefined behaviour?)
My example was contrived, I give you that :) But by doing it this way, the validation is done at creation time. The previous generation (i.e. pre-DX10) API had to do that checking at runtime when you bound the data. Due to the large number of ways you can configure the pipeline to match a vertex shader input signature, that was a lot of work - for every new pipeline configuration! In addition, the implementations were not very strict about their validation routines: some drivers would allow more flexibility than they should have, causing inconsistent behavior across GPU vendors and the reference device. I think it was a good move to make the validation part of the API itself (reducing variation across implementations) and to move it to a one-time check at creation (for performance). Once the input layout is created, I'm sure they have some simple way to validate at runtime that it works with a given vertex shader.

 

I'm not sure, but I think if you mismatch you get an error in the debug console and the draw call doesn't execute.

 

Regarding the implementation, I just have a series of 'state' objects that I use to represent my current pipeline state. The object that sets the input configuration can then grab a reference to the vertex shader and check its input layouts. It works pretty well, and I haven't found any situation where I really wanted something else (yet!).
