
Shader reflection cbuffer register?



#1 Juliean   GDNet+   -  Reputation: 2246


Posted 17 June 2013 - 12:30 PM

Hello,

 

After finally pushing myself to get started with DX11 (switching from DX9 isn't as hard as I thought), I've come across the shader reflection interface. It has already saved me a lot of tedious work (figuring out/hardcoding the input layout and the sizes of cbuffers), but there is one feature I'm not sure is missing or I've just been overlooking: is it possible to figure out the b-register of a cbuffer? I'm using this code to get the cbuffer description:

void Effect::CreateCBufferDummies(ID3D11ShaderReflection& reflection)
{
	// Get shader info
	D3D11_SHADER_DESC shaderDesc;
	reflection.GetDesc( &shaderDesc );

	for(size_t i = 0; i < shaderDesc.ConstantBuffers; i++)
	{
		ID3D11ShaderReflectionConstantBuffer* pConstantReflection = reflection.GetConstantBufferByIndex(i);

		D3D11_SHADER_BUFFER_DESC desc;
		pConstantReflection->GetDesc(&desc);

		m_cBuffer[i] = new d3d::ConstantBuffer(*m_pDevice, desc.Size);
	}
}

However, D3D11_SHADER_BUFFER_DESC doesn't seem to have a register member or anything similar, which means that if I have this code in my shader:

cbuffer instance : register(b1)
{
	matrix cWorld;
}

cbuffer stage : register(b4)
{
	matrix cViewProj;
}

those cbuffers would be written to the first two "slots" in my effect class. What I intend instead is to have a fixed set of possible cbuffers (effect, material, instance, pass, stage), each always at the same position, determined by its register. Is there any way to get this from shader reflection?


Edited by Juliean, 17 June 2013 - 12:55 PM.



#2 Jason Z   Crossbones+   -  Reputation: 4720


Posted 17 June 2013 - 04:30 PM

I don't recall it off the top of my head, but I wrote up a short article on reflection a while back (2009!): http://members.gamedev.net/JasonZ/Heiroglyph/D3D11ShaderReflection.pdf

 

You can also check out the Hieroglyph 3 source code, as I certainly check the register value in some way in my reflection code - all of my shader resource information is loaded at startup, so the register info has to be there somewhere.  Start out with the RendererDX11::LoadShader method, and you can work backward from there to see how the cbuffers are indexed.



#3 osmanb   Crossbones+   -  Reputation: 1462


Posted 17 June 2013 - 07:04 PM

You need to use GetResourceBindingDesc (or GetResourceBindingDescByName) to get the information about how the cbuffer is bound to the shader. That gives you a D3D11_SHADER_INPUT_BIND_DESC, and that contains a BindPoint, which is the buffer register number.
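
For illustration, a minimal sketch of how that could plug into the loop from the original post (m_cBuffer, d3d::ConstantBuffer and m_pDevice are the poster's own names, carried over here as assumptions):

// Sketch: index the cbuffers by their b-register instead of by declaration order.
void Effect::CreateCBufferDummies(ID3D11ShaderReflection& reflection)
{
	D3D11_SHADER_DESC shaderDesc;
	reflection.GetDesc(&shaderDesc);

	// Walk all bound resources; only constant buffers are of interest here
	for(UINT i = 0; i < shaderDesc.BoundResources; i++)
	{
		D3D11_SHADER_INPUT_BIND_DESC bindDesc;
		reflection.GetResourceBindingDesc(i, &bindDesc);

		if(bindDesc.Type != D3D_SIT_CBUFFER)
			continue;

		// Fetch the matching buffer description by name to get its size
		ID3D11ShaderReflectionConstantBuffer* pConstantReflection =
			reflection.GetConstantBufferByName(bindDesc.Name);

		D3D11_SHADER_BUFFER_DESC desc;
		pConstantReflection->GetDesc(&desc);

		// BindPoint is the b-register, e.g. 1 for register(b1)
		m_cBuffer[bindDesc.BindPoint] = new d3d::ConstantBuffer(*m_pDevice, desc.Size);
	}
}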



#4 Juliean   GDNet+   -  Reputation: 2246


Posted 18 June 2013 - 01:30 AM


I don't recall it off the top of my head, but I wrote up a short article on reflection a while back (2009!): http://members.gamedev.net/JasonZ/Heiroglyph/D3D11ShaderReflection.pdf

 

Thanks, your article was one of the first, and unfortunately one of the few, I've read on this topic. The other one happened to tackle exactly what I needed most (automatic input layout), but there seem to be very few resources apart from that...

 

 

 


You need to use GetResourceBindingDesc (or GetResourceBindingDescByName) to get the information about how the cbuffer is bound to the shader. That gives you a D3D11_SHADER_INPUT_BIND_DESC, and that contains a BindPoint, which is the buffer register number.

 

Thanks, that's it! I was already starting to dig into the Hieroglyph 3 source code, but a direct solution is always faster.

 

While we're at it, and since I mentioned that I automated the input layout generation: is there any chance to mark/read out the input slot and/or instancing data in the shader? Say I were to split my meshes into separate buffers for position, normal and texcoords, can I somehow read that out from shader reflection too, or do I have to come up with a custom naming convention for this myself?

void Effect::CreateInputLayout(ID3D11ShaderReflection& reflection, ID3DBlob& shaderBlob)
{
	// Get shader info
	D3D11_SHADER_DESC shaderDesc;
	reflection.GetDesc( &shaderDesc );

	// Read input layout description from shader info
	int byteOffset = 0;
	std::vector<D3D11_INPUT_ELEMENT_DESC> inputLayoutDesc;
	for ( UINT i = 0; i < shaderDesc.InputParameters; i++ )
	{
		D3D11_SIGNATURE_PARAMETER_DESC paramDesc;
		reflection.GetInputParameterDesc(i, &paramDesc);

		// fill out input element desc
		D3D11_INPUT_ELEMENT_DESC elementDesc;
		elementDesc.SemanticName = paramDesc.SemanticName;
		elementDesc.SemanticIndex = paramDesc.SemanticIndex;
		elementDesc.InputSlot = 0; //???
		elementDesc.AlignedByteOffset = byteOffset;
		elementDesc.InputSlotClass = D3D11_INPUT_PER_VERTEX_DATA; //???
		elementDesc.InstanceDataStepRate = 0; //???

		// determine DXGI format from the component mask and type
		if ( paramDesc.Mask == 1 )
		{
			if ( paramDesc.ComponentType == D3D_REGISTER_COMPONENT_UINT32 ) elementDesc.Format = DXGI_FORMAT_R32_UINT;
			else if ( paramDesc.ComponentType == D3D_REGISTER_COMPONENT_SINT32 ) elementDesc.Format = DXGI_FORMAT_R32_SINT;
			else if ( paramDesc.ComponentType == D3D_REGISTER_COMPONENT_FLOAT32 ) elementDesc.Format = DXGI_FORMAT_R32_FLOAT;
			byteOffset += 4;
		}
		else if ( paramDesc.Mask <= 3 )
		{
			if ( paramDesc.ComponentType == D3D_REGISTER_COMPONENT_UINT32 ) elementDesc.Format = DXGI_FORMAT_R32G32_UINT;
			else if ( paramDesc.ComponentType == D3D_REGISTER_COMPONENT_SINT32 ) elementDesc.Format = DXGI_FORMAT_R32G32_SINT;
			else if ( paramDesc.ComponentType == D3D_REGISTER_COMPONENT_FLOAT32 ) elementDesc.Format = DXGI_FORMAT_R32G32_FLOAT;
			byteOffset += 8;
		}
		else if ( paramDesc.Mask <= 7 )
		{
			if ( paramDesc.ComponentType == D3D_REGISTER_COMPONENT_UINT32 ) elementDesc.Format = DXGI_FORMAT_R32G32B32_UINT;
			else if ( paramDesc.ComponentType == D3D_REGISTER_COMPONENT_SINT32 ) elementDesc.Format = DXGI_FORMAT_R32G32B32_SINT;
			else if ( paramDesc.ComponentType == D3D_REGISTER_COMPONENT_FLOAT32 ) elementDesc.Format = DXGI_FORMAT_R32G32B32_FLOAT;
			byteOffset += 12;
		}
		else if ( paramDesc.Mask <= 15 )
		{
			if ( paramDesc.ComponentType == D3D_REGISTER_COMPONENT_UINT32 ) elementDesc.Format = DXGI_FORMAT_R32G32B32A32_UINT;
			else if ( paramDesc.ComponentType == D3D_REGISTER_COMPONENT_SINT32 ) elementDesc.Format = DXGI_FORMAT_R32G32B32A32_SINT;
			else if ( paramDesc.ComponentType == D3D_REGISTER_COMPONENT_FLOAT32 ) elementDesc.Format = DXGI_FORMAT_R32G32B32A32_FLOAT;
			byteOffset += 16;
		}

		// save element desc
		inputLayoutDesc.push_back(elementDesc);
	}

	// Try to create the input layout
	m_pLayout = new d3d::InputLayout(*m_pDevice, &inputLayoutDesc[0], inputLayoutDesc.size(), shaderBlob);
}

I'll have another look at Hieroglyph 3 when I'm home, but if someone has an answer in the meantime, I'd be glad too.


Edited by Juliean, 18 June 2013 - 02:19 AM.


#5 Jason Z   Crossbones+   -  Reputation: 4720


Posted 18 June 2013 - 04:52 AM


While we're at it, and since I mentioned that I automated the input layout generation: is there any chance to mark/read out the input slot and/or instancing data in the shader? Say I were to split my meshes into separate buffers for position, normal and texcoords, can I somehow read that out from shader reflection too, or do I have to come up with a custom naming convention for this myself?
Unfortunately, no. Since the input assembler puts together all of the vertices before they reach the vertex shader, there isn't any info available there. You can check the vertex elements and look for some of the system values, but those are really only hints, since they don't tell you which other elements are part of instance data or which buffer they came from. Basically, your engine just has to ensure that the input assembler state you set (i.e. vertex buffers, index buffers, input layout, and topology), when put together with a particular draw call, assembles the correct vertex layout that your shader is looking for.
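
One common workaround, shown as a hypothetical sketch below (none of these table entries come from reflection; the mapping is an engine-side convention that has to match how the vertex buffers are actually bound), is to key the slot and classification off the semantic name while building the element descriptions:

#include <string>
#include <unordered_map>

// Hypothetical engine-side table: which slot/classification a semantic is expected in.
struct SlotInfo
{
	UINT slot;
	D3D11_INPUT_CLASSIFICATION classification;
	UINT stepRate;
};

static const std::unordered_map<std::string, SlotInfo> s_semanticSlots =
{
	{ "POSITION", { 0, D3D11_INPUT_PER_VERTEX_DATA,   0 } },
	{ "NORMAL",   { 1, D3D11_INPUT_PER_VERTEX_DATA,   0 } },
	{ "TEXCOORD", { 2, D3D11_INPUT_PER_VERTEX_DATA,   0 } },
	{ "WORLD",    { 3, D3D11_INPUT_PER_INSTANCE_DATA, 1 } }, // per-instance matrix rows
};

// Inside the reflection loop, instead of hardcoding slot 0:
auto it = s_semanticSlots.find(paramDesc.SemanticName);
if(it != s_semanticSlots.end())
{
	elementDesc.InputSlot = it->second.slot;
	elementDesc.InputSlotClass = it->second.classification;
	elementDesc.InstanceDataStepRate = it->second.stepRate;
	// With multiple slots the byte offset must be tracked per slot,
	// or D3D11_APPEND_ALIGNED_ELEMENT can be used instead.
}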

#6 Juliean   GDNet+   -  Reputation: 2246


Posted 18 June 2013 - 12:55 PM


Unfortunately, no. Since the input assembler puts together all of the vertices before they reach the vertex shader, there isn't any info available there. You can check the vertex elements and look for some of the system values, but those are really only hints, since they don't tell you which other elements are part of instance data or which buffer they came from. Basically, your engine just has to ensure that the input assembler state you set (i.e. vertex buffers, index buffers, input layout, and topology), when put together with a particular draw call, assembles the correct vertex layout that your shader is looking for.

 

What's the point of having to pass a shader blob when creating an input layout then? That's one of the things that kind of annoys me about DX11, so I'm glad I found that code to take at least some of the work off me. If everything "magically" has to match up anyway, why make the shader blob mandatory for creating the input layout? And what's the normal procedure here, do I really need some sort of reference shader for every type of primitive (terrain, mesh, ...) I create?



#7 Jason Z   Crossbones+   -  Reputation: 4720


Posted 18 June 2013 - 08:04 PM

Actually, I think it makes sense to be designed the way it is. If you think about the vertex shader blob as defining the input signature for the pipeline, and the input layout as defining the output layout of the input assembler, then requiring the blob to create the input layout lets you validate that they match. You can choose whatever (valid) signature for your vertex shader you want, and then you can use that shader with any input layout combination that ends up producing that signature.
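
To make that concrete, here is a bare-bones sketch of the underlying call that a wrapper such as the d3d::InputLayout above would end up making (pDevice is assumed to be the ID3D11Device; inputLayoutDesc and shaderBlob are the names from the earlier post):

// Sketch: the blob passed in is only used for the input signature embedded in the
// bytecode; creation fails right here if the element descriptions don't match it.
ID3D11InputLayout* pLayout = nullptr;
HRESULT hr = pDevice->CreateInputLayout(
	&inputLayoutDesc[0],                       // element descriptions built from reflection
	static_cast<UINT>(inputLayoutDesc.size()),
	shaderBlob.GetBufferPointer(),             // compiled vertex shader bytecode
	shaderBlob.GetBufferSize(),
	&pLayout);

if(FAILED(hr))
{
	// A signature mismatch (or invalid description) is reported at creation time.
}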

 

So in effect, you can have a single vertex buffer, you can have a vertex and an instance buffer, and both can work with the same vertex shader - you just need to create a separate input layout object for each configuration (which you would have to do anyways for the input layout to work).  So it seems perfectly logical to me - do you see it differently?

 

As far as making this work in an engine, in Hieroglyph 3 I just keep a copy of the blob around with the vertex shader class.  Then when I go to bind the input assembler state, my geometry class keeps a map of vertex shaders (as the key) to input layouts (as the value).  If the input layout is already created, then I just use it.  Otherwise, I generate one and then store it for next time.  It seems like a PITA at first, but once you get a wrapper around the functionality then it isn't too much concern anymore.

 

One hint from my experiences with that system - you have to be careful when using multithreaded rendering (i.e. deferred contexts on multiple threads) in such a system, since you can end up with multiple threads writing to the map at the same time.  Either mutex them, or create the input layouts serially rather than in parallel!
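
As a rough illustration of that (a minimal sketch, not Hieroglyph 3's actual code; VertexShaderDX11 and CreateLayoutFromReflection are placeholder names), the map plus a mutex might look like this:

#include <map>
#include <mutex>

class VertexShaderDX11; // placeholder for whatever wraps the shader and its blob

class GeometryLayoutCache
{
public:
	// Return the cached input layout for this shader, creating it on first use.
	ID3D11InputLayout* GetOrCreate(VertexShaderDX11* pShader)
	{
		// Guard the map against concurrent access from deferred-context threads
		std::lock_guard<std::mutex> lock(m_mutex);

		auto it = m_layouts.find(pShader);
		if(it != m_layouts.end())
			return it->second;

		// Not cached yet: build it from this geometry's elements and the shader's blob
		ID3D11InputLayout* pLayout = CreateLayoutFromReflection(pShader);
		m_layouts[pShader] = pLayout;
		return pLayout;
	}

private:
	ID3D11InputLayout* CreateLayoutFromReflection(VertexShaderDX11* pShader); // placeholder

	std::mutex m_mutex;
	std::map<VertexShaderDX11*, ID3D11InputLayout*> m_layouts;
};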



#8 Juliean   GDNet+   -  Reputation: 2246


Posted 19 June 2013 - 03:40 AM


So in effect, you can have a single vertex buffer, you can have a vertex and an instance buffer, and both can work with the same vertex shader - you just need to create a separate input layout object for each configuration (which you would have to do anyways for the input layout to work). So it seems perfectly logical to me - do you see it differently?

 

Aside from the fact that I can't really see how one shader would work for both normal and instanced meshes (I suppose that was just an example, or is there really a reasonable kind of shader you can use both instanced and non-instanced? Storing the WVP matrix in the vertex buffer for non-instanced meshes doesn't seem very useful, for example...), it does make sense that way, but not so much that you have to validate against the vertex shader you are targeting. You set everything up yourself anyway, so it seems a bit tedious that they force you to validate. Maybe I don't need validation, maybe I trust myself to choose the right combination; it's surely nice to have the option, but why enforce it? Unless there is some serious gain in either performance or usability from simply checking whether the shaders fit at creation time, I unfortunately fail to see the point (and what happens if I bind the wrong layout with the wrong shader anyway: a crash, an error, or just undefined behaviour?).

 


As far as making this work in an engine, in Hieroglyph 3 I just keep a copy of the blob around with the vertex shader class. Then when I go to bind the input assembler state, my geometry class keeps a map of vertex shaders (as the key) to input layouts (as the value). If the input layout is already created, then I just use it. Otherwise, I generate one and then store it for next time. It seems like a PITA at first, but once you get a wrapper around the functionality then it isn't too much concern anymore.

 

So that effectively means you have to bind the shader before the input layout, and keep the currently active vertex shader around somewhere? Hm, I guess that could work, I just have to ensure the BindShader command is always issued before the input layout command in my render queue... But for now, to get things running again (plugging the new render and gfx API into my latest game without much hassle is the primary goal), I think I'll just use a small naming convention in the shader for which buffer an input value should come from ("_v1" for vertex stream 1, e.g.), with some manual parsing at effect generation using shader reflection. Once I've got things running again, I'll track back and optimize things based on what you suggested, thanks for that!

 


One hint from my experiences with that system - you have to be careful when using multithreaded rendering (i.e. deferred contexts on multiple threads) in such a system, since you can end up with multiple threads writing to the map at the same time. Either mutex them, or create the input layouts serially rather than in parallel!

 

Thanks, I'll keep that in mind too. Multithreaded rendering is one of the things I'm keen on trying out once I've got things running; I think a system of multithreaded instance assignment to the render queues, combined with a deferred context for actually rendering the instances in each queue, could gain quite a bit of performance.



#9 Jason Z   Crossbones+   -  Reputation: 4720


Posted 19 June 2013 - 04:56 AM


Aside from the fact that I can't really see how one shader would work for both normal and instanced meshes (I suppose that was just an example, or is there really a reasonable kind of shader you can use both instanced and non-instanced? Storing the WVP matrix in the vertex buffer for non-instanced meshes doesn't seem very useful, for example...), it does make sense that way, but not so much that you have to validate against the vertex shader you are targeting. You set everything up yourself anyway, so it seems a bit tedious that they force you to validate. Maybe I don't need validation, maybe I trust myself to choose the right combination; it's surely nice to have the option, but why enforce it? Unless there is some serious gain in either performance or usability from simply checking whether the shaders fit at creation time, I unfortunately fail to see the point (and what happens if I bind the wrong layout with the wrong shader anyway: a crash, an error, or just undefined behaviour?).
My example was contrived, I give you that :) But by doing it this way, the validation is done at creation time. The previous generation (i.e. pre-DX10) API had to do that checking at runtime when you bound the data. Due to the large number of ways you can configure the pipeline to match a vertex shader input signature, that was a lot of work to do - for every new pipeline configuration! In addition, the implementations were also not very strict about their validation routines. Some drivers would allow more flexibility than they should have, causing inconsistent behavior across GPU vendors and the reference device. I think it was a good move to make the validation part of the API itself (reduced variation across implementations) and to do it once at creation time (for performance). Once the input layout is created, I'm sure they have some simple way to validate that it works with a given vertex shader at runtime.

 

I'm not sure, but I think if you mismatch you get an error in the debug console and the draw call doesn't execute.

 

Regarding the implementation, I just have a series of 'state' objects that I use to represent my current pipeline state. Then the object that is setting the input configuration can grab a reference to the vertex shader and check its input layouts. It works pretty well, and I haven't found any situations where I really wanted something else (yet!).





