
# DX11: Difference between two ways of assigning values to variables in an HLSL constant buffer


## Recommended Posts

As far as I know, there are two ways of passing values to the variables in an HLSL constant buffer.


```hlsl
cbuffer cbGeometryRender : register( b0 )
{
    float4x4 WorldMatrix;
    float4x4 ViewMatrix;
    float4x4 ProjMatrix;
    float4x4 WVPMatrix;
    float4x4 VPMatrix;
    float3   EyePos;
    float3   LightDir; // starts at the sun
    float4   SplitPos;
};
```


The first way is to use the Effects11 library. For example:


```cpp
ID3DX11Effect *pEffect; // initialized when the .fx file is compiled
ID3DX11EffectVectorVariable *pEyePos = pEffect->GetVariableByName( "EyePos" )->AsVector();
pEyePos->SetFloatVector( ... ); // takes a const float* pointing at the new value
```
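
For completeness, here is a sketch of the initialization that the comment above glosses over. It assumes the d3dcompiler API and a hypothetical file name geometry.fx; Effects11 requires compiling with the fx_5_0 profile and a null entry point:

```cpp
ID3DBlob *pCode = NULL, *pErrors = NULL;
// Compile the whole .fx file (entry point must be NULL for fx profiles).
HRESULT hr = D3DCompileFromFile( L"geometry.fx", NULL, NULL,
                                 NULL, "fx_5_0", 0, 0, &pCode, &pErrors );
if ( SUCCEEDED( hr ) )
{
    // Build the effect from the compiled fx_5_0 blob.
    hr = D3DX11CreateEffectFromMemory( pCode->GetBufferPointer(),
                                       pCode->GetBufferSize(),
                                       0, pd3dDevice, &pEffect );
}
```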


The second way is to create and manage the constant buffer yourself, with a desc like the following:

```cpp
D3D11_BUFFER_DESC bd = {};                            // zero-init so no field is left undefined
bd.Usage          = D3D11_USAGE_DYNAMIC;              // CPU-writable via Map/Unmap
bd.ByteWidth      = sizeof( CONSTANT_BUFFER_STRUCT ); // must be a multiple of 16 bytes
bd.BindFlags      = D3D11_BIND_CONSTANT_BUFFER;
bd.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;           // allows D3D11_MAP_WRITE_DISCARD
bd.MiscFlags      = 0;
hr = pd3dDevice->CreateBuffer( &bd, NULL, &g_pCB );
```
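
The CONSTANT_BUFFER_STRUCT here is assumed to mirror the HLSL cbuffer at the top of the thread. A sketch of such a mirror (DirectXMath types are an assumption; the key point is the explicit padding needed to match HLSL's 16-byte packing rules):

```cpp
#include <DirectXMath.h>

struct CONSTANT_BUFFER_STRUCT
{
    DirectX::XMFLOAT4X4 WorldMatrix;
    DirectX::XMFLOAT4X4 ViewMatrix;
    DirectX::XMFLOAT4X4 ProjMatrix;
    DirectX::XMFLOAT4X4 WVPMatrix;
    DirectX::XMFLOAT4X4 VPMatrix;
    DirectX::XMFLOAT3   EyePos;
    float               pad0;     // a float3 cannot share its 16-byte register with the next float3
    DirectX::XMFLOAT3   LightDir;
    float               pad1;     // SplitPos must start on a 16-byte boundary
    DirectX::XMFLOAT4   SplitPos;
};
```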


On each frame, we map g_pCB, write the new values, and unmap it. After that, we bind the buffer to slot b0 for each shader stage.
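
For reference, a minimal sketch of that per-frame update (pContext and the app-side values such as world are hypothetical names; the struct is the mirror sketched above):

```cpp
D3D11_MAPPED_SUBRESOURCE mapped;
if ( SUCCEEDED( pContext->Map( g_pCB, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped ) ) )
{
    CONSTANT_BUFFER_STRUCT *pData = (CONSTANT_BUFFER_STRUCT*)mapped.pData;
    pData->WorldMatrix = world;   // 'world' is a hypothetical app-side value
    // ... fill in the remaining members ...
    pContext->Unmap( g_pCB, 0 );
}
pContext->VSSetConstantBuffers( 0, 1, &g_pCB ); // slot 0 corresponds to register(b0)
pContext->PSSetConstantBuffers( 0, 1, &g_pCB );
```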

Does anyone know what the differences are between these two methods?

Which is better?

Can the Effects11 library be used with a deferred context?

Can the two ways be used together, i.e., compiling the .fx file with the Effects11 library while using the second way to pass the values to the GPU?



The effects framework is just a wrapper on top of core D3D11 functionality. Ultimately it will create, map, and bind buffers just like you've described in your second way. If you step through the calls in your debugger or capture a frame in PIX you'll be able to see what it's doing. The main difference is that it might be more convenient than dealing with the buffer manually, although it may consume more resources and/or do things in a sub-optimal way.


Thanks for the help.

I tried mixing the two ways together. In each frame, I do things in the following order:

1. IASetInputLayout
2. IASetPrimitiveTopology
3. ID3DX11EffectTechnique->GetPassByIndex(0)->Apply( 0, DeviceContext )
4. Map the constant buffer
5. Assign the values
6. Unmap the constant buffer
7. VSSetConstantBuffers
8. PSSetConstantBuffers
9. IASetVertexBuffers
10. IASetIndexBuffer
11. DrawIndexed

But if I render in the order above, the values are not passed correctly, so the scene does not render as expected.

Is the order wrong, or can the two ways simply not be mixed?


The Effects11 source code is available and can be examined; you'll find it at e.g. C:\Program Files (x86)\Microsoft DirectX SDK (June 2010)\Samples\C++\Effects11

Looking through how it handles cbuffer updates, the way it works is that it builds a chunk of memory representing the cbuffer data, then each Effect->Set call will root through that and copy your data to the appropriate place (setting a dirty flag while doing so), like so:

```cpp
template<typename IBaseInterface, BOOL IsAnnotation>
HRESULT TIntScalarVariable<IBaseInterface, IsAnnotation>::SetFloat(float Value)
{
    LPCSTR pFuncName = "ID3DX11EffectScalarVariable::SetFloat";
    if (IsAnnotation) return AnnotationInvalidSetCall(pFuncName);
    DirtyVariable();
    return CopyScalarValue<ETVT_Float, ETVT_Int, float, FALSE>(Value, Data.pNumericInt, pFuncName);
}
```


Then, before draw calls are issued, it checks if the dirty flag is set and if so it uploads the entire buffer via UpdateSubresource, like so:

```cpp
// Update constant buffer contents if necessary
D3DX11INLINE void CheckAndUpdateCB_FX(ID3D11DeviceContext *pContext, SConstantBuffer *pCB)
{
    if (pCB->IsDirty && !pCB->IsNonUpdatable)
    {
        // CB out of date; rebuild it
        pContext->UpdateSubresource(pCB->pD3DObject, 0, NULL, pCB->pBackingStore, pCB->Size, pCB->Size);
        pCB->IsDirty = FALSE;
    }
}
```


You can probably do a little better than that yourself (you can also do much worse if you don't manage your cbuffers properly, of course).
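
For example, one common improvement (a sketch, not how Effects11 itself works) is to split data by update frequency into separate cbuffers, each with its own dirty flag, so that one changed value no longer forces the whole monolithic buffer to be re-uploaded:

```cpp
#include <d3d11.h>
#include <cstring>

// Upload a dynamic cbuffer only when its own dirty flag is set.
// The per-frame/per-object split itself is hypothetical.
void UploadIfDirty( ID3D11DeviceContext *pCtx, ID3D11Buffer *pBuf,
                    const void *pSrc, size_t size, bool &dirty )
{
    if ( !dirty ) return;
    D3D11_MAPPED_SUBRESOURCE m;
    if ( SUCCEEDED( pCtx->Map( pBuf, 0, D3D11_MAP_WRITE_DISCARD, 0, &m ) ) )
    {
        memcpy( m.pData, pSrc, size );
        pCtx->Unmap( pBuf, 0 );
        dirty = false;
    }
}
```

With, say, one cbuffer for per-frame data (view/projection matrices, eye position) and another for per-object data (world matrix), a per-object change re-uploads a few dozen bytes instead of the entire block.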

It's probably not a good idea to mix the two ways as all it takes is one dirty flag (i.e. one cbuffer variable set via Effects) to cause the entire cbuffer to be marked dirty, which will stomp over anything you set yourself.

