clashie

DX11 Shader semantics


I want to use a matrix for my instance position and stuff instead of just a vector, but I don't see a semantic for anything more than float4. I'm using a second vertex buffer for my instance data.

Looking here, I see nothing for a matrix / float4x4.
http://msdn.microsoft.com/en-us/library/windows/desktop/bb509647

This, however, makes it look like I can do it, but I have to declare 4 separate elements in the input layout, and then I can simply use it as a float4x4 in the shader? Is that right?
http://www.gamedev.net/topic/608857-dx11-how-to-transfer-matrix-to-vs/

But what is WORLDVIEW? Why isn't it on the MSDN list?

Also, should I be striving to pad my structures out to a specific size?

Which is part of the problem: why doesn't MSDN have a complete list of these things? It doesn't even mention that WORLDVIEW exists. :/

Ok, before this I just used a single vector for my instance position, but I want to be able to rotate, scale, etc., so I figured I'd just use a matrix instead.

So before, I just did something like this:
layout[2].SemanticName		= "TEXCOORD";
layout[2].SemanticIndex		= 0;
layout[2].Format		= DXGI_FORMAT_R32G32B32A32_FLOAT;
layout[2].InputSlot		= 1;
layout[2].AlignedByteOffset	= 0;
layout[2].InputSlotClass	= D3D11_INPUT_PER_INSTANCE_DATA;
layout[2].InstanceDataStepRate	= 1;

...

VOut vs(float4 pos : POSITION, float4 col : COLOR, float4 ins : TEXCOORD0)
{
	VOut output;
	
	pos.x += ins.x;
	pos.y += ins.y;
...

Ok, bam, works.

But I'm not sure what I'm doing wrong here trying to access my matrix. I changed my input layout to this:
layout[2].SemanticName		= "TEXCOORD";
layout[2].SemanticIndex		= 0;
layout[2].Format		= DXGI_FORMAT_R32G32B32A32_FLOAT;
layout[2].InputSlot		= 1;
layout[2].AlignedByteOffset	= 0;
layout[2].InputSlotClass	= D3D11_INPUT_PER_INSTANCE_DATA;
layout[2].InstanceDataStepRate	= 1;

layout[3].SemanticName		= "TEXCOORD";
layout[3].SemanticIndex		= 1;
layout[3].Format		= DXGI_FORMAT_R32G32B32A32_FLOAT;
layout[3].InputSlot		= 1;
layout[3].AlignedByteOffset	= D3D11_APPEND_ALIGNED_ELEMENT;
layout[3].InputSlotClass	= D3D11_INPUT_PER_INSTANCE_DATA;
layout[3].InstanceDataStepRate	= 1;

layout[4].SemanticName		= "TEXCOORD";
layout[4].SemanticIndex		= 2;
layout[4].Format		= DXGI_FORMAT_R32G32B32A32_FLOAT;
layout[4].InputSlot		= 1;
layout[4].AlignedByteOffset	= D3D11_APPEND_ALIGNED_ELEMENT;
layout[4].InputSlotClass	= D3D11_INPUT_PER_INSTANCE_DATA;
layout[4].InstanceDataStepRate	= 1;

layout[5].SemanticName		= "TEXCOORD";
layout[5].SemanticIndex		= 3;
layout[5].Format		= DXGI_FORMAT_R32G32B32A32_FLOAT;
layout[5].InputSlot		= 1;
layout[5].AlignedByteOffset	= D3D11_APPEND_ALIGNED_ELEMENT;
layout[5].InputSlotClass	= D3D11_INPUT_PER_INSTANCE_DATA;
layout[5].InstanceDataStepRate	= 1;

...

The post I linked makes it look like I can just do something like this:
VOut vs(float4 pos : POSITION, float4 col : COLOR, float4x4 ins : TEXCOORD)
{
	VOut output;

	pos = mul(pos, ins);
...

The shader itself compiles fine, but creating the input layout immediately fails with:

D3D11: ERROR: ID3D11Device::CreateInputLayout: The provided input signature expects to read an element with SemanticName/Index: 'TEXCOORD'/1, but the declaration doesn't provide a matching name. [ STATE_CREATION ERROR #163: CREATEINPUTLAYOUT_MISSINGELEMENT ]

Actually, never mind, I solved that. I goofed somewhere and wasn't giving CreateInputLayout() the right element count. I have pretty triangles spread about again.
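
For reference, the call ends up looking roughly like this; a minimal sketch, assuming layout[0] and layout[1] are the POSITION and COLOR elements, with placeholder names for the device and the compiled shader blob:

// NumElements has to count every element in the array -- all six here,
// including the four TEXCOORD0..TEXCOORD3 rows that make up the matrix.
ID3D11InputLayout* inputLayout = nullptr;
HRESULT hr = device->CreateInputLayout(
	layout,                      // the D3D11_INPUT_ELEMENT_DESC array above
	6,                           // POSITION, COLOR, TEXCOORD0..TEXCOORD3
	vsBlob->GetBufferPointer(),  // compiled vertex shader bytecode
	vsBlob->GetBufferSize(),
	&inputLayout);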

Still curious as to why the MSDN list of semantics is apparently incomplete, and if I should be trying to pad stuff out to a certain size.


Padding your structures is generally the better option; if you don't want it, you can use #pragma pack to control the memory alignment of each struct.
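
For example, a minimal sketch of a padded per-instance struct; the "tint" field is hypothetical, just to show the idea:

#include <DirectXMath.h>

// Per-instance data: a 4x4 world matrix is 64 bytes and needs no padding by
// itself. D3D11 doesn't require vertex-buffer strides to be multiples of 16
// (that rule is for constant buffers), but explicit padding keeps the layout
// predictable if the struct grows.
struct InstanceData
{
	DirectX::XMFLOAT4X4 world;  // maps to TEXCOORD0..TEXCOORD3 in the layout
	DirectX::XMFLOAT3   tint;   // hypothetical extra field, 12 bytes...
	float               pad;    // ...plus 4 bytes of padding = 16
};
static_assert(sizeof(InstanceData) % 16 == 0, "instance stride padded to 16");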

 

From my experience, MSDN generally documents only what is introduced as part of Microsoft's own code.

WORLDVIEW is more of a general graphics concept, and they seem to assume it is already known to programmers.

 

Glad you could solve the issue on your own!

Seems kind of weird that they just leave things off their list, but whatever, I guess it doesn't really matter.

Now that things are working, I made it into a starfield. Pretty neat how I can just draw the whole thing in one call, and they can all be uniquely scaled, rotated, etc.
https://dl.dropbox.com/u/10565193/d/starfield.png
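
For reference, the single-call draw is roughly this; a sketch with placeholder buffer, type, and count names:

// Both streams bound at once: slot 0 holds per-vertex data, slot 1 holds the
// per-instance matrices. One DrawInstanced call then submits every star.
UINT strides[2] = { sizeof(Vertex), sizeof(InstanceData) };
UINT offsets[2] = { 0, 0 };
ID3D11Buffer* buffers[2] = { vertexBuffer, instanceBuffer };
context->IASetVertexBuffers(0, 2, buffers, strides, offsets);
context->DrawInstanced(vertexCount, starCount, 0, 0);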


edit: Also, solving things shortly after I post seems to be a trend of mine. I'm apparently incapable of working things out before I get annoyed and go ask for help. :)

Nice, a "first triangle" starfield. Congrats.

"Still curious as to why the MSDN list of semantics is apparently incomplete, and if I should be trying to pad stuff out to a certain size."

A complete list would be quite big, since you can name them any way you want (except for the system-value semantics), as long as the names match.
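
A minimal sketch to illustrate; "WORLDVIEW" here is just an arbitrary label:

// A user semantic behaves exactly like TEXCOORD as long as the input layout
// declares matching name/index pairs (WORLDVIEW0..WORLDVIEW3 for the four
// rows of the matrix).
float4 vs(float4 pos : POSITION, float4x4 wv : WORLDVIEW) : SV_POSITION
{
	return mul(pos, wv);
}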


Just to be clear, MSDN does not list WORLDVIEW because you can use any semantic name you want. WORLDVIEW is not a system value. You could have named it WV or WORLDVIEWMATRIX if you wanted, as long as the names match in both the input layout and the shader parameters.

 

You could even use POS instead of POSITION, or TEXTURECOORDINATE instead of the common TEXCOORD, if you wanted. System values (SV_) are different: they are values that the non-programmable stages of the graphics pipeline, such as the rasterizer, supply or consume. Those are all documented on MSDN, as you've seen.
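
For example, a tiny sketch (the usual fullscreen-triangle vertex shader) showing system values in use:

// System values are the exception: the non-programmable stages produce or
// consume them, so their names are fixed. SV_VertexID is supplied by the
// input assembler; SV_Position is consumed by the rasterizer.
float4 vs(uint id : SV_VertexID) : SV_Position
{
	// generate a triangle covering the screen, purely to show the SVs in use
	float2 uv = float2((id << 1) & 2, id & 2);
	return float4(uv * 2.0f - 1.0f, 0.0f, 1.0f);
}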

 

I don't know why Microsoft doesn't mention that you can choose any semantic name as long as it's not a system value, but I'm guessing it's to keep things standard.



The docs for semantics are really weird because they cover both DX9-style HLSL (SM2.x and SM3.0) and DX10/DX11-style HLSL (SM4.x and SM5.0). In DX9 there were no user-defined semantics; there was a fixed set. Those are all of the uppercase semantics you see in the first two tables on that documentation page (POSITION, TEXCOORD, COLOR, etc.). When DX10 came along they extended the page to include the SV_ system-value semantics as well, but the page alone doesn't really make it clear that all non-system-value semantics are completely arbitrary in DX10/DX11. The issue is made a little worse by the fact that a lot of people still use the DX9 semantics in their DX10/DX11 shader code, even though they don't have to anymore.

