hick18

DX11 DirectX 11 Questions


So DX11 comes with two new buffer types, giving a total of three:

- Constant Buffer
- Read/Write Buffer
- Read/Write Structured Buffer

The last two are the new ones; they're similar to constant buffers, except that they can be written to in compute and pixel shaders. What's the purpose of the structured buffer, though? Is setting up the structure size purely a convenience factor? It would be just as easy to fill a normal buffer with the data and do your own indexing in the shader.

If you don't need the ability to write, do the last two hold any benefit over constant buffers?

Can someone suggest some examples of where you would write to a buffer using a pixel shader?

I've noticed that the geometry shader actually comes after the tessellation stage. I find this odd. When would you ever need to create geometry after the mesh has been tessellated? Surely it would have made more sense to put it before.

Hi.

There are even more new buffer types; look closer. None of them are anything like constant buffers! Recall that constant buffers are very limited in size. All the new types are accessed as SRVs/UAVs, which simply means they go through the texture sampling units (or whatever the proper name is), just like all the other textures and buffers. The difference is that you can now scatter (not only gather) to some of them. In another thread here we mention an example of writing to a buffer in a PS: bokeh rendering (using an AppendBuffer). Structured buffers are just convenience buffers, and I'd say pretty neat ones.
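To make "scatter" concrete, here is a minimal compute-shader sketch (the buffer name and indexing scheme are invented for the example): each thread writes to a computed location through a UAV, something a read-only SRV never allows.

[code]
// Hypothetical example: scattering writes through a UAV.
// Gathers (reads) were always possible; the writes are the new part.
RWBuffer<float> Output : register(u0);

[numthreads(64, 1, 1)]
void ScatterCS(uint3 id : SV_DispatchThreadID)
{
    // Each thread picks an arbitrary, computed destination.
    uint destination = (id.x * 17u) % 1024u;
    Output[destination] = (float)id.x;
}
[/code]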

The fact that the geometry stage is after tessellation is pretty logical, too. Tessellation doesn't really duplicate or "spawn" geometry, it just "refines" it (there is some topology involved), and a domain shader is really just a kind of vertex shader! You cannot easily duplicate geometry for cube map rendering using tessellation, for example. That's why the GS comes after the DS, and it doesn't matter where the input to the GS comes from.

[quote]There are even more new buffer types; look closer.[/quote]

What are the others?


[quote]The fact that the geometry stage is after tessellation is pretty logical, too. Tessellation doesn't really duplicate or "spawn" geometry, it just "refines" it (there is some topology involved), and a domain shader is really just a kind of vertex shader! You cannot easily duplicate geometry for cube map rendering using tessellation, for example. That's why the GS comes after the DS, and it doesn't matter where the input to the GS comes from.[/quote]

The reason I find it odd is that the general workflow for using a geometry shader was to add new geometry on the fly. It seems logical that you would want to add geometry in the geometry shader and then further refine it with tessellation as necessary. Take, for example, the case where you want to transform points into random 3D shapes, something like icosahedrons. If tessellation came after the geometry shader, these could automatically be further refined by the tessellation stage (which of course could still be done in the geometry shader). I'm just trying to understand the benefit of having tessellation before the geometry shader.

[quote]What are the others?[/quote]
-> [url]http://msdn.microsoft.com/en-us/library/ff471359%28v=VS.85%29.aspx[/url]

[quote]The reason I find it odd is that the general workflow for using a geometry shader was to add new geometry on the fly. It seems logical that you would want to add geometry in the geometry shader and then further refine it with tessellation as necessary. Take, for example, the case where you want to transform points into random 3D shapes, something like icosahedrons. If tessellation came after the geometry shader, these could automatically be further refined by the tessellation stage (which of course could still be done in the geometry shader). I'm just trying to understand the benefit of having tessellation before the geometry shader.[/quote]

There is a huge difference between the purposes of the GS and tessellation. Although the GS can be used to implement tessellation algorithms, it can do MUCH more; on the other hand, it cannot do many things as effectively (and as massively parallel) as the tessellator. The GS can spawn new geometry of different types. Tessellation just "refines" geometry, meaning it adds new vertices/edges within the existing primitives. But AFAIK there is no way, using SM5 tessellation, of turning one triangle into two triangles that do [b]not[/b] share their vertices in the topology.

Still, if you want to achieve your workflow, then first expand your points into icosahedrons (without a tessellation stage!) and then [b]feed back[/b] your newly generated geometry into the tessellation stage for refinement/displacement/whatever. Then perhaps continue with yet another GS that "duplicates" them for each side of a cube map at once (or not). There won't be a great performance hit: all data stays on the GPU and the host just issues two draw calls (pixels are rasterised only once, of course, at the very end).
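For what it's worth, a minimal sketch of the first pass of that workflow (struct names invented; a real icosahedron expansion would emit 20 triangles, and the host side would bind a stream-output target and re-submit the captured vertices as a patch list for pass two):

[code]
// Pass 1: expand each input point into placeholder geometry.
// Only one triangle is emitted here to keep the sketch short.
struct GSInput  { float3 centre   : POSITION; };
struct GSOutput { float3 position : POSITION; };

[maxvertexcount(3)]
void ExpandPointGS(point GSInput input[1],
                   inout TriangleStream<GSOutput> stream)
{
    const float r = 1.0f;
    GSOutput v;

    v.position = input[0].centre + float3(-r,  -r,   0.0f); stream.Append(v);
    v.position = input[0].centre + float3( r,  -r,   0.0f); stream.Append(v);
    v.position = input[0].centre + float3(0.0f, r,   0.0f); stream.Append(v);
    stream.RestartStrip();
}
[/code]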

Constant buffers and the other buffer types are not really the same at all. Constant buffers are intended for small amounts of heterogeneous data, while regular buffers are intended for large amounts of homogeneous data.
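Declarations alone make the contrast visible (the names here are made up for the example):

[code]
// Small, heterogeneous, updated by the CPU: a constant buffer.
cbuffer PerFrame : register(b0)
{
    float4x4 ViewProjection;
    float3   CameraPosition;
    float    Time;
};

// Large, homogeneous, indexed in the shader: a regular buffer.
Buffer<float4> InstanceColours : register(t0);
[/code]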

Structured buffers are for when you have a buffer containing a user-defined structure of data that you'd like to look up by index. Without structured buffers, implementing this would require lots of tedious and error-prone format conversions, unpacking, and address calculations in the shader.
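A short HLSL sketch of the difference, assuming an invented 16-byte Particle struct: the raw-buffer version does the addressing and type reinterpretation by hand, while the structured version just indexes whole structs.

[code]
// Hypothetical element type, 16 bytes per element.
struct Particle
{
    float3 position;   // bytes 0-11
    float  age;        // bytes 12-15
};

ByteAddressBuffer          RawParticles        : register(t0);
StructuredBuffer<Particle> StructuredParticles : register(t1);

Particle LoadRaw(uint index)
{
    // Manual address calculation and format conversion.
    uint offset = index * 16;
    Particle p;
    p.position = asfloat(RawParticles.Load3(offset));
    p.age      = asfloat(RawParticles.Load(offset + 12));
    return p;
}

Particle LoadStructured(uint index)
{
    // The compiler does all of the above for you.
    return StructuredParticles[index];
}
[/code]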

pcmaster already mentioned that you can use an AppendStructuredBuffer in a pixel shader to push out data from a subset of your pixels, which I used in a sample to implement a bokeh effect using point sprites. Another example is AMD's order-independent transparency demo, where, instead of writing out pixel colors to a render target, they used atomic operations on buffers to implement per-pixel linked lists.
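A rough sketch of the bokeh idea (the struct layout, register slot, and brightness threshold are all invented for the example; the appended records would later be drawn as point sprites, e.g. via DrawInstancedIndirect):

[code]
struct BokehPoint
{
    float2 position;
    float3 colour;
    float  size;
};

// In D3D11 pixel-shader UAV slots share space with render targets,
// so with one RTV bound the UAV goes in slot 1.
AppendStructuredBuffer<BokehPoint> BokehBuffer : register(u1);

float4 MainPS(float4 pos : SV_Position) : SV_Target
{
    float3 colour = float3(1.0f, 1.0f, 1.0f); // shade the pixel as usual

    // Only sufficiently bright pixels emit a bokeh record.
    if (dot(colour, float3(0.299f, 0.587f, 0.114f)) > 0.9f)
    {
        BokehPoint b;
        b.position = pos.xy;
        b.colour   = colour;
        b.size     = 4.0f;
        BokehBuffer.Append(b);   // scatter from the pixel shader
    }
    return float4(colour, 1.0f);
}
[/code]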

The geometry shader is directly tied to both the stream-out and rasterization stages, both of which require fully-formed primitives and not un-tessellated patches. Also, the most common and well-suited use cases for geometry shaders are generating fins, generating point sprites, and rendering geometry to multiple cube map faces/shadow map cascades in a single draw call. You would never want to do any of those things before tessellation. You also wouldn't want to just expand points into arbitrary geometry: expansion in a geometry shader can be very expensive, and you want to minimize the number of output vertices as much as possible.
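The cube map case is worth sketching, since it shows why fully-formed triangles are needed: the GS re-emits each one six times, with SV_RenderTargetArrayIndex routing each copy to a face (the constant buffer layout is assumed for the example).

[code]
cbuffer CubeConstants : register(b0)
{
    float4x4 CubeViewProj[6];   // one view-projection per face
};

struct GSInput  { float4 worldPos : POSITION; };
struct GSOutput
{
    float4 position : SV_Position;
    uint   face     : SV_RenderTargetArrayIndex;
};

[maxvertexcount(18)]
void CubeMapGS(triangle GSInput input[3],
               inout TriangleStream<GSOutput> stream)
{
    for (uint face = 0; face < 6; ++face)
    {
        GSOutput v;
        v.face = face;   // route this copy to the matching cube face
        for (uint i = 0; i < 3; ++i)
        {
            v.position = mul(input[i].worldPos, CubeViewProj[face]);
            stream.Append(v);
        }
        stream.RestartStrip();
    }
}
[/code]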

One further convenience with the StructuredBuffer is that it can be used with the Append/Consume functionality on whole structures. So if you have a particular data structure that you are using (as a particle state, for example), then you can append and consume complete structures directly instead of trying to manage the individual pieces of data.
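A short compute-shader sketch of that pattern, with an invented ParticleState struct (a real version would dispatch exactly as many threads as there are live particles, since consuming from an empty buffer is undefined):

[code]
struct ParticleState
{
    float3 position;
    float3 velocity;
    float  life;
};

ConsumeStructuredBuffer<ParticleState> CurrentParticles : register(u0);
AppendStructuredBuffer<ParticleState>  NextParticles    : register(u1);

[numthreads(64, 1, 1)]
void UpdateParticlesCS(uint3 id : SV_DispatchThreadID)
{
    ParticleState p = CurrentParticles.Consume();  // whole struct in

    p.position += p.velocity * 0.016f;
    p.life     -= 0.016f;

    if (p.life > 0.0f)
        NextParticles.Append(p);                   // whole struct out
}
[/code]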

In addition to the other points made about the geometry shader, don't forget that it can also reduce data as well as introduce it. After tessellation is performed, if you want to cull unnecessary primitives before they get rasterized, the geometry shader can make the decision not to pass a primitive along. The GS can also change the topology type, so even if you tessellate triangles, you can still convert them to lines or points if you want. It is one of the more flexible pipeline stages, and can be used for some unconventional and/or creative algorithms.
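A toy GS showing both ideas at once (the cull test is invented for the example): it drops a primitive by simply not emitting it, and changes topology by taking triangles in and streaming lines out.

[code]
struct VSOut { float4 position : SV_Position; };

[maxvertexcount(6)]
void WireframeCullGS(triangle VSOut input[3],
                     inout LineStream<VSOut> stream)
{
    // Arbitrary cull test: drop triangles entirely behind the eye.
    if (input[0].position.w <= 0.0f &&
        input[1].position.w <= 0.0f &&
        input[2].position.w <= 0.0f)
        return;   // nothing emitted, primitive culled

    // Triangle in, three line segments out: a topology change.
    for (uint i = 0; i < 3; ++i)
    {
        stream.Append(input[i]);
        stream.Append(input[(i + 1) % 3]);
        stream.RestartStrip();
    }
}
[/code]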

